Benchmarking of a proprietary antibody design algorithm.
The method generates novel antibodies against a target under a specific epitope constraint, and can also be used to re-design existing antibodies.
Altogether they find good-affinity scFv binders for six targets for which an antibody–antigen complex was available in the PDB, such as PD-1 and HER2.
The de novo antibody design methods were computationally benchmarked against a curated set of 32 experimentally resolved antibody–antigen complexes using metrics like the G-pass rate and orientation recovery (measured by Fw RMSD). This allowed the authors to compare their method (across different versions) against other approaches.
They compare against RFAntibody and dyMEAN, but only on the computational tasks - reproducing the orientation of an existing antibody.
Several rounds of biopanning are employed to enrich high-affinity, target-specific binders from a pre-designed library; no new mutations are introduced in the process.
They benchmark developability properties such as monomericity, yield and polyreactivity to show that their antibodies have favorable profiles.
They demonstrate that most of their designed binders have less than 50% H3 sequence identity to antibodies in the PDB.
AbMAP - a language-model transfer-learning framework with applications to antibody engineering.
Authors address a dichotomy in language models for antibodies: one either uses a bare-bones protein model like ESM or an antibody-only model like AntiBERTy/IgLM. General protein models will not capture the hypervariability of the CDRs, whereas antibody models focus too much on the framework. As a solution, they focus solely on the CDRs plus their flanking regions.
They show applicability to three off-the-shelf models, with applications to structure-template finding as well as low-N generative modeling.
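The CDR-plus-flanks idea can be sketched as a simple span-extraction step. The function below is an illustrative sketch, not AbMAP's actual code; the CDR index ranges and flank width are assumed inputs (e.g. coming from an antibody numbering tool).

```python
def extract_cdr_windows(seq, cdr_ranges, flank=3):
    """Return the CDR spans plus `flank` residues on each side.

    seq        -- full antibody chain sequence
    cdr_ranges -- list of (start, end) 0-based half-open CDR indices,
                  e.g. from an antibody numbering tool (assumed input)
    flank      -- number of flanking residues kept on each side
    """
    windows = []
    for start, end in cdr_ranges:
        lo = max(0, start - flank)          # clamp at sequence start
        hi = min(len(seq), end + flank)     # clamp at sequence end
        windows.append(seq[lo:hi])
    return windows

# toy example with made-up CDR indices
print(extract_cdr_windows("EVQLVESGGGLVQPGGSLRLSCAASGFTFS", [(25, 30)], flank=2))
```

Only these windows (rather than the full chain) would then be fed to the underlying protein language model.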
Benchmarking of the structure prediction/docking and co-folding methods for antibody design
Authors measure the impact of antibody-antigen model quality on the success rate of epitope prediction and antibody design.
For both epitope prediction and antibody design they use the DockQ score as a proxy measure - for epitope prediction they call success when DockQ exceeds 0.23; for antibody design they use a stricter threshold of 0.49.
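These two cutoffs correspond to the standard DockQ quality bands ('acceptable' starts at 0.23, 'medium' at 0.49). A minimal helper expressing the paper's success criteria, assuming the DockQ value itself is computed by an external tool:

```python
def design_success(dockq: float, task: str) -> bool:
    """Classify a model as successful under the paper's DockQ cutoffs.

    task: "epitope" uses the 'acceptable' cutoff (DockQ > 0.23),
          "design"  uses the stricter 'medium' cutoff (DockQ > 0.49).
    The DockQ score is assumed to come from an external tool.
    """
    cutoffs = {"epitope": 0.23, "design": 0.49}
    return dockq > cutoffs[task]

print(design_success(0.30, "epitope"))  # True
print(design_success(0.30, "design"))   # False
```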
Using these measures, AlphaFold3 comes out on top, succeeding roughly 47% of the time.
They introduce an approach where ProPOSE and ZDOCK decoys are refined using AlphaFold. With this combined protocol they reach success rates of 35% for epitope mapping and 30% for antibody design.
Novel inverse folding algorithm, studying the effect of pretraining on the effectiveness of antibody design
Authors test multiple inverse-folding regimens: pretraining on general proteins, protein–protein interfaces, and antibody–antigen interfaces, and likewise fine-tuning on each of these.
They use only the backbone atoms (N, Cα, C), with special provisions for Cβ.
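One common provision for Cβ in backbone-only models is to place a "virtual" Cβ from ideal tetrahedral geometry using N, Cα and C. The sketch below uses the linear-combination constants popularized by trRosetta/ProteinMPNN-style code; treat the exact recipe as an assumption about, not a statement of, what these authors did.

```python
def cross(u, v):
    """3D cross product on plain tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def virtual_cbeta(n, ca, c):
    """Place an idealized C-beta from backbone N, CA, C coordinates.

    Constants are the trRosetta-style ones (an assumption here);
    they encode the ideal tetrahedral geometry at the alpha carbon.
    """
    b = tuple(ca[i] - n[i] for i in range(3))    # N  -> CA vector
    cc = tuple(c[i] - ca[i] for i in range(3))   # CA -> C  vector
    a = cross(b, cc)                             # normal to the N-CA-C plane
    return tuple(-0.58273431*a[i] + 0.56802827*b[i]
                 - 0.54067466*cc[i] + ca[i] for i in range(3))
```

With reasonable backbone geometry the resulting Cα–Cβ distance comes out close to the ideal ~1.53 Å bond length, which is a quick sanity check on any such implementation.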
They mask a portion of the sequence and have the model predict its amino acids.
The 37% recovery at 100% masking appears slightly lower than ProteinMPNN's on the same task.
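Sequence recovery at a given masking level is just accuracy over the masked positions; a minimal sketch (the 37% and ProteinMPNN figures above come from the paper, not from this code):

```python
def sequence_recovery(native: str, predicted: str, masked_idx) -> float:
    """Fraction of masked positions where the model recovered the
    native amino acid. `masked_idx` holds the positions that were
    hidden from the model during inference."""
    if not masked_idx:
        return 0.0
    hits = sum(native[i] == predicted[i] for i in masked_idx)
    return hits / len(masked_idx)

print(sequence_recovery("EVQLVE", "EVQLVD", [2, 5]))  # 0.5: one of two masked sites recovered
```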
Pretraining on antibodies still carries signal for antibody–antigen complexes, showing the value of such pretraining.
Novel pipeline for computational protein design of nanobodies
Several tools are collated and adjusted to the nanobody case - IgFold for structure prediction, HDOCK for docking, and ABDESIGN, DiffAb and dyMEAN for backbone/sequence prediction.
They chiefly perform computational validation, reporting RMSD/DockQ (re-docking) and amino acid recovery. Results indicate that focusing on nanobodies provides a benefit.
The entire pipeline can be used for de novo design and optimization.
Novel method for nanobody sequence re-design using quite a small network.
The model was pre-trained using a large-scale collection of nanobody sequences from the INDI dataset, heavy-chain antibody sequences from the OAS, and antibody complex structures from SAbDab. For fine-tuning, affinity data was generated computationally: 17,500 nanobody–antigen interaction data points, 7,500 produced via the ANTIPASTI model and 10,000 through random pairing, with a CD45 patent dataset used for testing. So none of the fine-tuning affinity labels are real, experimentally measured affinities.
NanoGen uses a two-stage training framework with a shared encoder-decoder architecture based on CNN layers that learns sequence patterns via a Masked Language Modeling task. During generation, a guided discrete diffusion process, augmented with Discrete Bayesian Optimization, is employed to refine the sequence outputs for enhanced binding affinity.
The model was tested using sequence recovery (REC) and binding affinity improvement (pKD improvement). Benchmarking involved comparing NanoGen against baseline models such as ESM-2 650M, AbLangHeavy, and nanoBERT under both random masking and CDR-specific masking strategies on the CD45 patent dataset.
A proposal for making antibody patent claims reasonable via mutational scanning.
If you develop a therapeutic antibody, you want to claim a space around it so that no one piggybacks off your effort by making a single substitution.
If you claim a 'homology space' around your mAbs, then even a small number of substitutions can circumvent a 90-95% sequence-identity claim on either the CDRs or the variable region.
Claiming that you own all antibodies that bind some protein (as Amgen did with PCSK9) is too broad. That runs into the 'enablement' requirement of patents: a claim must allow a skilled person to reproduce the invention. If you disclose a handful of antibodies against PCSK9, you do not actually provide a way to make 'all others'.
Authors propose to support broader claims by making point mutations at strategic paratope positions in the CDRs and characterizing the resulting binders. For a single lead you are looking at a ballpark of 1,000 mutants, which is experimentally feasible. This would give hard data on a broad spectrum of binders around your candidate, affording wider protection.
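The ~1,000-mutant ballpark follows from simple arithmetic: scanning each chosen paratope position against the 19 alternative amino acids gives positions × 19 variants, so ~50 strategic positions land near 1,000. A sketch of the enumeration, with the position set as an assumption:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def single_point_mutants(seq, positions):
    """Yield every single-substitution variant of `seq` at the
    given positions (19 alternatives per position)."""
    for i in positions:
        for aa in AMINO_ACIDS:
            if aa != seq[i]:
                yield seq[:i] + aa + seq[i + 1:]

cdr = "GFTFSSYAMS"  # toy CDR sequence, not from the paper
variants = list(single_point_mutants(cdr, range(len(cdr))))
print(len(variants))  # 10 positions x 19 substitutions = 190
```

Scaling the same count to roughly 50 paratope positions gives 50 × 19 = 950 variants, matching the authors' order-of-magnitude estimate.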
Authors curated a dataset of antigen-specific antibody sequences and fine-tuned a generic protein language model (the base model is not specified) on it.
The dataset appears to be comprised mostly of PLAbDab and CoV-AbDab, so it is heavily biased towards COVID.
Antibodies are generated by prompting the model with the antigen sequence and generating the antibody conditioned on it.
Authors tested the generated antibodies in the lab, including against COVID antigens but also against some antigens less prevalent in the training set, and they found binders.