Computational Antibody Papers

    • Authors perform humanization of VHHs and generate experimental data confirming their designs.
    • The protocol involves grafting CDRs 1-3 onto a human framework and then systematically modifying Hallmark/Vernier and other framework residues to make them more human (a minimal sketch follows these notes).
    • Positions 49 and 50 (e.g., E49G, H50L in VHH1): These were generally well-tolerated, allowing for humanization without major impact on binding affinity or stability.
    • Position 52 (e.g., S52W in VHH2): In some cases, changing this residue even improved affinity.
    • Position 42: Humanizing residue F42 to a more human-like amino acid (e.g., F42V) in VHH2 led to a significant reduction in binding affinity. This residue plays a key role in stabilizing the CDR3 loop through interactions with other regions, making it essential for maintaining the bioactive conformation.
    • Position 52 (in some contexts): In VHH1, the mutation G52W led to a loss of binding due to steric clashes, demonstrating that this position can be critical depending on the structural context.
    • They measured binding affinities, expression yields, and purities of the humanized variants. Crystal structures confirmed the effects of humanization on binding; non-canonical disulfides stabilize CDR3.
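
A minimal Python sketch of the grafting-then-back-mutation protocol described above; the region layout and the Hallmark/Vernier position list (42, 49, 50, 52, echoing the examples in these notes) are illustrative, not the paper's exact numbering:

```python
# Sketch: graft VHH CDRs onto a human framework, then enumerate single
# back-mutations at Hallmark/Vernier positions. Illustrative only.

def graft(human_framework: dict, vhh_cdrs: dict) -> str:
    """Assemble FR1-CDR1-FR2-CDR2-FR3-CDR3-FR4 from human framework
    regions and VHH CDRs (both given as region-name -> sequence dicts)."""
    order = ["FR1", "CDR1", "FR2", "CDR2", "FR3", "CDR3", "FR4"]
    parts = {**human_framework, **vhh_cdrs}
    return "".join(parts[region] for region in order)

def candidate_back_mutations(grafted: str, vhh: str, positions):
    """Where the grafted and original VHH sequences disagree at a
    Hallmark/Vernier position, propose reverting to the VHH residue."""
    for pos in positions:
        if grafted[pos] != vhh[pos]:
            yield pos, grafted[pos], vhh[pos]   # e.g. (49, 'G', 'E')

HALLMARK_VERNIER = [42, 49, 50, 52]  # hypothetical position list
```
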
    • Novel humanization method employing diffusion.
    • The model first learns to reconstruct human sequences (with CDRs kept intact): framework residues are noised and then diffused back. The network is then fine-tuned on mouse sequences (a rough sketch of the inference loop follows these notes).
    • There are two flavors of the model: one for nanobodies, another for antibodies.
    • They curate a great dataset from patents with over 300 paired humanized/native sequences.
    • They demonstrate in silico and in vitro that their designs make sense.
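
The inference loop might look roughly like the sketch below: framework residues are corrupted and iteratively denoised while CDR positions stay frozen. `denoise_step` is a hypothetical stand-in for the trained network; its interface is an assumption, not the paper's API.

```python
import random

AA = "ACDEFGHIKLMNPQRSTVWY"

def humanize_by_diffusion(seq, cdr_mask, denoise_step, n_steps=10):
    """Corrupt framework residues, then iteratively denoise them while
    keeping CDR positions (cdr_mask[i] == True) untouched.
    denoise_step(seq, t) -> str stands in for the trained model."""
    noisy = [aa if in_cdr else random.choice(AA)
             for aa, in_cdr in zip(seq, cdr_mask)]
    for t in reversed(range(n_steps)):
        proposal = denoise_step("".join(noisy), t)
        noisy = [aa if in_cdr else new_aa       # never touch the CDRs
                 for aa, new_aa, in_cdr in zip(noisy, proposal, cdr_mask)]
    return "".join(noisy)
```
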
    • An evolution of the ESM model family that scales up in terms of parameters, data, and computational power compared to ESM2, which allows it to improve on sequence, structure, and function representations of proteins.
    • ESM3 is a multimodal, bidirectional transformer that models sequence, structure, and function using discrete token representations for each modality. It merges these representations into a single latent space and is trained with a masked language model objective, allowing it to generate and predict across different modalities.
    • ESM3's largest model has 98 billion parameters.
    • The model was trained with 1.07 × 10²⁴ floating point operations (FLOPs) over a dataset of 771 billion tokens from 2.78 billion proteins.
    • Structural tokens in ESM3 are encoded by a discrete autoencoder that compresses three-dimensional protein structures into a sequence of discrete tokens. This is done by encoding local atomic environments around each amino acid and representing them in a simplified form that captures geometric properties.
    • There are 4,096 structural tokens in total.
    • The structural autoencoder tokenizes protein structures by encoding local neighborhoods around each amino acid into discrete tokens. It uses a geometric attention mechanism that operates in local reference frames, based on bond geometry. This mechanism encodes and reconstructs the atomic structure, supervised by a geometric loss that preserves distances and orientations of bonds and atoms.
    • ESM3 can be used to generate novel sequences/proteins: it generates sequences and structures by prompting the model with sequence or structural tokens. It uses iterative sampling, starting from a fully masked context, where tokens are predicted and unmasked progressively until a full sequence or structure is generated (sketched below). This allows the model to create novel proteins that respect the given prompts or constraints.
    • The model was verified experimentally by generating novel proteins, including a green fluorescent protein that was synthesized and tested for fluorescence in the laboratory. The novel protein had only 58% sequence identity to the nearest known fluorescent protein.
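
The iterative masked sampling can be sketched as below; `model_logits` is a hypothetical stand-in for an ESM3-style network returning per-position logits, and the schedule (committing the most confident positions each step) is one common choice, not necessarily the paper's exact one:

```python
import numpy as np

def iterative_unmask(length, model_logits, vocab_size, steps=8):
    """Start from a fully masked token sequence and repeatedly commit the
    most confident predictions until nothing is masked.
    model_logits(tokens) -> (length, vocab_size) array (hypothetical)."""
    MASK = vocab_size                     # reserve an extra id for <mask>
    tokens = np.full(length, MASK)
    per_step = max(1, length // steps)
    while (tokens == MASK).any():
        logits = model_logits(tokens)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        conf, choice = probs.max(-1), probs.argmax(-1)
        conf[tokens != MASK] = -np.inf    # only fill still-masked slots
        for pos in np.argsort(-conf)[:per_step]:
            if tokens[pos] == MASK:
                tokens[pos] = choice[pos]
    return tokens
```
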
    • Foundational model following in the footsteps of AlphaFold3, attempting prediction of molecular interactions.
    • Model architecture is closely modeled on that of AF3; however, it was not benchmarked against it (nor against ESM3) because of use restrictions.
    • It takes 30 days on 128 A100s to train the model. Back-of-the-envelope Google Colab Pro+ pricing for 128 A100s (which is NOT even a distributed setup) puts that at ca. 120k USD :)
    • Addressing antibodies, they introduce constraints (e.g., a known epitope residue) to help the model out. Adding even a single residue makes a big difference with respect to the baseline, which is tantamount to global docking.
    • A single constraint residue is enough to improve ab-ag complex prediction. The success rate in ‘local’ mode, with just one constrained residue, is about 50% for DockQ >0.21, 30% for DockQ >0.49, and less than 10% for high-quality hits.
    • The success rate in ‘global’ mode, i.e. without constraints, is about 35% for DockQ >0.21, 20% for DockQ >0.49, and less than 5% for high-quality hits (the success-rate bookkeeping is sketched below).
    • So altogether, if I want to ‘hit the epitope’, the model has a ca. 30% success rate.
    • If I want a high-quality ab-ag complex structure, unfortunately it seems that constraints do not help much currently.
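
For reference, the success rates quoted above are just fractions of predictions clearing DockQ quality cutoffs; a trivial helper (the 0.8 cutoff for 'high quality' is my assumption; the other two are the values quoted in these notes):

```python
def success_rates(dockq_scores, thresholds=(0.21, 0.49, 0.8)):
    """Fraction of predicted ab-ag complexes with DockQ above each cutoff."""
    n = len(dockq_scores)
    return {t: sum(s > t for s in dockq_scores) / n for t in thresholds}

# success_rates([0.05, 0.30, 0.55, 0.90])
# -> {0.21: 0.75, 0.49: 0.5, 0.8: 0.25}
```
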
    • GearBind - Novel framework to predict the effect of mutations on an antibody-antigen complex
    • The architecture is graph-based, trained in a contrastive fashion on real atoms and their surroundings versus atoms randomly sampled (from rotamer libraries) within the same environment. They use real proteins from CATH for this purpose. The sampled points serve as ‘negatives’ for contrastive learning, whereas the real ones serve as positives (a loss sketch follows these notes).
    • The method shows improvement on previous datasets: SKEMPI and the Absci HER2 dataset.
    • The authors demonstrated the effectiveness of the method by performing in silico affinity maturation on two existing binders.
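
The contrastive pretraining described above boils down to scoring the observed atoms against rotamer-sampled decoys; a minimal cross-entropy sketch (the graph encoder producing the scores is omitted, and this is my reading of the setup, not GearBind's code):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(score_real, score_negatives):
    """Softmax cross-entropy over [real, neg_1, ..., neg_k]: the observed
    atom placement (index 0) must outscore rotamer-sampled alternatives.
    score_real: scalar tensor; score_negatives: tensor of shape (k,)."""
    logits = torch.cat([score_real.view(1), score_negatives]).unsqueeze(0)
    target = torch.zeros(1, dtype=torch.long)   # class 0 = the real atoms
    return F.cross_entropy(logits, target)

# usage: loss = contrastive_loss(encoder(real_env), encoder(decoy_envs))
```
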
  • 2024-10-24

    Benchmarking antibody generative models

    • generative methods
    • language models
    • binding prediction
    • Study evaluates a number of generative models on datasets of antibodies with reported affinities.
    • The methods tested were: MEAN, dyMEAN, IgBLEND, Ablang, Ablang2, AntiBerty, ESM, Antifold, ESM-IF, AbX, Diffab + their own version of Diffab.
    • Datasets used were the Absci HER2 dataset (100s of binders) and a number of datasets with tens of binders each.
    • All models show some correlation with the affinity data, though a weak one.
    • Adding epitope information is not a game changer, suggesting that the information captured is mostly antibody fitness first and antigen binding second, if at all.
    • Employing structural information helps compared to purely sequence-based approaches (an evaluation sketch follows these notes).
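
Evaluations of this kind usually reduce to rank-correlating a per-antibody model score with measured affinity; a minimal sketch, with `model_score` standing in for any of the models above (e.g. a sequence log-likelihood):

```python
from scipy.stats import spearmanr

def benchmark(model_score, antibodies, affinities):
    """Spearman correlation between model scores and measured affinities.
    model_score: callable sequence -> float (hypothetical interface)."""
    scores = [model_score(ab) for ab in antibodies]
    rho, p_value = spearmanr(scores, affinities)
    return rho, p_value
```
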
    • Novel humanization software, allowing for rapid re-design of both heavy and light chains.
    • Unlike other tools such as Hu-mAb and Sapiens, which humanize heavy and light chains separately, Humatch jointly humanizes both chains, improving stability and reducing the risk of immunogenic epitopes between chains.
    • Humatch consists of three lightweight Convolutional Neural Networks (CNNs). Each CNN is trained for a specific task: one for heavy chains (CNN-H), one for light chains (CNN-L), and one for assessing natural heavy/light chain pairing (CNN-P). The CNNs are designed to output multiclass predictions for identifying human V-genes and classifying chain pairings.
    • The CNNs were trained on data from the Observed Antibody Space (OAS), which includes millions of human and non-human antibody sequences.
    • Humatch's performance was measured through precision-recall and ROC-AUC metrics, achieving near-perfect accuracy in classifying human and non-human sequences. Performance was also tested by humanizing 25 precursor antibodies and comparing the mutations with experimentally derived humanized versions, showing high overlap (77-82%) with experimental designs.
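
A lightweight CNN in the spirit of Humatch's CNN-H/CNN-L/CNN-P might look like the sketch below; all sizes (alphabet size, filter count, kernel width) are assumptions for illustration, not Humatch's actual hyperparameters:

```python
import torch.nn as nn

class ChainCNN(nn.Module):
    """One-hot encoded, aligned antibody chain in; multiclass logits out
    (e.g. human V-gene classes, or paired/unpaired for a CNN-P analogue)."""
    def __init__(self, n_classes, n_aa=21, n_filters=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_aa, n_filters, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),   # pool over sequence positions
            nn.Flatten(),
            nn.Linear(n_filters, n_classes),
        )

    def forward(self, x):              # x: (batch, n_aa, aligned_length)
        return self.net(x)
```
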
    • Authors describe how, using a structure predictor, one can re-design the binding site while maintaining binding.
    • They use the proprietary GaluxDesign method, which achieves 1.4 Å Cα RMSD in predicting CDR-H3 loop structures, leveraging a unique scoring metric (G-pass rate) that assesses both confidence and structural consistency for antibody design.
    • The method outperforms AlphaFold 2.3, ABlooper, and ImmuneBuilder in predicting CDR-H3 loop structures, with significantly lower RMSD values (1.4 Å compared to 2.4-3.7 Å), particularly on a more challenging, time-separated dataset.
    • Binding propensity to HER2 was evaluated on a large mutant library, scoring each novel loop in complex with HER2 via the G-pass rate, which outperformed AlphaFold's PAE-based scoring: an AUROC of 0.758 compared to 0.529 for AlphaFold.
    • Novel antibody sequences were designed by predicting six CDR loops in antibody-protein complexes, using GaluxDesign models. These designs were experimentally tested, achieving high success rates, including a 13.2% success rate for HER2 antibody designs using yeast display methods.
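
The G-pass rate itself is proprietary and not fully specified here; one plausible reading of "confidence plus structural consistency" over repeated predictions is sketched below, with both cutoffs being my assumptions rather than Galux's values:

```python
import numpy as np

def pass_rate(confidences, pairwise_rmsd, conf_cut=0.8, rmsd_cut=2.0):
    """Fraction of N sampled predictions that are both confident and
    consistent with the other samples. confidences: (N,); pairwise_rmsd:
    (N, N) Cα RMSD between samples. Cutoffs are illustrative guesses."""
    consistent = np.median(pairwise_rmsd, axis=1) < rmsd_cut
    confident = np.asarray(confidences) > conf_cut
    return float((confident & consistent).mean())
```
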
    • Authors demonstrate that, using scores from DeepAb, one can identify mutations in an antibody that improve affinity and a series of other properties.
    • The authors used the DeepAb structure prediction model to rank mutations based on their impact on structure prediction confidence, leading to the design of 200 novel anti-hen egg lysozyme (HEL) antibody variants.
    • Single-point mutations from a deep mutational scanning (DMS) dataset (Warszawski et al.) were combined into multi-mutation variants (up to 7 mutations), and these variants were selected based on DeepAb scores for experimental testing.
    • The designed variants were expressed and tested for thermostability, colloidal stability, and binding affinity to HEL.
    • A large percentage of the variants showed improved thermostability (91%) and affinity (94%), with 10% showing significant increases in binding affinity.
    • A subset of 27 high-performing variants was further tested for developability characteristics, including nonspecific binding, aggregation propensity, and self-association, ensuring their practical usability.
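
The combine-and-rank step is easy to sketch: merge compatible single mutants from the DMS into multi-mutants (up to 7 mutations), score each with the structure predictor's confidence, and keep the top 200. `confidence_score` is a hypothetical stand-in for the DeepAb-derived metric:

```python
from itertools import combinations

def build_variants(wildtype, singles, max_k=7):
    """Combine single-point mutations (pos, new_aa) into variants with at
    most one mutation per site and up to max_k mutations per variant."""
    for k in range(1, max_k + 1):
        for combo in combinations(singles, k):
            if len({pos for pos, _ in combo}) == k:
                seq = list(wildtype)
                for pos, aa in combo:
                    seq[pos] = aa
                yield "".join(seq)

def top_by_confidence(variants, confidence_score, n=200):
    """Rank variants by a structure-prediction confidence score."""
    return sorted(variants, key=confidence_score, reverse=True)[:n]
```
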
    • Novel language model applied to predicting antibody binding affinity in an antigen-free manner.
    • AntiFormer is a graph-based large language model that combines sequence information with graph structures to predict antibody binding affinity. Its dual-flow architecture includes a transformer-based encoder for sequence features and a graph convolutional network (GCN) for capturing structural relationships (from sequence!), offering enhanced prediction accuracy.
    • AntiFormer was compared against advanced models like AntiBERTy and AntiBERTa, as well as basic transformer models with 6 and 12 layers, demonstrating superior performance across all evaluation metrics, though not by a huge margin.
    • The model's performance was evaluated using affinity datasets, including the Observed Antibody Space (OAS) database and an additional dataset containing 104,972 antibody sequences with annotated affinity values, highlighting its accuracy and efficiency.
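
A toy version of the dual-flow idea is sketched below: a transformer encoder provides the sequence flow, a single dense GCN layer (adjacency times features times weights) provides the graph flow over a sequence-derived adjacency, and the pooled features are fused for an affinity head. All dimensions are assumptions, and this illustrates the idea rather than AntiFormer's actual architecture:

```python
import torch
import torch.nn as nn

class DualFlow(nn.Module):
    """Toy dual-flow affinity regressor: transformer over residue tokens
    plus one GCN layer over a (batch, L, L) normalized adjacency matrix."""
    def __init__(self, vocab=22, d=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.gcn = nn.Linear(d, d)     # weight matrix of the GCN layer
        self.head = nn.Linear(2 * d, 1)

    def forward(self, tokens, adj):
        # tokens: (B, L) residue ids; adj: (B, L, L) normalized adjacency
        x = self.embed(tokens)                                   # (B, L, d)
        seq_feat = self.encoder(x).mean(dim=1)                   # sequence flow
        graph_feat = torch.relu(adj @ self.gcn(x)).mean(dim=1)   # graph flow
        return self.head(torch.cat([seq_feat, graph_feat], dim=-1))
```
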