Decision-Making AI For The Enterprise

InstaDeep delivers AI-powered decision-making systems for the Enterprise. With expertise in both machine intelligence research and concrete business deployments, we provide a competitive advantage to our customers in an AI-first world.


Building AI systems for industry

Leveraging its expertise in GPU-accelerated computing, deep learning and reinforcement learning, InstaDeep has built AI systems to tackle the most complex challenges across a range of industries and sectors.

Biology

Logistics

Electronic Design

Energy

Latest

Our latest updates from across our channels

InstaDeep unveils near-exascale supercomputer ‘Kyber,’...

on Oct 24, 2024 | 02:24pm

Kyber increases InstaDeep’s compute power 10x, facilitating AI innovation. Powered by NVIDIA H100 GPUs, with ~0.5 exaFLOPs in FP16, it ranks among the top 100 global AI clusters...

AI Day 2024: InstaDeep Showcases Innovations in Biology and AI as Part of the BioNTech Innovation Series

on Oct 09, 2024 | 10:40am

On 1st October 2024, over 100 members of the biotech community – including scientists, researchers, engineers and journalists – gathered at London’s tech hub, CodeNode, for...


BioNTech Highlights AI Capabilities and R&D Use Cases at Ina...

on Oct 01, 2024 | 01:00pm

Provides updates on BioNTech’s strategy to scale and deploy AI capabilities across the immunotherapy pipeline. Highlights InstaDeep’s new near exa...

Sharing knowledge and accelerating innovation at Deep Learning In...

on Sep 11, 2024 | 05:56pm

Amid the vibrant energy of the Deep Learning Indaba 2024, InstaDeep proudly reinforced its commitment to advancing AI innovation across Africa by supporting a new edition of this...

InstaDeep introduces DeepPCB Pro: An AI-powered PCB design tool...

on Sep 12, 2024 | 03:50pm

San Francisco, CA – 12th September 2024 – InstaDeep, in collaboration with Google Cloud, unveiled today the advanced version of its AI-powered Printed Circuit Board (PCB) desi...

InstaDeep engages with the PCB design community to explore how AI...

on Jul 24, 2024 | 12:07pm

InstaDeep's goal of using AI to enhance speed and efficiency in the PCB design landscape is moving forward at pace. Our team recently showcased DeepPCB, our advanced cloud-based d...

InstaDeep at ICML 2024...

on Jul 18, 2024 | 02:04pm

The International Conference on Machine Learning (ICML) stands as one of the premier gatherings in machine learning, bringing together the brightest minds from academia, industry, and research instit...

Syngenta and InstaDeep collaborate to accelerate crop seeds trai...

on Jun 18, 2024 | 01:30pm

This collaboration further strengthens the Syngenta Seeds R&D engine for speed, precision, and power, accelerating trait advancement. Large Language Models (LLMs) aim to reduce...

InstaDeep presents six papers at ICLR 2024...

on May 07, 2024 | 10:04am

InstaDeep maintains its strong commitment to open research with six papers accepted for presentation at the 2024 ICLR conference being held in Vienna this week. The accepted pa...

Building the next generation of AI models to decipher human biolo...

on Apr 30, 2024 | 03:41pm

The human genome, containing the entirety of our genetic blueprint, holds the keys to understanding the intricate workings of our cells and bodies. How our cells respond to signals,...

Research

Protein Sequence Modelling with Bayesian Flow Networks

Timothy Atkinson | Thomas D. Barrett | Scott Cameron | Bora Guloglu | Matthew Greenig | Louis Robinson | Alex Graves | Liviu Copoiu | Alexandre Laterre

Sep 2024
Application of a Bayesian Flow Network (BFN) to protein sequence modelling. BFNs update the parameters 𝜃 of a data distribution using Bayesian inference, given a noised observation y of a data sample. When applied to protein sequence modelling, the distribution over the data is given by separate categorical distributions over the possible tokens (all amino acids plus special tokens) at each sequence index. During training, ‘Alice’ knows a ground-truth data point x, so 𝜃 can be updated directly using noised observations of x. ‘Bob’ trains a neural network to predict the ‘sender’ distribution from which Alice samples these observations at each step (i.e. to predict the noised ground truth). During inference, when Alice is not present, Bob replaces noised observations of the ground truth with samples from the ‘receiver’ distribution predicted by the network.
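As a concrete illustration of the update rule in the caption, here is a minimal NumPy sketch of the discrete-data Bayesian update, following the general Bayesian Flow Network formulation (sender samples y ~ N(α(K·e_x − 1), αK·I); update 𝜃′_k ∝ e^{y_k}·𝜃_k). The token count, accuracy schedule, and variable names are illustrative assumptions, not details of this paper’s implementation.

```python
import numpy as np

def sender_sample(x, K, alpha, rng):
    """Alice's sender: a noised observation y of the ground-truth class x,
    drawn from N(alpha * (K * e_x - 1), alpha * K * I)."""
    e_x = np.eye(K)[x]                          # one-hot encoding of the token
    return alpha * (K * e_x - 1.0) + rng.normal(scale=np.sqrt(alpha * K), size=K)

def bayesian_update(theta, y):
    """Bayesian update of categorical parameters: theta'_k ∝ exp(y_k) * theta_k."""
    logits = np.log(theta) + y
    logits -= logits.max()                      # for numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
K = 25                                          # e.g. 20 amino acids + special tokens (assumed)
theta = np.full(K, 1.0 / K)                     # uniform prior at one sequence index
x = 7                                           # ground-truth token (training only)
for alpha in (0.2, 0.4, 0.8):                   # illustrative accuracy schedule
    theta = bayesian_update(theta, sender_sample(x, K, alpha, rng))
print(theta.argmax())                           # theta now concentrates on x
```

At inference time the same update runs unchanged; only the source of y switches from Alice’s sender to samples from the network-predicted receiver distribution.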

SMX: Sequential Monte Carlo Planning for Expert Iteration

Edan Toledo | Matthew Macfarlane | Donal John Byrne | Siddarth Singh | Paul Duckworth | Alexandre Laterre

ICML 2024 Jul 2024
Figure 1: Diagram depicting a representation of SMX search from left to right. N rollouts are executed in parallel according to π_θ (the sampling policy β). At each step in the environment the particle weights are adjusted, indicated by the particle sizes. We depict two resampling zones where particles are resampled (favouring higher weights) and weights are reset. Finally, an improved policy π′ = Î_β π is constructed from the initial actions of the remaining particles, furthest to the right. This improved policy is then used to update π_θ.
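The particle mechanics in the caption follow standard sequential Monte Carlo, sketched minimally below: weights accumulate step by step, particles are resampled (favouring higher weights) when the effective sample size drops, weights reset after resampling, and the improved policy is read off the initial actions of the surviving particles. The per-step reweighting signal and the action space here are placeholder assumptions, not SMX’s actual value estimates.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Pick particle indices in proportion to their weights (systematic scheme)."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return np.minimum(idx, n - 1)               # guard against rounding at 1.0

rng = np.random.default_rng(0)
n_particles, n_actions = 8, 4                   # illustrative sizes
init_actions = rng.integers(0, n_actions, size=n_particles)  # each rollout's first action
log_w = np.zeros(n_particles)                   # uniform initial weights

for step in range(10):                          # parallel rollout steps
    log_w += rng.normal(size=n_particles)       # placeholder per-step log-potentials
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < n_particles / 2:  # resample when the ESS drops
        keep = systematic_resample(w, rng)      # favours higher-weight particles
        init_actions = init_actions[keep]       # survivors keep their root action
        log_w[:] = 0.0                          # weights are reset, as in the figure

# improved policy: empirical distribution over surviving initial actions
pi_improved = np.bincount(init_actions, minlength=n_actions) / n_particles
print(pi_improved)
```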

Multi-Objective Quality-Diversity for Crystal Structure Prediction

Hannah Janmohamed | Marta Wolinska | Shikha Surana | Aaron Walsh | Thomas Pierrot | Antoine Cully

GECCO 2024 Jul 2024

Overconfident Oracles: Limitations of In Silico Sequence Design Benchmarking

Shikha Surana | Nathan Grinsztajn | Timothy Atkinson | Paul Duckworth | Thomas D. Barrett

ICML 2024 workshop Jul 2024

Generative Model for Small Molecules with Latent Space RL Fine-Tuning to Protein Targets

Ulrich A. Mbou So | Qiulin Li | Dries Smit | Arnu Pretorius | Oliver Bent | Miguel Arbesú

ICML 2024 workshop Jul 2024
Figure 1. Schematic representation of our model’s architecture. A sequence of N tokens is passed as input to our encoder, a transformer model. The output encoded embeddings of shape N × E are either passed directly to the mean and logvar layers (path 1) or first passed to the perceiver resampler layer, which maps the encoded embeddings to a reduced dimension of shape L_S × L_E (path 2). The mean and logvar layers are linear layers applied independently to each sequence dimension. The final reparametrised embeddings are then passed to the decoder transformer model to be used as encoder embeddings in the decoder’s cross-attention layers.
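The mean and logvar layers in the caption feed the standard VAE reparameterisation trick, sketched below. The shapes mirror the caption’s L_S × L_E; the random inputs are stand-ins for the actual encoder outputs, and the dimensions are assumed values for illustration.

```python
import numpy as np

def reparameterise(mean, logvar, rng):
    """z = mean + sigma * eps: the standard VAE reparameterisation trick,
    applied independently at each (sequence, embedding) position."""
    eps = rng.normal(size=mean.shape)
    return mean + np.exp(0.5 * logvar) * eps

rng = np.random.default_rng(0)
L_S, L_E = 32, 64                       # assumed reduced latent shape (perceiver resampler output)
mean = rng.normal(size=(L_S, L_E))      # stand-in for the mean linear layer's output
logvar = rng.normal(size=(L_S, L_E))    # stand-in for the logvar linear layer's output
z = reparameterise(mean, logvar, rng)   # fed to the decoder's cross-attention layers
print(z.shape)                          # (32, 64)
```

Sampling z this way keeps the latent stochastic while leaving the mean and logvar layers differentiable, which is what lets the latent space later be fine-tuned with RL against protein targets.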

Should we be going MAD? A Look at Multi-Agent Debate Strategies for LLMs

Andries Petrus Smit | Nathan Grinsztajn | Paul Duckworth | Thomas D Barrett | Arnu Pretorius

ICML 2024 Jul 2024
Recent advancements in large language models (LLMs) underscore their potential for responding to inquiries in various domains. However, ensuring that generative agents provide accurate and reliable answers remains an ongoing challenge. In this context, multi-agent debate (MAD) has emerged as a promising strategy for enhancing the truthfulness of LLMs. We benchmark a range of debating and prompting strategies to explore the trade-offs between cost, time, and accuracy. Importantly, we find that multi-agent debating systems, in their current form, do not reliably outperform other proposed prompting strategies, such as self-consistency and ensembling using multiple reasoning paths. However, when performing hyperparameter tuning, several MAD systems, such as Multi-Persona, perform better. This suggests that MAD protocols might not be inherently worse than other approaches, but that they are more sensitive to different hyperparameter settings and difficult to optimize. We build on these results to offer insights into improving debating strategies, such as adjusting agent agreement levels, which can significantly enhance performance and even surpass all other non-debate protocols we evaluated. We provide an open-source repository to the community with several state-of-the-art protocols together with evaluation scripts to benchmark across popular research datasets.
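For readers unfamiliar with the protocol, the sketch below shows a multi-agent debate loop in its simplest form: independent first answers, then rounds in which each agent revises after seeing its peers’ answers, and a final majority vote. The Agent callable is a hypothetical stand-in for an LLM call, not the paper’s actual interface.

```python
from collections import Counter
from typing import Callable, List

# An agent maps (question, peer answers from the previous round) -> its new answer.
# Hypothetical stand-in for an LLM call; not the paper's actual interface.
Agent = Callable[[str, List[str]], str]

def multi_agent_debate(agents: List[Agent], question: str, rounds: int = 3) -> str:
    """Minimal multi-agent debate: independent first answers, then each round
    every agent revises its answer after seeing all peers' previous answers."""
    answers = [agent(question, []) for agent in agents]
    for _ in range(rounds - 1):
        answers = [agent(question, answers) for agent in agents]
    # Aggregate by majority vote over the final round's answers.
    return Counter(answers).most_common(1)[0][0]
```

The hyperparameters the abstract flags as decisive, such as the number of rounds and how readily agents agree with their peers, all live inside this loop and the agents’ revision behaviour.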

In the Press

Partners