CONTEXT

From human health to climate change, many of our most pressing issues depend on our ability to understand and influence biology. It is one of the greatest challenges faced by humanity, both essential to our survival and highly complex.

PROBLEM

Our inability to predict the behavior of biological processes seriously hinders progress in bioengineering and biomedical research: we still cannot precisely estimate the impact of a biological perturbation on outcomes.
In bio-pharma, this means pre-clinical research is a weak predictor of clinical outcomes, both in terms of adverse reactions and efficacy.
The result is a 90% attrition rate for hit molecules, an average of 12 years from hit to market, and an average cost of $2.6B per marketed therapy.

SOLUTION


Recent advances in AI and bioengineering are, however, opening the door to a revolution in life science research.
Our goal is to push that revolution forward: we want to make research fast and predictive.
To achieve that goal, we are building a new type of integrated platform, leveraging cutting-edge deep learning and bioengineering methods to provide combined high-fidelity in-vitro and in-silico models of human biology.

HOW ARE WE BUILDING IT?


We believe true progress in our field can only be achieved with access to high-quality, self-generated data and continuous monitoring.
This is why, unlike many other AI-enabled drug discovery companies, we iterate between experimental biology (wet lab) and computational modeling (dry lab), testing our results in a continuous, confirmatory loop.
We develop bio-organisms in-house, from human tissue research models to therapeutic cells.
By combining dry and wet lab capabilities, we can more confidently generate actionable insights on biology for the rapid discovery of novel treatments and innovative products.
Building a platform that makes programming cells easier means improving the processes of performing, testing and modeling genetic modifications.
Our platform builds upon induced pluripotent stem cell (iPSC), genetic engineering and monitoring technologies to perform and test the outcome of biological interventions.


Modeling relies on a specifically designed pipeline of deep learning and programming technologies.
We generate computer representations of cells and tissues, designed to simulate biological processes and suggest genetic perturbations.

PRODUCT STATUS


In-silico modelling platform: data extraction and aggregation models = 20%; deep learning pipeline in early development.
Software application: visual reactive prototype = 20%; backend system = 5%; frontend in early development.
Biological disease models platform: cell and organoid characteristics, and specific needs/improvements over existing models, are being evaluated through end-user interviews. Development starts in the coming months in partnership with two academic partners: the Institute for Regenerative Medicine and Biotherapy and the organ-on-chip lab at Institut Pasteur.

BUSINESS MODEL


If there is one thing recent events have taught us, it is that collaboration is an absolute necessity, a critical tool for human progress and survival.
We don't plan on reaching our goals alone: our business model is designed with collaborative, distributed innovation at its core.
We believe this approach, bringing together biotechnology, computational analytics and large-scale collaborations in a coherent system, will enable researchers to develop the next generation of bio-organisms, gain predictive power and radically transform biology.
Our model revolves around:
Technology Platform access fees
Shared IP through R&D collaborations
Licensing partnerships
Developing our own products

We focus our efforts on building the right end-to-end platform for biology researchers by collaborating on R&D with academic labs, biotechs and large pharmaceutical companies.

FUTURE PLANS


Bio-Pharma Models
- In-silico models
- Biological models
- Predictive models in academic research
- Predictive models in pre-clinical studies
- Predictive models in clinical trials & precision medicine

Bio-Pharma Therapy
- Cell & gene therapy
- Regenerative medicine
- Biotechnology

DEFENSIBILITY STRATEGIES


01 BIO-ENGINEERING
We create complex engineered cells and tissues for a variety of applications, from academic research to clinical trials: a highly defensible product in terms of expertise and IP protection.
02 AI & BIOINFORMATICS
Our graph analytics and deep learning platform is built on in-house research and development: it leverages combined expertise in biology, in-silico modelling, deep learning and graph analytics to push the boundaries of what can be achieved. Most parts of our pipeline are proprietary, providing strong technological defensibility.
04 CHIP TECHNOLOGY
05 LEGAL & REGULATORY
06 ECOSYSTEM
We direct our efforts towards the development of an ecosystem business model. Part of that work goes into building a large network of academic and corporate partners. The platform itself is designed to benefit from economies of scale and to sustain multiple research projects concurrently.

TEAM


Jimmy Le Vagueresse
Jimmy has worked on both the scientific and commercial sides of the pharmaceutical industry. He is a former member of the Market Access department at Bristol Myers Squibb and a former life science strategy consultant.
He carried out strategic consulting, pharmaceutical pipeline management and market access missions with several of the world's largest pharmaceutical companies, including Sanofi, BMS, Pfizer and Lundbeck.
He graduated in Neurobiology (BSc) and Therapeutic Bio-engineering (MSc) and worked on research projects at the CNRS, the UK's National Institute for Medical Research and the Collège de France.
He holds degrees from London Business School (MSc; 2 years of PhD studies), ESSEC (MBA) and Keio University (MBA), and is a former member of the Sanofi Chair in Therapeutic Innovation.
He studied and applied graph theory (at UCL) and agent-based simulations (at LBS) in research contexts, and applies deep unsupervised learning methods to biology.
Anthony Lagnado, PhD
Anthony is a cellular and molecular biologist with expertise in stem cells and senescence.
He has contributed significantly to our understanding of the role of mitochondrial and telomere dysfunction during cell senescence. He has worked in several world-renowned institutions, including research on induced pluripotent stem cells at the Institute of Regenerative Medicine, on skin tissue at Newcastle University, and on aging biological processes with Dr. Passos, a pioneer in aging-related diseases at the Mayo Clinic. Among other contributions, he helped develop unique methodologies for analyzing telomere dysfunction using super-resolution microscopy and reporter systems that reveal the dynamics of telomere damage in living cells. He also develops deep learning computer vision methods applied to microscopy.
He holds a Marie Curie PhD in Biology, completed a post-doctorate in cellular senescence, and holds a Master's in Biomedical Engineering.

Johan Estellon PhD
Johan is an expert in genomics and bioinformatics, with a proven track record of delivering and sustaining revenue and profit gains in rapidly changing markets.
He is a senior commercial executive with long experience applying commercial and scientific skills to the marketing and business development of several biotech companies, including Cephalon, APCure, Genostar and Congenica.
He has worked in genomics business development for over 11 years, with a focus on personalized medicine, bioremediation and diagnostics markets and technologies.
He holds a PhD in Bioinformatics, an MSc in Therapeutic Bio-engineering and an MSc in Business Management.


TECHNOLOGICAL PLATFORM


Fig 1. The platform in simplified form: the Biology-to-Computer-Model intervention-learning loop
We have a clear, shared vision of what needs to be achieved. It is no longer a unique vision: a few companies have started to work with similar closed-loop approaches, including Zymergen, Ginkgo Bioworks, Recursion and Insitro.
These companies have been tremendously successful, yet they remain only a handful in a sea of biotechs and AI companies focusing on partial solutions.
We believe there is much more to be done before biology at large can benefit: better integration of computational and bioengineering technologies, improved deep learning algorithms, real progress towards interpretable AI, improved predictivity of stem cell engineering; and, perhaps most importantly, finally allowing academic researchers, biotechs and pharmas to access and participate in a large ecosystem built on these principles.

Our platform builds upon Stem Cell (iPSC) Engineering, Monitoring technologies and Deep Learning modelling to perform and test the outcome of biological interventions.
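
As a rough illustration only, the sketch below expresses the Fig 1 intervention-learning loop in code; every function is a hypothetical placeholder for a wet-lab or dry-lab subsystem, and the toy CellState structure stands in for real omics and imaging readouts.

```python
# A minimal sketch of the intervention-learning loop in Fig 1.
# All function names and the CellState structure are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class CellState:
    """Simplified cell state: a name plus a gene -> normalised expression map."""
    name: str
    expression: dict

def propose_perturbations(model, current, target):
    """Dry lab (placeholder): the in-silico model suggests genetic perturbations
    expected to move `current` towards `target`."""
    return [gene for gene in target.expression if gene not in current.expression]

def run_experiment(perturbations, current):
    """Wet lab (placeholder): apply the perturbations to iPSC-derived cells,
    then measure the resulting state with omics / imaging."""
    measured = dict(current.expression)
    measured.update({gene: 1.0 for gene in perturbations})
    return CellState(name=current.name + "+edit", expression=measured)

def update_model(model, observed):
    """Dry lab (placeholder): refine the in-silico model on the newly generated data."""
    return model

def intervention_learning_loop(model, start, target, rounds=3):
    """Iterate design -> build & test -> learn, as in the confirmatory loop."""
    state = start
    for _ in range(rounds):
        edits = propose_perturbations(model, state, target)  # design
        state = run_experiment(edits, state)                  # build & test
        model = update_model(model, state)                    # learn
    return model, state
```

In practice, the design, build & test, and learn steps correspond respectively to the in-silico models, the iPSC engineering and monitoring stack, and model retraining on the data they generate.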

Biology

Stem Cells
Genetic engineering
Organoids
Multi-Organoids

Instrumentation

Microscopy / Imaging
Multi-Omics
Micro-Fluidics
Chips

Deep Learning & In-silico Models


PHASE I
A. Web-based application: Frontend (Visual Graph & Simulation, Tools), Backend, Distributed Database
B. Whole Cell Modeling: Ontology, Graphs and Deep Learning
C. Deep learning models I: Sequencing Data
D. Deep learning models II: Computer Vision
PHASE II
E. Deep learning models III: Integration & State Prediction
F. Deep learning models IV: State Transformations
G. Deep learning models V: Application to Multi-Cellular Models
H. Integration in a learning loop: Bio-Engineering, Data Generation, Learning and Model Building
PHASE III
I. Deep learning models VI: Cell & Gene Therapy
J. Deep learning models VII: Clinical Predictive Models
- Cohort selection
- Treatment outcome prediction
K. Deep learning models VIII: Precision Medicine
L. Distributed Learning

I. Biology

II. Instrumentation

III. Deep Learning and Computation


The core models we use build computer representations of cells and tissues.
They are designed to simulate biological processes and suggest the biological perturbations needed to reach specific states.
Modelling relies on a purpose-built pipeline of deep learning and programming components.
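
As one hedged illustration of what such a representation could look like, the sketch below encodes a cell as a directed graph of genes and signed regulatory interactions (networkx is an assumed library choice); the gene names, weights and toy propagation rule are illustrative only, not our production pipeline.

```python
# Illustrative graph-based cell representation: genes as nodes, signed and
# weighted regulatory interactions as edges. Values below are toy examples.

import networkx as nx

def build_cell_graph(expression, interactions):
    """expression: {gene: level}; interactions: [(regulator, target, weight)]."""
    g = nx.DiGraph()
    for gene, level in expression.items():
        g.add_node(gene, expression=level)
    for src, dst, weight in interactions:
        g.add_edge(src, dst, weight=weight)
    return g

def propagate_once(g):
    """One toy simulation step: each gene's next level is its current level plus
    the weighted contribution of its regulators (positive = activation,
    negative = repression)."""
    return {
        gene: g.nodes[gene]["expression"]
        + sum(g.nodes[src]["expression"] * data["weight"]
              for src, _, data in g.in_edges(gene, data=True))
        for gene in g.nodes
    }

# Example: a two-gene toy circuit where GENE_A activates GENE_B.
graph = build_cell_graph({"GENE_A": 1.0, "GENE_B": 0.2},
                         [("GENE_A", "GENE_B", 0.5)])
print(propagate_once(graph))  # {'GENE_A': 1.0, 'GENE_B': 0.7}
```

In the full pipeline, learned models rather than a hand-written rule drive the dynamics; the graph is simply the kind of object that ontology- and graph-based whole cell modelling operates on.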

A. A pipeline designed to reduce distance between cell states


Deep learning applied to multi-omics expression data is still in its infancy, but the future is bright.
Many previously untestable hypotheses can now be interrogated as deep learning enables analysis of increasing amounts of data generated by new technologies.
For example, the effects of cellular heterogeneity on basic biology and disease etiology can now be explored by single-cell RNA-seq.
Given a set of observed cell types in control (i.e. real cells) and in simulation, we aim to (a latent-space sketch of these three aims follows this list):
Define the distance between different cells by segmenting over omics and visual expression
Predict the perturbation response of specific cells by training a model that learns to generalize the response of the cells in the training set
Reduce the distance between unwanted cell states and desired cell states
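
As a hedged sketch of these three aims, the functions below operate in a latent space assumed to come from a trained autoencoder over omics profiles (encoder and decoder not shown); the mean-shift approach mirrors published methods such as scGen and is illustrative rather than a description of our exact models.

```python
# Illustrative latent-space sketch of the three aims above. The z_* arrays are
# assumed to be latent representations from a trained autoencoder (not shown).

import numpy as np

def state_distance(z_a, z_b):
    """Aim 1: distance between two cell populations, taken here as the
    Euclidean distance between their latent centroids."""
    return float(np.linalg.norm(z_a.mean(axis=0) - z_b.mean(axis=0)))

def perturbation_shift(z_control, z_perturbed):
    """Aim 2: summarise a perturbation response as the shift between control
    and perturbed populations in latent space; adding this vector to held-out
    control cells predicts their response."""
    return z_perturbed.mean(axis=0) - z_control.mean(axis=0)

def move_towards(z_cells, z_target, step=1.0):
    """Aim 3: reduce the distance between an unwanted state and a desired one
    by shifting cells along the direction of the target centroid."""
    return z_cells + step * (z_target.mean(axis=0) - z_cells.mean(axis=0))
```

Decoding the shifted latent vectors back to expression space would then yield candidate profiles whose corresponding genetic perturbations can be tested in the wet lab.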

Cell states are defined by multi-omics and microscopy, depending on the type of modelling:

I. Proteome, transcriptome, methylome and metabolome can be used at the cell or tissue level

To be completed

II. Microscopy

To be completed

Fig A.

Our aim is to identify and transform cell & tissue states.