Modern-day oncology therapies have seen significant innovation in the last decade. It is high time we committed to biomarkers driven by rational design and the latest computational methods.

In an earlier age of medicine, new therapies were often discovered by 'accident'. There was little technical knowledge of structure or function to guide the process of developing curative treatments. Trial and error dictated progress, resulting in slow and unpredictable successes. As our knowledge of small molecules, proteins and their structural relationships grew, we entered the era of rational drug design. Rational drug design has made a significant impact in the field of oncology, where we have gathered a deep knowledge of ligand binding and biochemical pathways. Modern-day drug strategies utilise frameworks of rational drug design, driven by computational experimentation, to accelerate the identification of potential therapies.

In the early 2000s, for example, there was an unexpected race for a small-molecule inhibitor of the transforming growth factor-β (TGF-β) type I receptor kinase. Two groups, one led by Scott Sawyer at Eli Lilly and the other by Juswinder Singh at Biogen Idec, discovered an identical molecule via separate efforts.1,2 The Lilly team used conventional high-throughput screening (HTS) enzyme and cell assays, which were costly and time consuming. Independently, Singh's team streamlined the discovery by employing computational methods to perform a 'virtual screening'. This approach was faster and relatively less costly, giving Biogen Idec an edge over Lilly. It was an early demonstration that computationally guided design had the potential to prioritise, or even replace, expensive chemical and biological assays, reducing both cost and time to market. Since then, databases of results from both low- and high-throughput studies have continued to grow rapidly, further enhancing our ability to rationally develop not only monotherapies but also bispecific and combination therapies.

The case for predictive biomarkers

The evolution of biomarker design is not so different from the evolution of drug design. Even with the most efficacious therapies, not all patients respond. Furthermore, when the process of matching patients with certain therapies goes wrong, adverse events can be costly and even deadly. For some time, the industry has worked to find biomarkers that provide predictive insight for matching patients to the right treatments. Historically, this meant identifying specific patient populations that should receive, or not receive, a therapy.

Early on, macroscale pathological characteristics were used to make treatment decisions for patients, including those with cancer. Tumour grade, size and location were documented, and statistics drawn from the clinical outcomes of many patients were used to generalise who should receive a therapy and who should not. Histology, once available, provided additional insight, taking us one step closer to a molecular-level understanding of why certain patients respond and others do not. However, the world of medicine changed drastically with the completion of the Human Genome Project and the advent of genomic medicine.

The era of genomic medicine

The outcome of the Human Genome Project was not merely a static reference sequence, as is often cited. Rather, the advancements made during that milestone effort, and shortly after its completion, resulted in the birth of genomic medicine. Genomic medicine represents a major breakthrough and a significant driver towards what we know as precision medicine, often defined as the right patient receiving the right treatment at the right time. Since the completion of the Human Genome Project, the technology known as high-throughput sequencing, or next-generation sequencing (NGS), has generated trillions of genomic sequences from cancer patients' tumour tissue.[…]

