Why ChatGPT cannot be your science partner

Unless you live under a rock, you have probably heard about ChatGPT and how it will change the world around us. As with any new technology, the media tends to hype it: ChatGPT is passing the USMLE, ChatGPT is writing scientific articles, ChatGPT will kill Google, and so on. We also tend to believe that this technology will take over the world and end our jobs, and you will see experts and thought leaders everywhere amplifying the hype. Congratulations, it seems ChatGPT's next task is to cure cancer...
This is not the first time we have been caught up in hype about AI and its impact on society. It happened when DeepMind developed an AI system that beat the best Go player in the world. That was an amazing achievement, and it drove some of the progress we have today, but it did not cost people their jobs. The same applies here. Transformer models and large language models (LLMs) are a huge step toward strong AI performance on many tasks, but we still have a long way to go to match human intelligence or creativity.
Recently, ChatGPT passed the USMLE and other medical and professional exams. But if you are a doctor, passing the board exam is an achievement that does not make you a good doctor. It makes you someone who retains information well and knows how to take an exam.
Let us try asking ChatGPT a deep science question.
Me: “how can we target TET2 mutations in acute myeloid leukemia”
ChatGPT:
There are several strategies that can be used to target TET2 mutations in acute myeloid leukemia (AML). Some of these include:
Targeted therapies: Researchers are currently developing drugs that specifically target TET2 mutations. One example is the drug guadecitabine (SGI-110), which is designed to target AML cells with TET2 mutations and has shown promise in clinical trials.
Immunotherapy: Immune checkpoint inhibitors like anti-PD1/PDL1 antibodies can be used to stimulate the immune system to attack AML cells with TET2 mutations.
Gene editing: CRISPR-Cas9 technology can be used to edit the TET2 gene in AML cells and restore its normal function.
Chemotherapy: Although chemotherapy is not specific to TET2 mutations, it can still be a useful treatment option for AML patients with TET2 mutations. However, the risk of relapse is higher in these patients.
Allogeneic stem cell transplantation: This treatment involves replacing the patient's diseased blood-forming cells with healthy ones from a donor. It can be an effective treatment for AML, including for those with TET2 mutations, but carries a risk of serious complications and is used primarily in young and fit patients.
This text may seem impressive to someone outside the field, but it is misguided and incorrect. There is currently no targeted therapy available for TET2 mutations. SGI-110 was an experimental agent that never became a treatment for AML or TET2 mutations. Immunotherapy generally has limited effect in AML, and no PD1/PDL1 inhibitor is approved, or likely to be approved, for AML treatment. And while gene editing is promising, it has not been tested or shown to restore TET2 function in patients.
You get the point: nicely written text, but completely useless.
Interestingly, I asked ChatGPT why it is not good at answering science questions, and this was the answer:
ChatGPT, developed by OpenAI, is a cutting-edge language model that has been trained on a massive amount of text data. However, while it has achieved impressive results in various language-related tasks, it is not well-suited for scientific applications.
First and foremost, ChatGPT's training data is not specifically focused on scientific domains. Instead, it is trained on a wide range of text sources, including news articles, websites, and social media. As a result, it may not have the depth of knowledge needed to accurately and effectively address complex scientific questions.
Additionally, the language model is not equipped to handle the nuances and specialized language used in scientific fields. It may generate responses that are technically accurate, but lack the precision and specificity that is essential for scientific discourse.
Moreover, ChatGPT has not been specifically designed to be transparent or explainable, which is a critical requirement for scientific applications. In science, it is important to be able to understand the reasoning behind a model's predictions in order to assess its reliability and validity. However, the neural networks used by ChatGPT are not easily interpretable, making it difficult to evaluate the accuracy of its responses.
Finally, the training data used to develop ChatGPT is biased, as it reflects the biases and limitations of the texts it was trained on. This can lead to the generation of inaccurate or misleading responses when used in scientific contexts.
In conclusion, while ChatGPT is an impressive language model, its limitations and lack of domain-specific training make it unsuitable for scientific applications. Scientific research requires precise language, interpretable models, and unbiased data, all of which are currently lacking in ChatGPT.
Great answer. I have to say, honestly, that ChatGPT gave an excellent account of its own limitations here.
Therefore, although transformers and LLMs are new and exciting milestones in the AI world, we must not forget that we are still at the beginning, and that human knowledge and creativity, particularly in science, remain unparalleled.