
July 10, 2023

Anja-Vanessa Peter

5 min read

Gerry Liaropoulos (Director of Data Science and Bioinformatics at Intelligencia.ai) and Jan Przybylowicz (Investment Analyst at MTIP) speak to Anja-Vanessa Peter (Communication Manager at MTIP) about the inner workings of ChatGPT and implications for the world of medicine and data science.

The healthcare ecosystem faces a variety of challenges as it strives to deliver quality care. Patient surges, a growing shortage of qualified personnel, advances in medical science and information, rising quality requirements, and escalating healthcare costs all call for better methods of managing modern health systems. New technologies and innovations from healthtech startups can help address these systemic challenges. Of all emerging technologies, artificial intelligence is perhaps the most promising and attracts the most attention. According to CB Insights, global AI funding reached $45.8 billion in 2022, with 34 startups raising billion-dollar rounds and bringing the total number of AI unicorns to 166.

In recent months, there has been unprecedented buzz around the newest iterations of ChatGPT, an extremely versatile chatbot publicly released by OpenAI in November 2022. In the conversation below, Gerry and Jan unpack how the model works and what it means for medicine and data science.

Anja: Gerry, can you explain how ChatGPT works?

Gerry: GPT stands for “Generative Pre-Trained Transformer”, and it is applied to language-understanding tasks, what we call NLP, or Natural Language Processing. ChatGPT is not the first GPT model; it is based on iterations of GPT models fine-tuned by OpenAI, and other institutions and companies have developed GPT models as well. GPT models in general are transformer-based neural networks. The idea is to loosely simulate how our brain works, which is why we call them neural networks. They have deep layers with many parameters that capture all the specifics of a task. In our case the task is understanding language, and the transformer architecture is what GPT models build on. The input passes through an encoder and a decoder. At a high level, the encoder takes some input and turns it into a lower-level representation; it takes a complicated task and makes it easier for the model itself to understand. The decoder does the opposite: it takes that representation, together with some other mechanisms, and produces a more refined output. So you have the encoder part and the decoder part working together to generate the output.

What is more interesting about ChatGPT is how it differs from other GPT models. Particularly noteworthy is the model's scale, an impressive 175 billion parameters, trained on a large corpus of text from sources such as Wikipedia, scientific journals and articles. After this long pre-training process, ChatGPT was further trained on questions and answers written by AI trainers. The trainers provided questions and answers to the model in a way that aligns with the responses they wanted to see for future questions. That is the first addition to a general GPT model, and we call it supervised fine-tuning: supervised because both the question and the answer are given directly by humans.
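To make the encoder/decoder idea a little more tangible, here is a minimal, illustrative PyTorch sketch. The vocabulary size, dimensions and random toy inputs are arbitrary assumptions for demonstration only and do not reflect ChatGPT's actual architecture or scale.

```python
# Minimal sketch of the encoder/decoder transformer idea using PyTorch.
# All sizes and inputs below are toy values chosen for illustration.
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64                       # tiny toy configuration
embed = nn.Embedding(vocab_size, d_model)            # token ids -> vectors
transformer = nn.Transformer(d_model=d_model, nhead=4,
                             num_encoder_layers=2, num_decoder_layers=2,
                             batch_first=True)
to_vocab = nn.Linear(d_model, vocab_size)            # vectors -> token scores

src_tokens = torch.randint(0, vocab_size, (1, 12))   # the "input prompt"
tgt_tokens = torch.randint(0, vocab_size, (1, 8))    # tokens generated so far

# The encoder compresses the input; the decoder turns that representation
# into a refined output, one position at a time.
hidden = transformer(embed(src_tokens), embed(tgt_tokens))
next_token_logits = to_vocab(hidden[:, -1, :])       # scores over the vocabulary
print(next_token_logits.shape)                       # torch.Size([1, 1000])
```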

Another addition is what OpenAI calls reinforcement learning from human feedback. Once the previous step is done, the AI trainers prompt the model to produce several answers to a question and rank them, so that the most correct or relevant answer is prioritized. As a result, the model learns where to put more weight for future questions. That is the high-level approach behind the ChatGPT model. When a user goes to the OpenAI website and interacts with ChatGPT by writing a question, that prompt text goes through a pre-processing pipeline that transforms it into tokens, which you can think of as the fundamental units of text used in NLP models. These tokens are then passed to another process called embedding, where they are transformed into vectors, which are basically numbers. So we go from words to numbers. These vectors then go through the transformer encoder and decoder. The output is again numbers, which are transformed back into tokens, and then we have the generated text as the reply to the user. That is how the ChatGPT model works.
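As a concrete illustration of this prompt-to-tokens-to-text flow, here is a short sketch using the openly available GPT-2 model from the Hugging Face transformers library as a small stand-in, since ChatGPT's own weights are not public. The prompt is an invented example.

```python
# Sketch of the prompt -> tokens -> vectors -> generated text flow,
# using GPT-2 as a small, openly available stand-in for ChatGPT.
from transformers import GPT2TokenizerFast, GPT2LMHeadModel

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "What does a data scientist do?"
inputs = tokenizer(prompt, return_tensors="pt")      # text -> token ids
print(inputs["input_ids"])                           # the "numbers" the model sees

# Inside the model, each token id is mapped to an embedding vector, passed
# through the transformer layers, and turned back into scores over tokens.
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0]))               # token ids -> text reply
```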

Gerry Liaropoulos, Director of Data Science and Bioinformatics at Intelligencia.ai

Anja: Focusing on MTIP’s particular interest — Jan, how can ChatGPT and similar models be utilized in healthcare? What are the opportunities and challenges?

Jan: As Gerry mentioned, ChatGPT is essentially an LLM. LLMs are called “foundation models” because they are driving a paradigm shift in AI. Why is that? Foundation models are sizable AI models trained on vast amounts of unlabeled data. This approach produces versatile models that excel at tasks like image classification, natural language processing, and question answering with impressive accuracy. Most current AI models, by contrast, are very much task-specific.

However, the latest advances in foundation-model development can disrupt these narrow legacy AI solutions, leading to what Michael Moor et al. call Generalist Medical AI (GMAI) models in their recently published Nature article. GMAI models differ from conventional medical AI models in three key ways. First, adapting a GMAI model to a new task is simple: dynamic task specification allows the model to perform new tasks without retraining. Second, GMAI models can accept inputs and produce outputs across varying combinations of data modalities. Third, GMAI models will formally represent medical knowledge, enabling them to reason through new tasks and use medically accurate language to explain their outputs.

Corporations like Google and Microsoft are already tapping into LLMs and generative AI for healthcare applications. Google’s Med-PaLM 2, a medical LLM, is being tested by selected Google Cloud customers, exploring uses like answering medical questions and analyzing unstructured texts. Microsoft’s Azure Health Bot template, integrating Azure OpenAI Service, allows healthcare organizations to answer questions with validated sources. A survey conducted by Huma.AI found that 86% of medical affairs leaders see generative AI as beneficial to life sciences companies, enhancing engagement and ensuring safe product use.

Focusing specifically on the more commercially available ChatGPT, I can see how it can be linked to other systems via APIs to dramatically improve their capabilities. For example, ChatGPT can be used in telemedicine to streamline consultations, transcribe records, and suggest diagnoses and treatments. It can also bolster patient engagement through personalized messages, reminders, and follow-ups, improving satisfaction and retention. ChatGPT can further provide mental health support, serve as a conversational partner, and answer general health questions. Additionally, its translation capabilities can bridge communication gaps, fostering a more inclusive healthcare system.
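As one illustration of the kind of API integration Jan mentions, here is a minimal sketch that calls OpenAI's chat completions endpoint to draft a patient appointment reminder. The reminder use case, the function name and the prompt are hypothetical examples; a real deployment would also need clinical review and data-protection safeguards before any patient data is sent.

```python
# Minimal sketch: drafting a patient-friendly appointment reminder through
# the OpenAI chat completions API. The use case and function are hypothetical.
import os
import requests

def draft_reminder(patient_first_name: str, appointment: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "You write short, friendly appointment reminders."},
                {"role": "user",
                 "content": f"Remind {patient_first_name} about: {appointment}"},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(draft_reminder("Alex", "a telemedicine follow-up on Friday at 10:00"))
```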

The opportunities presented by medical foundation models are vast and varied. I genuinely believe that AI will take medical practice to a new level and reduce global disparities by improving the accessibility and affordability of care. There is immense value in bringing specialist knowledge to scale. Hopefully, by partially automating complex clinical and administrative duties, doctors will be able to devote more time to engaging with patients on a deeper, personal level. A patient-centered approach not only lessens the symptoms patients experience, but also improves diagnostic effectiveness, compliance with treatment plans, and recovery.

On the other hand, foundation models in medicine also face a number of challenges. Validation, verification, and explainability are difficult to achieve because of the models' versatility coupled with their black-box nature. Privacy is another challenge, as models can expose or misuse sensitive patient data. AI also tends to perpetuate human biases, which can harm already marginalized populations. Moreover, the increasing scale of foundation models raises issues around data collection and environmental impact. Finally, we can expect many challenges related to regulating how modern medicine should be practiced. Will doctors remain the ultimate decision-makers, or will primary responsibility for patient welfare shift to AI providers?

To sum up, ChatGPT and similar foundation models will be highly disruptive in many fields, including medicine. I think that start-ups focused on narrow AI solutions will retain a competitive edge for some time thanks to domain expertise, predictability, and regulatory approval, but in many cases merging with foundation models will be necessary to unlock new, exponentially more powerful capabilities.

Jan Przybylowicz, Investment Analyst at MTIP

Anja: On the other hand, how valuable is ChatGPT in data science?

Gerry: Data science is a broader field that also encompasses NLP, Natural Language Processing, and LLMs, Large Language Models like ChatGPT. As previously mentioned, ChatGPT deals mostly with text, but it is also very useful when we pose data science questions and let it come up with potential answers. We have seen that we can not only ask ChatGPT questions but also give it code snippets and ask what they do, or even have it find bugs in our code, which is truly amazing from a data science perspective. ChatGPT can understand not only the general context of a question but also where potential bugs might be hiding in the code. This helps us improve the code by finding bugs, writing documentation, and even optimizing it. We have seen instances where ChatGPT optimizes code written by software engineers and data scientists, which is wonderful. ChatGPT can also provide us with code, especially for more repetitive tasks. For common code blocks, instead of the data scientist or software engineer writing the code from scratch or scrolling through resources to find an optimal solution, there are instances where ChatGPT can provide a best-practice snippet that might need some further fine-tuning before it fits our own process. Again, it really enhances the productivity of a data scientist or software engineer.
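As a small illustration of the bug-finding workflow Gerry describes, below is the kind of snippet one might paste into ChatGPT along with a prompt such as "What does this function do, and does it contain a bug?". The function and its off-by-one bug are invented for demonstration.

```python
# Example snippet to paste into ChatGPT for review.
# The off-by-one bug is deliberate and invented for illustration.
def moving_average(values, window):
    """Return the simple moving average of `values` with the given window."""
    averages = []
    for i in range(len(values) - window):        # bug: drops the last window
        averages.append(sum(values[i:i + window]) / window)
    return averages

# A typical ChatGPT-style fix is to change the range to
# range(len(values) - window + 1) so the final window is included.
print(moving_average([1, 2, 3, 4, 5], 2))        # [1.5, 2.5, 3.5] -- missing 4.5
```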

Another way to leverage ChatGPT in our work is to ask it for common data science solutions or approaches to a problem, or even to point us to relevant data sources that we can integrate into our solution to come up with more robust answers. Thirdly, I think it can really help us find best practices that enhance the impact of the data science function on the overall business. Regarding the potential pitfalls, not all answers are 100% correct. We should always check whether, for example, the code works and whether it fits our needs, and there are obviously cases where we go back and rewrite parts of it. Still, it is definitely very helpful not to start from scratch but from something that is perhaps 80% there and move it to 100%. Also, ChatGPT cannot be very specific to the domain we work in, and even less so to the specific data set we are working with.

Anja: You almost answered my last question: Does AI have the potential to completely replace the work of a data scientist?

Gerry: I think data scientists who use AI will replace data scientists who do not, and that could be the case in many other fields, too. As a practitioner in any field, you have to keep up with recent developments, and it is so much more useful to adopt the tools that increase your productivity. In a competitive market, the more productive people tend to stay longer and be more successful. From my personal point of view, I do not see AI as a threat but as an opportunity to do more across borders. In data science, I think AI can really leverage our knowledge by bringing in best practices from the industry, but so far at least there has not been a case where we can upload our own data set and ask ChatGPT to perform the various tasks. Also, a good data scientist has to really understand the data, not only from a numbers perspective, but also in terms of what is driving the data-generation process. ChatGPT and all Large Language Models, in fact all AI models, have no sense of the logic behind how the data are generated or created. They are association-based models, which is fine, but there is no common sense or critical thinking built into them, and so far we have not been able to implement that in any AI model. What we are trying to do is essentially teach AI models to react the way we do and learn from us. I think the key in any discipline is staying ahead of the curve, using whatever tools are available to be better and more productive, and maintaining awareness of the latest developments, but I do not think AI will ever completely replace the data scientist's personal function.

Anja: Is there anything else you would like to say regarding AI and Intelligencia?

Gerry: At Intelligencia, we have been pioneering the use of AI in drug development since 2017. AI is at the core of our category-defining platform for assessing risk in drug development and informing portfolio strategy and management decisions. We firmly believe that making data-driven decisions and using data-based tools can significantly accelerate the industry's progress and ultimately bring better therapies to patients faster. The recent advancements and accomplishments of AI models like ChatGPT have sparked the interest of the industry as well as of our team. These developments not only raise awareness of the boundless possibilities of AI but also enhance our startup's productivity across various areas. To explore the potential of ChatGPT further, we have assembled a dedicated team that is passionately delving into its capabilities and how it can contribute to our overarching goals. The dynamic nature of the start-up ecosystem requires staying ahead of the curve to remain relevant, and this pursuit continues to captivate and inspire our team.

Anja: Thank you Gerry and Jan for providing insightful answers to my questions! It is evident that AI holds immense potential, but it also presents significant challenges that cannot be ignored. I am looking forward to the advancements and discoveries that the future holds in this dynamic field.

