While some view large language models (LLMs) as precursors to artificial general intelligence (AGI), others question the practical applications of these AI models in real-life situations. And since all eyes are currently on GPT-4 and its use cases, an interesting new development has given the whole debate a fresh dimension.
A Twitter user, who goes by the name Cooper, recently claimed that GPT-4 helped save his dog’s life. The Twitter thread, which soon went viral, documents how the user ran a diagnosis on his dog using GPT-4 and how the LLM helped narrow down the underlying issue troubling his Border Collie, named Sassy.
Though achieving AGI may still be years away, instances such as Sassy’s recovery demonstrate the potential practical applications of GPT-4.
Sassy’s recovery with the aid of GPT-4 is not, however, the only instance of AI being used in pet healthcare. There already exists a chatbot called PetGPT, which allows pet owners to diagnose their pets’ health issues by generating a list of probable reasons for an animal’s illness based on its species and symptoms.

Meanwhile, in the Twitter thread, Cooper also mentioned that “GPT-3.5 couldn’t place a proper diagnosis, but GPT-4 was smart enough to do it.”

GPT-4 to the rescue
Although Sassy was anaemic, she was responding positively to the treatment for a tick-borne illness, which was her initial diagnosis. However, things took a turn for the worse. Worried, Cooper rushed her to the vet again, who, after a few more tests, ruled out any co-infections associated with tick-borne diseases. Cooper, though, was not convinced.
Given that the vet was unable to determine the cause of Sassy’s deteriorating condition, Cooper decided to deploy GPT-4. He described Sassy’s condition in great detail to the AI model.

GPT-4 acknowledged that there could be an underlying issue contributing to Sassy’s anaemia. While GPT-4 dished out several possible causes, Cooper was able to narrow down the right one, as Sassy had already undergone a few tests.
“The most impressive part was how well it read and interpreted the blood test results. I simply transcribed the CBC test values from a piece of paper, and it gave a step-by-step explanation and interpretation along with the reference ranges,” he said.
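For readers curious what such a prompt might look like in practice, here is a minimal sketch using OpenAI’s Python library. The CBC values and prompt wording below are hypothetical placeholders for illustration, not Sassy’s actual results or Cooper’s exact prompt.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical CBC values transcribed from a lab report;
# illustrative placeholders only.
cbc_values = {
    "RBC (M/uL)": 4.2,
    "Hematocrit (%)": 28.0,
    "Hemoglobin (g/dL)": 9.5,
    "Reticulocytes (K/uL)": 180,
    "Platelets (K/uL)": 95,
}

# Build a plain-text prompt, one line per test value.
prompt = (
    "My Border Collie was treated for a tick-borne illness, but her "
    "anaemia is worsening. Here are her latest CBC results:\n"
    + "\n".join(f"- {name}: {value}" for name, value in cbc_values.items())
    + "\nPlease interpret each value against typical canine reference "
    "ranges and list possible underlying causes."
)

# Ask GPT-4 for a step-by-step interpretation of the results.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```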
With GPT-4’s assessment, Cooper knocked on the vet’s door again. When he asked if immune-mediated hemolytic anaemia (IMHA) could possibly be a reason for Sassy’s deteriorating health, the vet agreed. Soon, Sassy was put through a new set of tests and GPT-4’s diagnosis was confirmed! Now, with Sassy fully recovered, Cooper could not help but share his experience on the micro-blogging platform.
Real-life use case for GPT-4
GPT-4 saving Sassy’s life does make for a compelling use case of the technology. AI is already being used by doctors across the globe as a tool to aid patient diagnosis. Most recently, doctors at MIT used AI to detect breast cancer in a woman four years in advance.
Also, Nuance Communications, which was acquired by Microsoft last year, announced a new clinical documentation tool that uses GPT-4. “Our state-of-the-art blend of conversational, ambient, and generative AI will accelerate the advancement of the care delivery ecosystem,” Mark Benjamin, CEO of Nuance, said in a statement.
In the realm of medical AI, the current direction is for AI to assist doctors in improving diagnosis rather than replace them entirely. This is where GPT-4 could prove significant: it can serve as a powerful assistive tool for doctors.
“ChatGPT can be used to assist doctors with admin tasks such as writing patient letters, so doctors can spend more time on patient interactions. More importantly, chatbots have the potential to increase the effectiveness and accuracy of the processes for preventive care, symptom identification, and post-recovery care,” Tina Deng, principal medical devices analyst at GlobalData, said in a statement.
Although this development has momentarily silenced critics looking for a significant use case for LLM-based chatbots, such an ‘achievement’ is far from foolproof and can’t be blindly relied upon.
GPT-4 hallucinates too
Critics of the technology, as well as its creators, have acknowledged that these chatbots can be prone to hallucinations. Sam Altman, co-founder and CEO of OpenAI, said, “ChatGPT (built on the GPT-3.5 architecture) is incredibly limited, but good enough at some things to create a misleading impression of greatness.”
Yann LeCun, chief AI scientist at Meta and one of the most prominent critics of the technology, said, “Large language models have no idea of the underlying reality that language describes. While the chatbot does a great job predicting the next text in the sequence, it does not really understand the context.”
Along the same lines, Arvind Narayanan, an associate professor of computer science at Princeton, said that there is also a flip side to this development. “How many people put their symptoms into ChatGPT and got wrong answers, which they trusted over the doctor’s word? There won’t be viral threads about those,” he said. How true!