Artificial intelligence (AI) has always seemed like a thing of the future, but it has already become a part of our daily lives. With its impressive speed and depth of knowledge, it’s no wonder that millions of people worldwide are utilizing ChatGPT and many other AI tools to answer their questions.
But can ChatGPT be dangerous?
One study found that an appalling 52% of ChatGPT answers were incorrect and that 77% were overly wordy and confusing. The study also found that, unfortunately, users are drawn to AI answers because of how eloquently they are written.
So, yes, ChatGPT can be dangerous, especially to pet owners.
When you combine the unreliability of AI answers with how persuasive their polished, confident-sounding writing can be, you get a recipe for disaster.
Keep reading to learn about pet owner safety regarding AI tools and how to protect your pet.
Large Language Models
Large Language Models (LLMs) such as ChatGPT, along with AI assistants like Siri and Alexa, have become popular sources of quick answers and recommendations.
However, as convenient as these AI tools may be, people should exercise caution and not unquestioningly trust the information provided.
These models are sophisticated algorithms trained on vast amounts of data to generate human-like text responses.
While they can be incredibly helpful in providing general information, they have limitations and biases that pet owners should be aware of.
The pitfalls of AI tools for pet owners
Unfortunately, AI tools still present many issues for pet owners and non-pet owners alike. The information they provide is not necessarily reliable for various reasons.
One pitfall of AI tools is simply a lack of expertise. These tools are trained using massive amounts of data, but that data may not be from credible sources or vetted by professionals.
This means the information may be incorrect or misleading, which is especially dangerous when it comes to healthcare.
Just like you shouldn’t trust a random article to make medical decisions for your pet, you shouldn’t trust AI to do the same.
All information on the Internet—where LLMs get their data—should be taken with a grain of salt, especially regarding your pet’s health.
AI tools are good at identifying patterns and generating text similar to what they have been trained on.
However, this can lead to trendy advice that may not be based on scientific evidence and may not apply to all pets.
AI tools also struggle with the nuances of language, including humor and sarcasm. They may misinterpret a joke as fact, or misread your query entirely and return inaccurate or irrelevant information.
When can ChatGPT be helpful?
Some types of questions yield more reliable answers than others when using AI.
While it’s still imperative that you always check other sources, asking very straightforward questions like “Can dogs eat grapes?” should result in correct answers.
AI tools work best with very objective facts, and in this example, you would be hard-pressed to find a source that says “yes” to that question.
Even better would be using ChatGPT for non-medical questions related to your pet, such as requesting toy recommendations.
Unfortunately, most questions are very subjective, and the answer will depend on your pet.
Factors like a pet’s age, weight, breed, and behavior can all play a role in determining the best medical advice for your furry friend. All medical questions should be brought to a veterinarian.
ChatGPT mistakes have consequences
While not related to pets, an embarrassing and costly situation happened in court thanks to ChatGPT.
Two personal injury lawyers arguing on behalf of their client presented past cases and citations to support their arguments, only to be told by the judge that those sources were completely fabricated.
It turns out that these lawyers used ChatGPT to do research for their case and didn’t bother to verify the information.
When the judge could not find records of these case examples, the lawsuit was tossed out, and the lawyers were issued a $5,000 fine.
Other personal injury lawyers have undoubtedly taken note and will steer clear of unverified AI research.
For this article, we asked ChatGPT for an example of a pet owner who received incorrect information from ChatGPT (we know — very meta).
Here is its response:
“A notable example of the potential risks associated with relying solely on LLM advice occurred when a pet owner sought guidance on managing their dog’s dietary needs. The LLM suggested a homemade diet consisting primarily of raw meat and bones. Unfortunately, the pet owner followed this advice without consulting a veterinarian, leading to severe nutritional deficiencies and health complications for the dog.”
The issue with this response? There’s no way to verify if it’s true.
While it’s a nice anecdote demonstrating the dangers of AI tools for pet owners, no source is cited, and the story is likely fabricated.
There are plenty of other ChatGPT fails you can read about, showing that AI tools can stumble on even simple things like counting and math.
So, while they can be used for fun, they should not be used for medical (or legal) advice.
Stay vigilant
It can be tempting to ask ChatGPT your pet-related questions instead of going down a research rabbit hole or calling your vet, but your pet’s health is at stake. LLMs can regurgitate vast amounts of data but are no replacement for a health professional.
Some new AI tools are emerging just for pet owners, such as Pearl, where you can have your answers verified by a vet.
Remember—your vet’s office is the best place to ask pet-related questions. It’s better to be safe than sorry.
Sharon Feldman is a safety and legal writer based in San Diego. She can be found at the beach with her dog Noodles when not writing.