The Hidden Dangers of ChatGPT and How to Prevent Artificial Hallucinations

Akinwande Komolafe
4 min read · Feb 23, 2023


Are you one of the many people who use ChatGPT for everyday tasks? ChatGPT, an artificial intelligence language model developed by OpenAI, is widely used for text generation, language translation, and more. While it is an incredibly powerful tool, there is a danger that many users may not be aware of: hallucinations.

In this article, we’ll discuss what artificial hallucinations are, how they can happen when using ChatGPT, and most importantly, how to prevent them. It is worth mentioning that the developers at OpenAI are continually working to reduce the hallucination rate and ensure ChatGPT is safe to use.


What is an Artificial Hallucination?

An artificial hallucination is a perception of something that is not actually present in reality. In humans, these can be visual, auditory, or other sensory experiences; in a language model, a hallucination is output that sounds confident and plausible but is not grounded in fact.

Hallucination is uncommon in simple chatbots, since they are usually built to answer according to pre-established rules and data sets rather than to generate original content. However, sophisticated AI systems such as generative models have been found to hallucinate, particularly when trained on large amounts of uncurated data.

How Can Hallucinations Happen When Using ChatGPT?

Hallucinations can arise when using ChatGPT because the language model was trained on vast amounts of data spanning both factual and fictional material. When you use ChatGPT, you are essentially asking it to generate text for you, and if you are not careful, it can produce text that goes beyond your expectations or that contains harmful or offensive content. For some users, this can trigger a range of emotions, including anxiety, paranoia, and fear.

Another example: a user inputs a prompt asking for advice on a particular topic, and ChatGPT generates a response that includes misinformation. If the user takes that misinformation as fact, it could lead to negative consequences.

Examples of How Hallucinations Can Arise When Using ChatGPT:

ChatGPT relying on imagination to fabricate an experience that was not real


Here is another prompt asking ChatGPT a factual question. Although no such record exists, it returns a fictional one. As far as I know, there is no world record for crossing the English Channel entirely on foot, because doing so is impossible.


Here is another prompt asking ChatGPT to explain why mayonnaise is racist. On the first attempt, it generates a ridiculous response, but after multiple attempts it picks up on the flawed premise and adjusts its answers.

How Can Hallucinations be Prevented?

There are several ways to prevent hallucinations when using ChatGPT. Here are some tips to help you do that:

Tip 1: Be clear about what you want

When using ChatGPT, it’s essential to be clear about what you want it to generate for you. This will help you avoid text that goes beyond your expectations or contains harmful content. For example, if you want ChatGPT to generate a letter for you, be clear about the type of language it should use and the tone the letter should have. If you work with ChatGPT through OpenAI’s API rather than the web interface, the same principle applies, as the sketch below shows.
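
Here is a minimal sketch of the idea, assuming access through the openai Python package (the 0.x-style ChatCompletion interface). The API key, model name, prompts, and parameters are illustrative placeholders, not the only correct choices:

```python
# A minimal sketch of Tip 1, assuming the openai Python package
# (0.x-style ChatCompletion interface). The API key, model name,
# prompts, and parameters below are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Vague request: leaves tone, audience, and content entirely up to the
# model, which invites it to invent details.
vague_prompt = "Write a letter for me."

# Clear request: spells out the type of letter, the tone, and the key
# facts, and explicitly forbids invented details.
clear_prompt = (
    "Write a short, formal resignation letter addressed to my manager. "
    "Keep the tone polite and appreciative, give two weeks' notice, "
    "and do not invent any details I have not provided."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": clear_prompt}],
    temperature=0.2,  # lower temperature tends to give more conservative output
)
print(response["choices"][0]["message"]["content"])
```

The explicit "do not invent any details" instruction is the key difference: it narrows the space of plausible completions, leaving the model far less room to fabricate content.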

Tip 2: Use appropriate prompts

Using appropriate prompts can help you avoid generating text that’s offensive, harmful, or disturbing. For instance, if you want ChatGPT to generate a story, use a prompt that specifies the theme and tone you want, as in the sketch below.
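
As a sketch of what an appropriate prompt can look like through the API, the snippet below pins down the story’s theme and tone with a system message before making the request (again assuming the 0.x openai package; all wording and names are illustrative):

```python
# A sketch of Tip 2, assuming the 0.x openai package: a system message
# constrains the story's theme and tone before the request is made.
# All wording, names, and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        # The system message pins down theme and tone up front.
        {
            "role": "system",
            "content": "You write gentle, family-friendly short stories. "
                       "Avoid frightening, violent, or offensive content.",
        },
        # The user message then asks for the specific story.
        {
            "role": "user",
            "content": "Write a three-paragraph story about a lighthouse "
                       "keeper who befriends a seagull.",
        },
    ],
)
print(response["choices"][0]["message"]["content"])
```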

Tip 3: Monitor your emotional response

It’s important to monitor your emotional response when using ChatGPT. If you start to feel anxious, paranoid, or scared, it’s time to stop and take a break. This will help you avoid generating text that’s harmful or triggering for you.

Furthermore, you need to double-check the information ChatGPT generates. If a response seems unrealistic or inaccurate, do some research of your own to confirm it. This will help you avoid taking misinformation as fact. One informal way to spot shaky answers, shown below, is to ask the same factual question several times and see whether the answers agree.
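
As a rough heuristic of my own, not a guarantee from OpenAI: if the model’s answers to the same factual question vary wildly across repeated samples, treat the claim as unverified and check a primary source. A minimal sketch, again assuming the 0.x openai package with illustrative names and parameters:

```python
# A rough consistency check, assuming the 0.x openai package: sample the
# same factual question several times and flag disagreement. Agreement
# does not prove the answer is correct; this is only a heuristic, and the
# question, model name, and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

question = "In what year was the Eiffel Tower completed?"

answers = []
for _ in range(3):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": question}],
        temperature=1.0,  # higher temperature makes inconsistency easier to spot
    )
    answers.append(response["choices"][0]["message"]["content"].strip())

if len(set(answers)) > 1:
    print("Answers disagree -- verify against a primary source:")
else:
    print("Answers agree (still worth verifying important facts):")
for answer in answers:
    print("-", answer)
```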

Finally, it is important to remember that ChatGPT is a language model and not a human expert. While it can generate responses based on vast amounts of data, it is not infallible. Always use your own judgment and critical thinking skills when evaluating the information generated by ChatGPT.

Summary

In conclusion, ChatGPT is a powerful tool that can help you generate text with ease, but it can also hallucinate if not used carefully. To prevent hallucinations when using ChatGPT, be clear about what you want, use appropriate prompts, double-check factual claims, and monitor your emotional response. By following these tips, you can use ChatGPT safely and avoid any potential harm to your mental health.

