How prompt engineering is reducing genAI hallucinations

Generative AI's inaccurate responses can have dire consequences in healthcare; Chain-of-Feedback prompting techniques can mitigate risks.
By admin
Jul 1, 2024, 5:15 PM

Stories about generative AI prompts going off the rails have become the latest rage: inaccurate responses, bad advice, and, in the most extreme case, a chatbot pressing its user to have an affair with it.

Like tales of travel faux pas and video bloopers, these stories are entertaining. In healthcare, however, the consequences can be matters of life and death.

Cybersecurity experts have long known that it takes a combination of human behavior and technology to battle phishing and ransomware breaches. The same appears to be true for battling dangerous hallucinated responses from errant generative algorithms and the human prompts that trigger them.

How is this done?

The beauty of the approach is its simplicity, and the solution is almost too obvious. It lies in what are called Chain-of-Feedback (CoF) prompting techniques.

In the most elementary terms, CoF requires the prompter to give the generative platform human guidance on how well or how poorly the results matched expectations. A whole new field, prompt engineering, now specializes in this dialogue between humans and algorithms.

Done this way, the original prompt is not sacrificed; instead, each round of feedback acts as a “funnel,” narrowing the response toward the specific aspects of the first prompt that matter most.

The idea is to give explicit, intermittent guidance that steers the generative AI toward the best possible answer.
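
To make this concrete, here is a minimal sketch of what a CoF loop can look like in code. It assumes a hypothetical call_model() helper standing in for whatever chat-style API you use; the names and structure are illustrative, not any specific product’s API.

    # Minimal Chain-of-Feedback loop (illustrative sketch).
    # call_model() is a hypothetical stand-in for any chat-style LLM API
    # that takes the running message history and returns the model's reply.
    def call_model(messages):
        raise NotImplementedError("Wire this up to your chat API of choice.")

    def chain_of_feedback(task, feedback_rounds):
        # Keep the original prompt in the history so it is never sacrificed.
        messages = [{"role": "user", "content": task}]
        reply = call_model(messages)
        for feedback in feedback_rounds:
            messages.append({"role": "assistant", "content": reply})
            # Each feedback turn tells the model what was right and what
            # was missing, funneling the answer toward the original intent.
            messages.append({"role": "user", "content": feedback})
            reply = call_model(messages)
        return reply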

The natural tendency is to add prompts that say something like “This is not what I was asking. Try again!” Unfortunately, the algorithms don’t take kindly to advice in this tone, and repeatedly asking for a re-do can push the output even further off course than the original.

As anthropomorphic as it sounds, the prompter must interact suggestively, as if asking a child to correct a behavior. Simply saying “Don’t do that again!” does not tell the child what specific action to take instead.

So, in the case of genAI, mentor it on what it did right AND, specifically, what was missing. Prompt engineers are quick to point out that, presented correctly, the algorithm is surprisingly receptive to advice that refines the output. “Let’s try that again,” by contrast, provides no context for what to do.
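
For instance, a hypothetical pair of feedback prompts for the loop sketched above might look like this; the wording is purely illustrative.

    # Vague feedback gives the model nothing concrete to act on.
    bad_feedback = "This is not what I was asking. Try again!"

    # Specific feedback names what worked and exactly what was missing.
    good_feedback = (
        "The summary of drug interactions is accurate; keep that section. "
        "However, it is missing dosing guidance for patients with reduced "
        "kidney function. Add that, and note which guideline you relied on."
    )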

Let’s stop for a minute.

Some will ask: if this technology is so smart, why doesn’t it produce the right output from the beginning? Others may find it creepy to advise an inanimate platform on how to think more like themselves.

What’s interesting is that genAI is very receptive to examples of the desired output. Many prompt engineers will paste sample output from one genAI platform into another, such as ChatGPT, to show the kind of output they want and what was missing.
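
As a hypothetical illustration, embedding a sample of the desired output directly in the prompt might look like this; the content and format are invented for the example.

    # Illustrative only: a sample of the desired output is pasted into
    # the prompt so the model can imitate its structure and tone.
    sample_output = (
        "Condition: Type 2 diabetes\n"
        "Key risks: hypoglycemia with insulin; monitor HbA1c quarterly\n"
        "Plain-language summary: ...\n"
    )

    prompt = (
        "Summarize the attached discharge notes. Match the format and "
        "level of detail of this example:\n\n" + sample_output
    )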

What other tips can help optimize a prompt’s results?

  • Be specific, but not too specific. Very detailed prompts can actually constrain the algorithm’s creativity and reduce the likelihood of getting the detail requested.
  • Provide details on the output format you expect, including tone, style, bullet points, headline types, and so on. Essentially, provide anything you would give a freelance researcher about look and feel.
  • There is a strange quirk in genAI around the word “don’t” in a prompt: the negation creates an algorithmic ambiguity that leads to misunderstanding in the output. It’s better to phrase prompts around “do” than “don’t,” as in the sketch after this list.
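
On that last point, here is a hypothetical rephrasing that turns a “don’t” into a “do”; both prompts are illustrative.

    # Negation can be misread; the same constraint works better as a "do."
    negative_prompt = "Don't use medical jargon in the patient summary."
    positive_prompt = (
        "Write the patient summary in plain, eighth-grade language, "
        "spelling out any clinical terms the first time they appear."
    )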

As mentioned earlier regarding cybersecurity, relying solely on the technology and algorithms will not produce satisfying outcomes. Orchestrating optimized human interaction with the increasingly intelligent code through CoF, however, will provide what borders on a more sentient AI experience.
