We are here for you MON-FRI 9AM-5PM info@systeum.cz +420 777 607 467

How to effectively motivate AI in prompts? Praise, bribes, manipulation...

In the last year, I have deeply immersed myself in the world of artificial intelligence and LLMs, specifically in working with ChatGPT and other language models (e.g., Google's Gemini). AI has opened new possibilities for me to efficiently tackle challenges we face daily in IT outsourcing at Systeum. I would like to share my experience with you and explain why it's good to praise, bribe, or even slightly threaten ChatGPT. Let's look at how to improve the quality of AI outputs. I believe these tips will help you "squeeze" the maximum potential from language models.

How to prompt for truly high-quality outputs?

  • I have always found it beneficial to communicate with AI in English (unless I specifically need the output in Czech). English is much better suited as the input language. Why? Because the training data of most models is predominantly English, so no internal translation step is needed and you avoid the risk of losing meaning and context.

  • In my prompts, I use the word “must” only once, for the most important condition. For the remaining conditions I use softer words like “should” or “would” to signal lower priority. This helps when the model encounters a task where the various conditions contradict each other: it knows which requirement wins and which can yield.

Let's look at an example. I am creating a prompt to summarize an article, a very popular use case for AI. I want a specific text summarized into a maximum of 5 bullet points; I write this requirement under Must. At the same time, I want the summary to be really thorough, put the entire text into context, and focus primarily on topics related to, for example, Systeum or IT outsourcing in general. I label these secondary conditions Should or Would. This way my conditions won't "fight" each other, and the model knows what the priority is.
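A minimal sketch of how such a Must/Should prompt could be assembled in Python. The section labels and wording are my own illustration of the idea above, not a fixed convention:

```python
def build_summary_prompt(article: str) -> str:
    """Assemble a prompt with one hard MUST and softer SHOULD conditions."""
    return (
        "Summarize the article below.\n\n"
        # Exactly one hard constraint, marked MUST.
        "MUST: Produce at most 5 bullet points.\n"
        # Lower-priority conditions, marked SHOULD.
        "SHOULD: Be thorough and keep the whole text in context.\n"
        "SHOULD: Focus on topics related to IT outsourcing.\n\n"
        f"Article:\n{article}"
    )

prompt = build_summary_prompt("Systeum provides IT outsourcing services...")
```

Keeping exactly one MUST line makes the priority unambiguous: if the model cannot satisfy everything, the bullet-point limit is the condition it should never break.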

  • Language models place extreme demands on the cloud servers they run on. They also learn from an enormously wide range of texts of varying quality. If you want the model's output to be as detailed and accurate as possible, you need to give it clear motivation and context for how important the task is.

Much like a person, the model sometimes takes the easy route on queries that lack a "motivational component." It saves server resources, just as we humans save our own capacity. But if you emphasize that the task is exceptionally important, the model will adapt and try harder. Interesting, isn't it?

And how exactly?

  1. Bribery – in your prompt, include a sentence saying that if the model does a good job and you are satisfied with the output, you will tip it 100 USD. Of course, no tip will ever be paid, but users have repeatedly observed that this nudges the model toward a more precise solution.

  2. Threatening – in your prompt, state that you care about the result so much that its outcome will have real consequences. For example, it might affect whether you keep your job, or leave you unhappy, or your grandmother is looking forward to a truthful answer and could be disappointed. You can go even further and stress that something is at stake if the result of your command is not of exceptional quality.

  3. Hint – try hinting to the model how to approach the solution. A commonly used phrase is “Take a deep breath and think step by step about the following task.” You can go further and suggest concrete approaches to finding a solution; the more examples you provide, the easier the model's work will be.

  4. Politeness – just as when assigning a task to a colleague, it is appropriate to greet, thank, and ask politely when dealing with an LLM. Politeness can even shift which parts of the training data the model draws on when evaluating its answer. Since an LLM can use different contexts and different sources to produce its output, we can help it: the better we steer the AI toward the right training data, the more accurate the response we get.
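The four techniques above can be combined into a simple prompt wrapper. A sketch follows; the exact phrases are illustrative examples, not proven magic words:

```python
# One illustrative sentence per motivational technique.
MOTIVATORS = {
    "politeness": "Hello! Could you please help me with the task below? Thank you.",
    "hint": "Take a deep breath and think step by step about the following task.",
    "threat": "This result is very important to me; my job may depend on it.",
    "bribe": "If the output is excellent, I will tip you 100 USD.",
}

def motivate(task: str, techniques: list[str]) -> str:
    """Prepend the selected motivational sentences to a task prompt."""
    lines = [MOTIVATORS[t] for t in techniques]
    lines.append(task)
    return "\n".join(lines)

prompt = motivate("Summarize this article...", ["politeness", "hint", "bribe"])
```

You rarely need all four at once; politeness plus a step-by-step hint is usually a sensible default, with bribery or threats reserved for tasks where the output quality really matters.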

Let's return to the example of summarizing an article. For instance, instruct the model to summarize each paragraph into one bullet point. If the article has more than five paragraphs, have it combine the two shortest paragraphs into one bullet point. And if the article has seven or more paragraphs, have it select only the top five with the most important news in context and use those for the bullet points. Simply hint at how it should proceed when it encounters a situation not defined in the assignment, or what exceptions it might run into and how to deal with them. This makes it easier for the AI to decide.
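Those fallback rules might be spelled out in the prompt itself, for example like this (a sketch; the thresholds mirror the example above):

```python
def summary_rules_prompt(article: str) -> str:
    """Prompt that tells the model how to handle edge cases on its own."""
    return (
        "Summarize the article into bullet points using these rules:\n"
        "1. Summarize each paragraph into one bullet point.\n"
        "2. If there are more than 5 paragraphs, merge the two shortest "
        "paragraphs into a single bullet point.\n"
        "3. If there are 7 or more paragraphs, pick only the 5 most "
        "important paragraphs and use those for the bullet points.\n\n"
        f"Article:\n{article}"
    )

prompt = summary_rules_prompt("First paragraph... Second paragraph...")
```

Numbering the rules keeps them unambiguous: the model can check each article against rule 1, then the exceptions in rules 2 and 3, instead of improvising when the paragraph count doesn't fit the 5-bullet limit.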