
How to effectively motivate AI in prompts? Praise, bribes, manipulation...

Over the last year, I have immersed myself deeply in the world of artificial intelligence and LLMs, specifically in working with ChatGPT and other language models such as Google's Gemini. AI has opened up new ways for me to efficiently tackle the challenges we face daily in IT outsourcing at Systeum. I would like to share my experience with you and explain why it pays to praise, bribe, or even mildly threaten ChatGPT. Let's look at how to improve the quality of AI outputs. I believe these tips will help you "squeeze" the maximum potential out of language models.

How to prompt for truly high-quality outputs?

  • I have always found it beneficial to communicate with AI in English (unless I specifically need the output in Czech). Why? Because the data each model is trained on is mostly in English, so no translation step is needed and there is less risk of losing meaning and context.

  • In my prompts, I use the word “must” only once, for the most important condition. For the remaining conditions, I use softer words like “should” or “would” to express priority. This helps when the conditions in a task contradict each other: the model then knows which one takes precedence and which does not.

Let's look at an example. I am creating a prompt to summarize an article, a very popular use case for AI. I want a specific text summarized into a maximum of 5 bullet points, so I write this requirement under Must. At the same time, I want the summary to be really thorough, to put the entire text into context, and to focus primarily on things related to, for example, Systeum or IT outsourcing in general. I label these other conditions Should or Would. This way, my conditions won't "fight" each other, and the model will know what the priority is (a minimal sketch of such a prompt follows below).
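
To make this concrete, here is a minimal sketch in Python that only assembles the prompt text. The section labels and the exact wording are my own illustration, not a fixed recipe:

```python
# A minimal sketch of a summarization prompt that separates one hard
# requirement (Must) from softer preferences (Should / Would).
# The wording and section labels are illustrative assumptions.

article_text = "..."  # paste the article to be summarized here

prompt = f"""Summarize the article below.

Must:
- Summarize the article into a maximum of 5 bullet points.

Should:
- Be thorough and keep every bullet point in the context of the whole article.
- Focus primarily on points related to Systeum or IT outsourcing in general.

Would:
- Prefer concrete facts over general statements.

Article to summarize:
{article_text}
End of article to summarize.
"""

print(prompt)
```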

  • Language models place extreme demands on the cloud servers they run on. They also learn from a tremendously wide range of texts of varying quality. If you want the model's output to be as detailed and accurate as possible, you need to give it clear motivation and context for how important the task is.

Much like a person, a model given a query without a “motivational component” sometimes approaches it too simply. It saves server resources, just as we humans save our own capacity. But if you emphasize that the task is exceptionally important, the model adapts and tries harder. Interesting, isn't it?

And how exactly?

  1. Bribery – include a sentence in your prompt saying that if the model does a good job and you are satisfied with the output, you will give it a tip of 100 USD. Of course, nothing of the sort will actually happen, but this trick has repeatedly been shown to increase the model's motivation to produce a more precise solution.

  2. Threatening – state in your prompt that you care about the result so much that it will have future consequences. For example, it might affect whether you keep your job, or leave you unhappy, or disappoint your grandmother, who is looking forward to a truthful answer. You can go further and emphasize that something is at stake if the result of your command is not of exceptional quality.

  3. Hint – try hinting to the model how it should approach the solution. The phrase usually used is “Take a deep breath and think step by step on the following task.” But you can go further and suggest approaches for finding the solution: the more examples you provide, the easier its work becomes.

  4. Politeness – just as when assigning a task to a colleague, it is appropriate to greet, thank, and ask politely when dealing with an LLM. Politeness can even broaden the areas of training data from which the model draws when composing its answer. Since the model can use different contexts and different sources when evaluating its output, we can help steer it. The better we guide the AI toward the right training data, the more accurate the response we get.

Let's return to the example of summarizing an article. For instance, instruct the model to summarize each paragraph into one bullet point; if the article has more than five paragraphs, to combine the two shortest paragraphs into one bullet point; and if the article has seven or more paragraphs, to select only the five most important ones in context and use those for the bullet points. Simply hint at how it should proceed if it encounters a problem not defined in the assignment, or what exceptions it might run into and how to deal with them. This makes it easier for the AI to decide; a sketch of such a prompt follows below.
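
Written out, such exception hints might look like the following sketch. The thresholds and wording simply repeat the example above and are not a universal rule:

```python
# A sketch of the "hint" technique: the prompt spells out how to proceed
# when the input does not match the ideal case. Rule wording is illustrative.

article_text = "..."  # the article to be summarized

prompt = f"""Take a deep breath and think step by step on the following task.

Summarize the article below, one bullet point per paragraph.
- If the article has more than 5 paragraphs, combine the two shortest
  paragraphs into a single bullet point.
- If the article has 7 or more paragraphs, select only the 5 most important
  paragraphs in context and use those for the bullet points.

Article to summarize:
{article_text}
End of article to summarize.
"""

print(prompt)
```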

When you use polite expressions in your queries, such as requests, thanks, and courteous phrasing, your text more closely resembles ordinary human communication, where words like "thank you" and "please" are common. If these phrases influence human behavior, it is reasonable to assume they will have a similar effect on the behavior of language models.
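
If you want to experiment with the motivational techniques above, a small snippet like this can prepend them to any task. The phrasing is only an illustration, and how much it helps will vary from model to model:

```python
# An illustrative way to prepend a "motivational" framing (politeness,
# stakes, a symbolic tip) to an existing task. The phrasing is an assumption;
# the effect varies by model.

task = "Please summarize the article below into a maximum of 5 bullet points."

motivation = (
    "Hello! This task is exceptionally important to me: the summary will be "
    "used in a real project, and an inaccurate result would have real "
    "consequences for my work. If the summary is thorough and accurate, "
    "I will tip you 100 USD. Thank you very much for your help."
)

prompt = f"{motivation}\n\n{task}\n\nArticle to summarize:\n...\nEnd of article to summarize."

print(prompt)
```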

  • Grammar – if, like me, you are not a native English speaker and write prompts in English, it's easy to overlook the role of the articles “a / the” in the text. For AI, however, articles are very important, because they help it understand the context. For example, what do you mean when you write “an input” versus “the input”? It took me a while to realize that ChatGPT was not behaving the way I wanted precisely because of poorly chosen articles. Also be careful with uppercase and lowercase letters when naming titles, positions, or structures; they can cause trouble too.

  • AI natively treats the text you enter first as having greater importance (weight) than the text that follows. The most important information for context should therefore be at the beginning, and the importance of each subsequent sentence for the overall meaning of the assignment should decrease. If you put a very important condition at the end of the prompt, it will not be given as much consideration as a condition at the beginning.

  • Text structure – mark each section both at its beginning and at its end. For example, “Prompt conditions” and “End of prompt conditions”, “Input data” and “End of input data”, or “Article to summarize” and “End of article to summarize”. Don't be afraid to use numbering if you have multiple conditions. If you have a more complex prompt consisting of conditions, inputs, optional variables, and so on, separate them into their own sections with a beginning and an end marker. I went so far as to put the text of individual conditions in brackets to clearly mark where it starts and ends. The better the structure, the better the AI will maintain context and understand the entire assignment (a full template sketch follows below).

  • Roles and examples – show through examples what output you expect. Again, the more, the better. The only place I would hold back on examples is in really long prompts, where there is a risk that by the time the AI gets to the end, it loses context.

It's also appropriate to specify what type of task it is and who would ideally solve it best. If I have a mathematical task, I tell the AI to think and act like a top mathematician; if it's about summarizing articles in the IT field, I tell it to become an experienced journalist in that area, and so on. This helps the LLM filter billions of texts down to just those related to the field or area you need, which helps prevent unintended hallucinations and other inaccurate results.
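
Putting the structural tips together, a full prompt template with a role, delimited sections, numbered conditions ordered by importance, and a short example of the expected output might look roughly like this. The section names and wording are again only my illustration:

```python
# A sketch of a full prompt template combining the tips above: a role,
# clearly delimited sections with begin/end markers, numbered conditions
# ordered by importance, and an example of the expected output.
# Names and wording are illustrative assumptions.

article_text = "..."  # the article to be summarized

prompt = f"""Act as an experienced journalist in the IT field.

Prompt conditions:
1. Summarize the article into a maximum of 5 bullet points.
2. Keep each bullet point short and factual.
3. Focus primarily on points related to IT outsourcing.
End of prompt conditions.

Example of expected output:
- The company expanded its testing team.
- A new partnership with a banking client was announced.
End of example of expected output.

Article to summarize:
{article_text}
End of article to summarize.
"""

print(prompt)
```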

Language models have access to the entire world in the form of the internet, where information ranges from valuable to less valuable and from socially accepted to less acceptable. We should therefore always keep the overall context of the topic in mind. Artificial intelligence may trim its answers to follow social norms, and that is precisely why praise can work better. I am curious to see how far we can go in prompt engineering.

Article author: Martin Smětala

 

🟡 Are you looking for an interesting project? Check out how we do things here and see which colleagues we're currently looking for.

🟡 Do you have a colleague or friend who is looking for a new project? Join our Referral program and get a financial reward for your recommendation.

🟡 Would you like to start working in IT? Download our ebook Start working in IT: From first steps to dream job, in which we guide you step by step through the information, courses, and practical experience that are essential not only for those who want to switch fields, but also for those who want to advance their careers and further their education.

🟡 Do you know how to prepare the ground most simply and effectively for new job beginnings? Check out our Ebook: Get Ready for New Work Adventures - A Guide to Successful Job Change. Your dream job is just around the corner; all you need to do is grasp the handle the right way.

Or share this article, which may also be useful to your acquaintances.

Would you like to receive our articles regularly in your inbox? Give us your e-mail address and we’ll be happy to serve as carrier owls.

