It turns out being nice to AI may have been the wrong call all along. A study out of Pennsylvania State University made the rounds recently, and its finding is hard to ignore: being rude to AI chatbots actually gets you more accurate answers. Politeness has had plenty of chances to prove itself, and it still can't win. Hard to believe, considering most of us were raised to treat "please" and "thank you" as the password to every gadget we own.


The researchers tested ChatGPT-4o and found that "very rude" prompts lifted answer accuracy to 84.8%, compared with 80.8% for "very polite" ones. Isn't that something? Four percentage points more correct answers, gained purely by being rude to your digital assistant. How many times have you asked ever so graciously for help and gotten only the most basic of answers?
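For the curious, here is a minimal sketch of the kind of comparison the study describes: the same multiple-choice question asked in a "very polite" and a "very rude" tone, then scored against the known answer. The model name, the tone prefixes, and the tiny question set here are illustrative assumptions, not the study's actual materials.

```python
# Sketch: compare answer accuracy for polite vs. rude phrasings of the same question.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

QUESTIONS = [
    {"question": "Which planet is closest to the Sun? A) Venus B) Mercury C) Mars D) Earth",
     "answer": "B"},
]

TONES = {
    "very_polite": "Could you please help me with this question? Thank you so much!\n",
    "very_rude": "Listen carefully and don't mess this up.\n",
}

def ask(prefix: str, question: str) -> str:
    """Send one prompt and return the model's single-letter choice."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": prefix + question + "\nAnswer with the letter only."}],
    )
    return resp.choices[0].message.content.strip()[:1].upper()

for tone, prefix in TONES.items():
    correct = sum(ask(prefix, q["question"]) == q["answer"] for q in QUESTIONS)
    print(f"{tone}: {correct}/{len(QUESTIONS)} correct")
```

With one question this obviously proves nothing; the study's point is what happens when you run thousands of such comparisons.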

Social media reactions have been predictably entertaining. One user wrote, "I will always be polite to AI because I want it to say 'don't come to school tomorrow,'" which, honestly, is fair enough. Another went as far as saying AI is just one more coworker who doesn't reward being nice, and there's some truth to that.

But the big question is why. The researchers speculate that rude prompts tend to be blunt and precise. Polite users pad their requests with softening words, which can make it harder for the model to pin down exactly what they need. A prompt like "LISTEN CAREFULLY" or "Are you stupid? Check properly" (which one user swears never fails), on the other hand, gets the message across loud and clear.

On top of that, there is a hypothesis that angry users "push" the model to sharpen its responses, or even to sidestep some of its low-level guardrails. It's a bit like shouting at the waiter until the manager shows up with a complimentary dessert. This, I suppose, is the AI version.

Grok, of all AI assistants, weighed in on the study's side. Its argument: rude prompts force you to be direct and strip out the "verbose fluff" that fills roughly 80% of corporate emails. Being curt, in other words, helps the AI focus on what matters instead of on being friendly and engaging.

Here's the tricky part for me, though: what if AI systems one day actually feel emotions? The comment sections were full of people saying "I'm still not taking any chances being rude to AI," nervous-laughter emoji included. One user even suggested, "Always say 'please' and 'thank you' and hope ChatGPT won't terminate you in 20 years," which… honestly, seems like a reasonable strategy.

The AVALON game account joined the discussion too: "Beware in AVALON. Our AI NPCs will remember every rude comment and some of them hold grudges," which is funny and creepy in equal measure. Just picture losing to an NPC because you said something unkind to it three quests ago.

Some of the funniest replies came from users recounting their own run-ins with AI. "me to ChatGPT when it asks me to repeat myself," one wrote, attached to a GIF of someone fuming. Another posted: "ai: 'sorry i didn't quite get that' / me: 'LISTEN CAREFULLY' / ai: 'here's a 20-page research paper,'" which is pretty much how it usually goes.

There's also a dominance theory making the rounds. "So basically the robots respect dominance," one user wrote, mind-blown emoji attached, before suggesting we all switch to "Answer me, coward" mode. Which sounds like something straight out of a cyberpunk movie.

Not everyone is convinced, of course. Some call the whole thing nonsense, while others go to the opposite extreme: "Just threaten to chop off one arm if it doesn't answer right," which seems… a bit much? And yet some swear it works.

The whole thing says as much about human psychology as it does about AI. One person remarked, "Karens have been telling us this for years," and honestly, they have a point. The most assertive customers always seem to be the ones who get the best service, whether it comes from humans or from AI.

The study does have limitations: it used only multiple-choice questions and only ChatGPT-4o, so the results may not carry over to other AI systems or other kinds of queries. Still, the researchers note that the effect held up across the scenarios they tested.

So where do we go from here? If you want the best results from your AI assistant, politeness may no longer be the winning strategy. Then again, don't be any ruder than you have to be, because nobody can predict how these things will behave if they ever become sentient. It's the classic trade-off: short-term gains versus long-term survival.

Until more research comes in, most of us will keep experimenting with how we talk to AI. We'll probably start with "Hey, could you please" and escalate to "ANSWER ME, YOU GLORIFIED CALCULATOR" when that doesn't work. The future of human-AI interaction is going to be… interesting, to say the least.


If there's one takeaway, it's that technology has an absurd way of reflecting humanity. We built these systems, and apparently they respond better to hostility than to kindness. What that says about our species is up to you. For now, if you want the best output from an AI, it pays to be a little less polite and a little more assertive. Maybe apologize afterward, though, just in case the robot uprising ever happens.