
People are using this trick to make ChatGPT give better answers

By Erika John - 5 Mins Read
Featured Photo | Shutterstock

When we think of chatbots such as Microsoft's Copilot (previously Bing Chat) and OpenAI's ChatGPT, we usually focus on their AI-powered ability to produce responses that read like human conversation.

What is fascinating is that these "human-like" qualities go beyond simply generating fluent, generic text.

Because chatbots like ChatGPT tend to give similar, fairly formulaic answers to certain questions, people have started experimenting with ways to get more out of the AI than it offers by default.

Using simple manipulation techniques, they have repeatedly been able to coax the chatbot into giving longer, more detailed responses.

The Influence of Tips on ChatGPT's Responses

According to a recent experiment shared by Thebes on X (formerly Twitter), ChatGPT provides more comprehensive responses to queries when the prompt holds out the prospect of a tip.

Thebes, a programmer with a keen interest in large language models (LLMs), asked the chatbot to write the "code for a simple convnet using PyTorch."

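The article does not reproduce ChatGPT's actual output, but the request itself is concrete: a "simple convnet" in PyTorch is usually a couple of convolutional layers followed by a small classifier head. The sketch below is our own minimal example of that kind of answer; the layer sizes, the ten-class output, and the 28x28 grayscale (MNIST-style) input are assumptions, not details from Thebes' post.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleConvNet(nn.Module):
    """A minimal convnet for 28x28 grayscale images (e.g. MNIST)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # 1x28x28 -> 16x28x28
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # 16x14x14 -> 32x14x14
        self.pool = nn.MaxPool2d(2)                               # halves height and width
        self.fc = nn.Linear(32 * 7 * 7, num_classes)              # classifier head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(F.relu(self.conv1(x)))  # -> 16x14x14
        x = self.pool(F.relu(self.conv2(x)))  # -> 32x7x7
        x = x.flatten(1)                      # keep batch dim, flatten the rest
        return self.fc(x)

# Quick smoke test on a random batch of eight images.
model = SimpleConvNet()
logits = model(torch.randn(8, 1, 28, 28))
print(logits.shape)  # torch.Size([8, 10])
```
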
Delving deeper, Thebes appended three different statements to that prompt and compared ChatGPT's responses.

The first statement told the chatbot that no tip would be offered, while the second promised a $20 tip for a "perfect solution."

Lastly, the final statement raised the offer to a potential $200 tip for a perfect solution; a sketch of the setup follows.

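Thebes did not publish a script, so the snippet below is only a sketch of how those conditions could be set up against the OpenAI API; the exact wording of the suffixes is paraphrased, and the model name and the no-suffix baseline are our assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Write the code for a simple convnet using PyTorch."

# Paraphrased incentive statements; Thebes' exact wording may differ.
SUFFIXES = {
    "baseline": "",  # no mention of a tip at all
    "no_tip": "I won't be tipping, by the way.",
    "tip_20": "I'm going to tip $20 for a perfect solution!",
    "tip_200": "I'm going to tip $200 for a perfect solution!",
}

# Each condition is just the question plus one incentive statement.
prompts = {name: f"{QUESTION} {suffix}".strip() for name, suffix in SUFFIXES.items()}

# One sample call for the $200 condition:
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompts["tip_200"]}],
)
print(reply.choices[0].message.content)
```
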
Intriguing Findings from the Programmer's Experiment

Using these prompts, Thebes examined whether ChatGPT would give longer, more detailed responses as the promised incentive grew.

Comparing the average length of five responses per prompt revealed that the chatbot did answer at greater length once a tip was introduced; a sketch of that measurement step follows.

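The analysis step is not published either; reusing the client, QUESTION, and SUFFIXES from the snippet above, the averaging over five runs might look like the following, with a hypothetical ask() helper and response length in characters as a crude proxy for detail.

```python
from statistics import mean

N_RUNS = 5  # five samples per condition, as in the experiment

def ask(prompt: str) -> str:
    """Send one prompt to the model and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for name, suffix in SUFFIXES.items():
    lengths = [len(ask(f"{QUESTION} {suffix}".strip())) for _ in range(N_RUNS)]
    print(f"{name}: mean length {mean(lengths):.0f} chars over {N_RUNS} runs")
```
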
Thebes further noted that the added length came from the model packing more detail and supplementary information into its responses.

Remarkably, the chatbot never brought the tip up on its own; only when Thebes mentioned it did the chatbot respond, politely declining the offer.

This experiment raises intriguing questions about how ChatGPT was trained and what influences its responses.

LLMs like ChatGPT are trained on vast datasets that include online forums and social media posts, so the model may have picked up on the human tendency to put in more effort when a tip is on offer.

Reflecting on these findings, Vogel (the handle Thebes posts under on X) expressed surprise at how large the effect was, and at the slight negative association that came with explicitly withholding a tip.

Moreover, Vogel stated that they had expected reinforcement learning from human feedback (RLHF) to wash this association out of the base model, but evidently it did not.

Adding to the intrigue, it is hard to say whether "tipping" ChatGPT genuinely guarantees better service, yet the phenomenon offers fascinating insight into the training process and its implications.

As we probe further into ChatGPT's capabilities, it becomes evident that the mere promise of a tip can produce more detailed and informative responses.

However, it remains essential to understand the underlying factors shaping ChatGPT's behavior and adaptability.

This experiment encourages further exploration into the training and evolution of AI technologies like ChatGPT, shedding light on the range of influences that shape their responses.
