Supercharge Your ChatGPT: 5 Levels to God-Tier Prompting

One of the toughest parts of using language models is getting them to do the tasks you want, exactly the way you want them done.

Victor Timi
Level Up Coding


In the field of prompt engineering, the way we instruct language models significantly influences the quality of their responses. Despite their training, language models still lack a complete understanding of human intent. Here’s an example: I asked ChatGPT to provide a positive statement about ‘Victoria,’ and instead of complimenting the name, as I intended, it described a beautiful region of Australia.

This challenge highlights the need to make our prompts more precise so that chatbots interpret our instructions accurately. Researchers have studied prompting techniques extensively to ensure that the responses language models generate align with our intended goals.

This article covers the following methods: Zero-shot Prompting, Few-shot Prompting, Chain of Thought Prompting, Generative Knowledge Prompting (Iterative Prompting), and Tree of Thought Prompting.

Zero-shot Prompting

Zero-shot prompting is likely the most common way users interact with ChatGPT. It’s when you pose questions without providing specific examples to guide the chatbot in formulating responses. For example:

“Write a YouTube script for the Top 10 places to visit for a Christmas holiday.”
“Send a pun-filled happy birthday message to my friend Alex.”
“Explain PHP in a humorous manner.” and more.

One downside of zero-shot prompting is that, without additional context, the chatbot may not produce a polished result; in such situations, you’re likely to get a generic or off-target response.
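If you prefer to experiment programmatically rather than in the ChatGPT interface, here is a minimal zero-shot sketch. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name is purely illustrative and not something this article prescribes.

```python
# Minimal zero-shot sketch: the request carries only the bare task,
# with no examples or extra context.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice; any chat model works
    messages=[
        {
            "role": "user",
            "content": "Write a YouTube script for the top 10 places "
                       "to visit for a Christmas holiday.",
        }
    ],
)
print(response.choices[0].message.content)
```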

Few-shot Prompting

If I were to request ChatGPT to generate a YouTube script for “The top 10 Must-Visit Destinations for a Christmas Holiday,” without providing context or examples, it’s likely to produce a less accurate response due to the absence of specific guidance. While you will receive a response, it may not be as effective as when you incorporate examples or context into your prompt.

That’s why the TCR (Task, Context, Response Format) formula is a reliable way to apply Few-shot prompting. It goes like this:

Example Prompt:

[Task] Create a captivating YouTube video script that highlights the top 10 enchanting Christmas destinations worldwide.

[Context] The script should emphasize unique Christmas events, heartwarming Christmas Carols, and interactions with friendly locals in each location, all set against the backdrop of stunning snowy landscapes.

[Response Format] Ensure that your response is structured as a top 10 list, maintaining a friendly, inviting, and festive tone throughout the script to resonate with the Christmas spirit.

Few-shot prompting not only states the task you want the language model to perform but also supplies context, examples, and even a response format, guiding the model toward precisely the response you want.

Here’s a breakdown of the formula:

  • Task: Begin your prompt with an action verb like “Highlight,” “Explain,” “Generate,” “Provide,” and so on.
  • Context: Include sample scenarios, examples, guides, and patterns to help ChatGPT better understand the type of response you’re aiming for.
  • Response Format: Specify a Tone (the manner in which you want ChatGPT to express itself), and Format (precisely how you want the response to be structured).
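To make the formula easy to reuse, here is a small sketch that assembles the three TCR parts into a single prompt string. The helper name and the exact section labels are my own illustration, not a fixed convention.

```python
# A sketch of the TCR formula as a reusable prompt builder.
# The function name and bracketed section labels are illustrative.
def build_tcr_prompt(task: str, context: str, response_format: str) -> str:
    """Assemble one prompt string from the Task, Context, and Response Format parts."""
    return (
        f"[Task] {task}\n\n"
        f"[Context] {context}\n\n"
        f"[Response Format] {response_format}"
    )


prompt = build_tcr_prompt(
    task="Create a captivating YouTube video script that highlights the "
         "top 10 enchanting Christmas destinations worldwide.",
    context="Emphasize unique Christmas events, heartwarming Christmas carols, "
            "and interactions with friendly locals, all set against stunning "
            "snowy landscapes.",
    response_format="Structure the response as a top 10 list with a friendly, "
                    "inviting, and festive tone.",
)
print(prompt)  # send this string as the user message, as in the zero-shot sketch
```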

Chain of Thought Prompting

While Few-shot Prompting is generally recommended for achieving highly accurate responses from ChatGPT, it may not be the ideal choice for every prompting scenario.

Illustrative Case Study

A compelling case study appears in the paper “Large Language Models are Zero-Shot Reasoners” from researchers at the University of Tokyo. In this study, GPT was presented with the following question:

A juggler can juggle 16 balls. Half of the balls are golf balls and half of the golf balls are blue. How many blue golf balls are there?

In its response, GPT gave the answer “8,” which is plainly incorrect once you break the problem down step by step.

However, by chaining the model’s thought process, that is, adding the prefix “Let’s think step by step” before the main prompt so that it reads “Let’s think step by step. A juggler can juggle 16 balls…”, we ask the chatbot to document its reasoning. With this prefix, the model responded:

“There are 16 balls in total. Half of the balls are golf balls. That means that there are 8 golf balls. Half of the golf balls are blue. That means that there are 4 blue golf balls.” (Correct!)

It’s worth noting that newer models, GPT-3.5 and beyond, can often handle this kind of reasoning without a “let’s think step by step” prefix. However, chain of thought prompting remains a valuable tool for several reasons.

  • It encourages the model to provide a more detailed and transparent response, explaining its actions and reasoning process. This aids in better understanding how the model arrived at its answer, a critical aspect of Explainable AI (XAI).
  • Chain of thought prompting can enhance the quality of the model’s response by prompting it to consider alternative perspectives.

By combining Few-shot prompting with Chain of Thought, you can elevate your GPT’s responses to an exceptional level. Providing the model with additional context, equations to follow, and guidance helps it better comprehend the task at hand, resulting in impressive responses.
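To see the effect for yourself, here is a sketch that sends the juggler question twice, once as-is and once with the “Let’s think step by step” prefix, so you can compare the answers. The SDK usage and model name are the same assumptions as in the earlier sketches.

```python
# Zero-shot chain-of-thought sketch: the same question with and without
# the "Let's think step by step" prefix.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls and "
    "half of the golf balls are blue. How many blue golf balls are there?"
)


def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(ask(QUESTION))                                 # plain zero-shot
print(ask("Let's think step by step. " + QUESTION))  # chain of thought
```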

Generative Knowledge Prompting (Iterative Prompting)

Generative Knowledge Prompting is a methodology that involves guiding the model with step-by-step demonstrations or instructive input and directing it to produce responses tailored to specific problems or tasks.

For Instance:

When you ask your language model to design a dashboard website, it can certainly generate one. For the best results, however, guide the model incrementally, shaping the dashboard step by step from its initial framework. This approach will help you achieve the comprehensive dashboard you have in mind.

For example, you can start with a prompt like this:

‘Incorporate ReactJS to design a responsive grid layout with three columns and two rows. Ensure that each grid cell is of equal size. Describe the necessary code or steps to create this grid layout.’

Once the grid layout is created, you can build on the generated output with additional prompts, such as asking the model to add black borders to each cell, insert the text ‘Like, Comment, and Subscribe to my Email Newsletter’ into the last cell of the grid, and so on, until the dashboard fully matches the design you have in mind.

The whole process of Generative Knowledge Prompting revolves around two things:

  • Generating Knowledge: Using few-shot demonstrations to elicit question-related knowledge statements from a language model.
  • Knowledge Integration: Once the knowledge statements have been generated, the model incorporates this newfound knowledge into its responses when addressing the original question. This integration allows the model to offer more informed and contextually relevant answers.

In essence, Generative Knowledge Prompting follows a strategy of simplifying the task initially and then leveraging this simplified context to address a more complex task.
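As a concrete illustration of those two steps, here is a sketch that first asks the model for knowledge statements about the dashboard task and then feeds those statements back in when requesting the final answer. The prompts, helper name, model choice, and SDK usage are my assumptions.

```python
# Generative knowledge sketch: generate knowledge statements first,
# then integrate them into the prompt for the final answer.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def complete(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


task = (
    "Incorporate ReactJS to design a responsive grid layout with three "
    "columns and two rows, where each grid cell is of equal size."
)

# Step 1: Generating knowledge
knowledge = complete(
    "List the key CSS techniques and React patterns relevant to this task, "
    f"one per line:\n{task}"
)

# Step 2: Knowledge integration
answer = complete(
    f"Using the following knowledge:\n{knowledge}\n\n"
    f"Now describe the code or steps to complete the task:\n{task}"
)
print(answer)
```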

Tree of Thought Prompting

The concept of the ‘tree of thought’ operates much like human problem-solving. Humans tend to approach a problem by considering several potential solutions, evaluating them one by one, and ultimately selecting the most suitable one.

Basically, the approach entails instructing the Language Model to initiate a brainstorming process for generating multiple potential solutions while considering key factors influencing each solution’s success. Subsequently, you’ll evaluate these solutions, eliminate the least favorable ones, and implement the winning solution.

THE TREE OF THOUGHT METHOD:

Crafting your Persona

Before you begin, it’s important to establish a persona for the chatbot. This step enables the chatbot to reason more effectively and deliver responses that are better suited to your needs.

EXAMPLE PROMPT:

Scenario: How to tell your boss to offer you a promotion
[THE PERSONA]
Prompt:
Act as a Psychologist and HR Expert. Your task is to assist the user with difficult problems. Acknowledge this by answering “YES”.
(The choice of a Psychologist and an HR Expert is based on their expertise in helping me persuade my boss to consider me for a promotion.)

After the Language Model acknowledges your prompt, you can progress through the following phases: Brainstorming, Evaluation, Expansion, and finally, the Decision phase.

The Tree of Thought process encompasses 4 key PHASES:

  • BRAINSTORMING PHASE: The primary objective of this phase is to generate a diverse set of solutions for a given problem.
    Prompt Format: “I’m facing an issue related to [Describe the Problem Area]. Could you brainstorm [Specify the desired number, e.g., 3] distinct solutions? Please take into consideration various factors, such as [Include your relevant factors].”

Prompt: Hello, I’m facing an issue — how can I approach my boss about a promotion? Here’s some background: I’m a 25-year-old male, somewhat shy, and don’t like conflict. I’ve been a Junior Developer for three years, and during that time, I’ve honed my skills in various tech stacks like Python and Java, improved my problem-solving abilities, and become highly teachable. I believe that a promotion would not only recognize my growth but also result in increased compensation, given the expanded responsibilities. I’m attached to the company’s culture and don’t want to seek opportunities elsewhere. Could you please brainstorm three distinctive solutions for this situation? Consider factors such as my background, timing, preparation, professionalism, and value proposition.

  • EVALUATION PHASE: Once the model has responded to your Brainstorm prompt, you should evaluate the options it has provided. The objective here is to impartially assess the feasibility and potential success of each option.

Prompt: Evaluate the potential of each of the three proposed solutions. Take into account their advantages and disadvantages, the initial effort required, implementation complexity, potential challenges, and expected outcomes. Based on these factors, assign a probability of success and a confidence level to each option.

  • EXPANSION PHASE: In this phase, you’ll engage in a thorough exploration of each idea, refining and mapping out its practical implementation in a real-world context.

Prompt: For each solution, delve deeper into the thought process. Create potential scenarios, outline strategies for implementation, identify necessary partnerships or resources, and propose solutions for potential obstacles. Additionally, consider any possible or unforeseen outcomes and develop strategies for addressing them.

  • DECISION PHASE: Finally, drawing on the evaluations and scenarios generated during exploration, the AI ranks each solution according to its potential.

Prompt: Based on the evaluations and scenarios, please rank the solutions in order of their potential. Additionally, provide a rationale for each ranking and offer any concluding remarks or considerations for each solution.
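If you want to run the whole method programmatically, here is a sketch that sets the persona as a system message and then walks the four phases as one multi-turn conversation, so each phase builds on the replies to the previous ones. The phase prompts condense the examples above; the SDK usage and model name are, as before, my assumptions.

```python
# Tree of Thought sketch: persona + the four phases as one running conversation.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": "Act as a Psychologist and HR Expert. Your task is to "
                   "assist the user with difficult problems.",
    }
]

PHASES = [
    # Brainstorming
    "I'm facing an issue: how can I approach my boss about a promotion? "
    "Brainstorm 3 distinct solutions, considering my background, timing, "
    "preparation, professionalism, and value proposition.",
    # Evaluation
    "Evaluate the potential of each proposed solution: advantages, "
    "disadvantages, effort, complexity, challenges, and expected outcome. "
    "Assign a probability of success and a confidence level to each.",
    # Expansion
    "For each solution, map out implementation scenarios, needed resources, "
    "possible obstacles, and strategies for handling them.",
    # Decision
    "Rank the solutions by potential, with a rationale for each ranking and "
    "any concluding remarks.",
]

for phase_prompt in PHASES:
    messages.append({"role": "user", "content": phase_prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})  # keep the context
    print(reply)
    print("-" * 40)
```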

Conclusion

While this article has delved into some effective prompting methods for language models like ChatGPT, it’s essential to acknowledge that there are numerous other advanced techniques, including ReAct (Reasoning and Action), RAG (Retrieval Augmented Generation), Self Consistency, and more, which hold significant promise for further enhancing model capabilities. These methods can be particularly potent when integrated into applications, especially when combined with programming languages like Python or JavaScript and supported by prompt engineering frameworks like Langchain.

Looking to delve deeper into the world of Prompt Engineering and explore a variety of Prompting Methods? Check out these valuable resources:

https://learnprompting.org/docs/intro
https://www.promptingguide.ai/
https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf
https://arxiv.org/pdf/2110.08387.pdf
https://arxiv.org/pdf/2210.03629.pdf

See You Soon…

