What is Prompt Engineering?

AI tools such as ChatGPT and Bard have made tasks like data generation, code generation and completion, and text summarization far more efficient. But how do you get these tools to give good answers to almost any question you ask them? A large part of the answer is prompt engineering.

What is prompt engineering?

Prompt engineering is the practice of crafting the prompts supplied to large language models (LLMs) so that the responses they return are more accurate and relevant. The technique relies on deliberate choices such as providing detailed instructions, context, and input data to the language model.

Prompting techniques

Several prompting techniques exist, such as zero-shot, few-shot, and chain-of-thought prompting. Each of these techniques has unique features that set it apart.

Zero-shot technique

With the zero-shot technique, the model responds to a prompt that contains no examples or additional context, relying only on knowledge acquired during training. For example, the prompt below is supplied to ChatGPT:

Classify the language into compiled or interpreted

Language: Python

ChatGPT responds:

Interpreted

The prompt above gives no further context on what programming languages are or what compiled and interpreted mean in this scenario, yet the model returns the correct answer. Because LLMs are trained on large datasets, they can match a prompt like this against patterns seen during training.
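For developers who want to try the same zero-shot prompt programmatically rather than in the ChatGPT interface, here is a minimal sketch in Go that sends it to OpenAI's Chat Completions API. The model name and the stripped-down error handling are assumptions made for illustration; they are not part of the original example.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

func main() {
    // The same zero-shot prompt: a task and an input, but no examples.
    prompt := "Classify the language into compiled or interpreted\n\nLanguage: Python"

    // Request body for the Chat Completions endpoint; the model name is an assumption.
    payload, _ := json.Marshal(map[string]any{
        "model": "gpt-3.5-turbo",
        "messages": []map[string]string{
            {"role": "user", "content": prompt},
        },
    })

    req, _ := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(payload))
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // Decode only the field we care about: the model's reply.
    var result struct {
        Choices []struct {
            Message struct {
                Content string `json:"content"`
            } `json:"message"`
        } `json:"choices"`
    }
    json.NewDecoder(resp.Body).Decode(&result)

    if len(result.Choices) > 0 {
        fmt.Println(result.Choices[0].Message.Content) // Expected: Interpreted
    }
}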

However, this does not hold for every prompt: responses to zero-shot prompts can lack the detail or accuracy you need. In such cases, the few-shot technique should be applied.

Few-shot technique

In this technique, the prompt engineer supplies the prompt with more context, typically one or more examples of the desired output, to help the model return a proper response. An example of the few-shot technique is providing the model with a sample summarized paragraph before giving it an essay to summarize. For example, we provided ChatGPT with a sample summary and then asked it to summarize the blog post 3 Design Skills You Should Look For In Front-End Candidates. The summary it returned is:

The article emphasizes the importance of design skills for front-end engineers and provides recommendations on how to find engineers who can create visually appealing and user-friendly applications. It highlights three crucial skill sets to prioritize during interviews:

1. Knowledge of semantic layout: Understanding and using semantic HTML elements to enhance accessibility and facilitate code maintenance.

2. Attention to detail: Focusing on small design elements that contribute to a friendly and accessible user interface.

3. Proficiency in using design libraries: Being adept at utilizing popular design libraries like Bootstrap, Material UI, and Tailwind CSS to avoid reinventing existing features.

The essay also acknowledges that while these skills are essential, candidates without all three can still be considered based on their other strengths, adjacent skills, ability to learn quickly, and interest in improving their design skills. Overall, the essay emphasizes the significance of design in front-end development and encourages a thorough assessment of candidates' design abilities during the hiring process.
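To make the structure of a few-shot prompt concrete, here is a small sketch in Go of how the sample summary and the article to be summarized could be arranged as chat messages. The placeholder text in angle brackets is invented for illustration; the worked user/assistant pair shows the model the kind of summary we expect before the real request arrives, and the whole list would be sent to a chat endpoint such as the one in the earlier sketch.

package main

import "fmt"

// message mirrors the role/content pairs accepted by chat-style LLM APIs.
type message struct {
    Role    string
    Content string
}

func main() {
    // Placeholders stand in for a real sample paragraph, sample summary, and article.
    fewShotPrompt := []message{
        {"system", "You summarize articles in a single short paragraph."},
        // One worked example: an input paragraph and the summary we expect for it.
        {"user", "Summarize this paragraph:\n\n<sample paragraph>"},
        {"assistant", "<sample summary written in the style we want>"},
        // The real request, which the model now answers in the demonstrated style.
        {"user", "Summarize this article:\n\n<full text of the article to summarize>"},
    }

    for _, m := range fewShotPrompt {
        fmt.Printf("%s: %s\n", m.Role, m.Content)
    }
}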

Chain-of-thought technique

As the name implies, with this technique the prompt engineer walks the model through the reasoning behind a query, supplying instructions and intermediate steps that create a pattern for the model to follow before it responds. An example is providing worked arithmetic operations, including their intermediate steps, before asking the model for the answer to a new arithmetic operation (a sketch of such a prompt follows the example below). Another example of this prompting technique is:

Provide me some detailed questions containing equal parts of fix the code and theoretical questions and answers for a junior Golang developer. These questions should cover the basics of Golang. 

Use the format below: 

Question: 

Answer:

Here’s an example:
Question: What is the difference between a slice and an array?

Answer: Arrays have a fixed size and are less flexible, while slices provide dynamic sizing and are more versatile for working with collections of elements.
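The arithmetic example mentioned above can be sketched as follows. The prompt text is invented for illustration: the worked example spells out its intermediate steps, which encourages the model to reason through the new problem the same way rather than jumping straight to an answer.

package main

import "fmt"

func main() {
    // A chain-of-thought prompt: the worked example shows the reasoning steps,
    // not just the final answer, before posing a new problem.
    prompt := `Q: A shop sells pens at 3 dollars each. How much do 4 pens cost?
A: Each pen costs 3 dollars, so 4 pens cost 4 * 3 = 12 dollars. The answer is 12.

Q: A ticket costs 7 dollars. How much do 5 tickets cost?
A:`

    fmt.Println(prompt)
}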

There are other techniques a prompt engineer can employ to get accurate or near-accurate answers from AI models.

Applications of prompt engineering

Prompt engineering finds application in areas such as data generation, code generation, and technical interviews.

The most popular use case for prompt engineering is generating data such as essays, answers to questions, etc. This application plays a crucial role in the AI industry, as language models can be prompted to generate text based on the inputs given. For example, ChatGPT can recite the Zen of Python:

The Zen of Python is a collection of guiding principles and aphorisms that serve as a philosophy for writing computer programs in the Python language. It encapsulates the core values and best practices that Python developers strive to adhere to. Here is the Zen of Python:

Beautiful is better than ugly.

Explicit is better than implicit.

Simple is better than complex.

Complex is better than complicated.

Flat is better than nested.

Sparse is better than dense.

Readability counts.

Special cases aren't special enough to break the rules.

Although practicality beats purity.

Errors should never pass silently.

Unless explicitly silenced.

In the face of ambiguity, refuse the temptation to guess.

There should be one—and preferably only one—obvious way to do it.

Although that way may not be obvious at first unless you're Dutch.

Now is better than never.

Although never is often better than *right* now.

If the implementation is hard to explain, it's a bad idea.

If the implementation is easy to explain, it may be a good idea.

Namespaces are one honking great idea—let's do more of those!

These guiding principles encourage Python developers to prioritize code readability, simplicity, and maintainability. The Zen of Python emphasizes the importance of clear and explicit code over clever or convoluted solutions. It promotes the idea that code should be easy to understand and follow, even at the expense of brevity. By adhering to these principles, Python developers can write code that is more efficient, easier to debug, and more enjoyable to work with.

ChatGPT’s response to our prompt adds introductory and closing paragraphs explaining what the Zen of Python is about.

Code generation is a close second, as developers use LLMs to produce working code samples from the context they provide. Prompt engineering is crucial to this task, as the language model needs adequate context and instructions to return a proper response. For example, we can ask ChatGPT to generate a function that says my name:

package main

import "fmt"

// sayMyName prints a greeting that includes the supplied name.
func sayMyName(name string) {
    fmt.Println("Hello, my name is", name)
}

func main() {
    myName := "Coderpad"
    sayMyName(myName) // Prints out: Hello, my name is Coderpad
}

Shortfalls

The most significant shortfall of prompt engineering is that language models still return inaccurate responses at times. These models generate answers based on the datasets they were trained on and the inputs supplied to them, so responses from AI tools like ChatGPT and Bard must always be reviewed.

Other shortfalls of prompt engineering are the time it takes to refine a prompt until it returns accurate results, and the limited knowledge base of the language model. For example, ChatGPT models available at the time of writing are trained on data up to 2021 and therefore cannot provide accurate answers about events after that date. Within a single conversation, however, the model can reuse context from earlier prompts: a clause such as “Using the previous criteria” in a new prompt instructs the model to return results based on the criteria defined in earlier prompts.
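As a rough illustration of how that works, the sketch below (with invented placeholder content) shows a conversation history in which a follow-up prompt begins with “Using the previous criteria”. Chat-style APIs receive the whole history with every request, which is what lets the model apply criteria defined in an earlier turn.

package main

import "fmt"

// message mirrors the role/content pairs sent to chat-style LLM APIs.
type message struct {
    Role    string
    Content string
}

func main() {
    conversation := []message{
        // First turn: the criteria are defined here.
        {"user", "Provide junior Golang interview questions. Criteria: half fix-the-code, half theory, each with an answer."},
        {"assistant", "<the model's first set of questions and answers>"},
        // Follow-up turn: the new prompt refers back to the criteria above.
        {"user", "Using the previous criteria, generate five more questions about goroutines."},
    }

    // The entire slice, not just the latest prompt, is sent with each request.
    for _, m := range conversation {
        fmt.Printf("%s: %s\n", m.Role, m.Content)
    }
}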

Conclusion

Prompt engineering plays a crucial role in improving the quality of responses from AI models such as ChatGPT and Bard. By providing more context in their prompts, prompt engineers can guide language models to generate precise and relevant responses. This can be achieved via the various techniques covered in this article.

The applications of prompt engineering are diverse, with data generation and code generation being prominent examples. Prompt engineering enables the generation of essays, answers to questions, and other text-based content, providing valuable assistance in various domains. Moreover, developers can utilize prompt engineering to generate functional code snippets based on specific requirements and context.

However, it is important to acknowledge the limitations of prompt engineering. Language models are not infallible and may occasionally provide inaccurate responses. Therefore, it is crucial to review and validate the outputs generated by AI tools to ensure their reliability.