Prompt engineering? I already know how to Google real well.
Let’s talk about why prompt engineering is more than structuring questions. Through prompt engineering, you can turn large language models (LLMs), like GPT-4, into fully customizable functions within your app.
You have been seeing loads of people talk about all that they can do with ChatGPT, GPT-4, etc. Some are creating fictional novels with just a few clever questions. Others are automating their marketing copy and seeing impressive results. You, well, you are sitting there thinking, “What’s the big deal?” Anyone could ask these types of questions. Yes, the tech is cool (and you’ll admit maybe even a little mind-blowing at times 🤯), but you kind of laugh when people talk about prompt engineering being the next big career.
You have put in your 10,000 hours Googling things over the years. And yes, there is an art and science to getting the right results from a Google search. You did get pretttty, prettty good at it (in your best Larry David voice). But are there whole career tracks dedicated to being a great Google searcher? I didn’t think so.
And therein lies the problem. A lot of the initial use cases making waves around the internet are just people searching for answers: people using Bing to search for things conversationally, other folks using AI copilots to get help writing their code instead of hunting for answers on Stack Overflow. Yes, these are LLMs trained on an ungodly amount of information, so with all of that information they probably should have some answers to your questions.
But that wasn’t really the point. Just like you didn’t go through years of education just to recite all of the information from your textbooks, LLMs weren’t trained on so much data just to create the world’s largest information source. The point was to make sense of the world, to ‘understand’ meaning, and then to put that knowledge to work, a job perhaps.
The job right now for LLMs is actually something similar to autocomplete on your phone. You provide a text prompt, and the model comes up with a predicted completion of that text. And having been trained on so much text across so many scenarios, it’s getting pretty good at it.
Naturally, if you type a question, it is pretty reasonable that the predicted completion will be a related answer. But there are actually lots of other things you can do once you understand how LLMs work. And this is the part where people are excited about the possibilities of prompt engineering.
LLMs are, in essence, fully customizable APIs. You can provide them with any type of input, give them instructions on what to do with that input, and then tell them exactly how to format the response. And then you can connect them into larger systems.
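To make that concrete, here is a minimal sketch of what that looks like in Python (assuming OpenAI’s chat completions API from their Python library; the llm helper is just an illustrative name of my own):

import openai

def llm(prompt: str) -> str:
    """Treat the LLM as a plain function: text in, text out."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # any chat-capable model would work here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep completions predictable for downstream code
    )
    return response["choices"][0]["message"]["content"]

Once it’s wrapped like this, the model is just another function your app can call.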
I’ll give a simple example. Say I wanted to create an app that analyzes customer experience surveys to tell me what is driving positive and negative reactions to a product or service. The first step is taking individual reviews, extracting keywords, and then determining whether each keyword carries a positive or negative connotation within the review.
You’d structure your prompt with instructions on what to do, leave a slot where the text to analyze goes, and then give specific instructions on how to format the results. Because the output would be consumed downstream by other parts of the app, you’d want it in JSON format.
Here is an example of a prompt that would do just that:
1. Summarize the following text.
2. Extract keywords from the summary text you create (not the original text). Don't repeat keywords that mean the same thing.
3. Provide sentiment (positive, negative, or neutral) for each keyword.

{TEXT_TO_ANALYZE}

Format the text in the following JSON format:

{
  "summary": "summarytext",
  "keywords": [
    {
      "keyword": "keywordtext",
      "sentiment": "sentimenttext"
    },
    {
      "keyword": "keywordtext",
      "sentiment": "sentimenttext"
    }
  ]
}
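To actually run that prompt inside the app, you’d fill the {TEXT_TO_ANALYZE} slot, call the model, and parse what comes back. Here’s a sketch that reuses the llm helper from earlier (PROMPT_TEMPLATE and analyze_review are illustrative names, not from any library):

import json

PROMPT_TEMPLATE = """\
1. Summarize the following text.
2. Extract keywords from the summary text you create (not the original text). Don't repeat keywords that mean the same thing.
3. Provide sentiment (positive, negative, or neutral) for each keyword.

{text_to_analyze}

Format the text in the following JSON format:
{{"summary": "...", "keywords": [{{"keyword": "...", "sentiment": "..."}}]}}
"""

def analyze_review(review: str) -> dict:
    # Drop the review into the template, call the model, parse the JSON reply
    prompt = PROMPT_TEMPLATE.format(text_to_analyze=review)
    return json.loads(llm(prompt))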
And below is this prompt in action as part of a quick UI where a review for Singapore’s Marina Bay Sands hotel is analyzed. 🏝️
So now the LLM is not just something to chat with or get information back from. Instead it is a functional part of a larger system that can complete fairly complex, targeted tasks.
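For example, once each review comes back as structured data, plain old code can aggregate across thousands of reviews. A rough sketch (again, the names are my own):

from collections import Counter

def tally_drivers(reviews: list[str]) -> tuple[Counter, Counter]:
    # Count how often each keyword appears with positive or negative sentiment
    positives, negatives = Counter(), Counter()
    for review in reviews:
        for item in analyze_review(review)["keywords"]:
            if item["sentiment"] == "positive":
                positives[item["keyword"]] += 1
            elif item["sentiment"] == "negative":
                negatives[item["keyword"]] += 1
    return positives, negatives

Then positives.most_common(5) tells you the top drivers of positive reactions, no chat window required.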
As with other programming languages, prompt engineering is about how you define instructions, pull in the right information, and ultimately get results that fit within a much larger system.
We are at the beginning of how creative people will integrate and build new capabilities around LLMs. Is prompt engineering its own career field, though? That I am less sure about. It seems more like a new language and capability within the field of software engineering. 🙂