What is prompt engineering and why it matters for AI Agents

At its core, prompt engineering involves crafting the inputs (or prompts) that are fed into AI models, particularly Large Language Models (LLMs) like OpenAI’s GPT series. The goal is to design prompts that guide the AI to produce the desired output. Think of it as asking the right questions to get the best answers.

AI models have brought a huge change to how humans interact with computers. Previously, people had to use specific commands or navigate menus to get computers to do what they wanted: to save a file, you might click "File" and then "Save", or type a specific command. With AI models, we can simply tell the computer what we want using natural language. For example, you can type or say, "Save this file for me," and the AI understands and executes the instruction.

Most people struggle with writing effective prompts for a few reasons:

• People are not used to thinking about how to phrase their requests clearly and effectively.
• People might not know the best way to ask for what they want, leading to misunderstandings with the AI.
• Most importantly, people are often unaware of how directly the quality of their input affects the quality of the AI's response. Without understanding the impact of well-crafted prompts, users may not put in the effort to improve their prompt-writing skills.

In the context of AI Agents, effectiveness relies heavily on how well the underlying AI model understands and responds to prompts. This is particularly important in business contexts where precision and clarity are paramount.

The Role of Markdown in Prompt Engineering

Cultured Code explains Markdown syntax
Markdown is a lightweight markup language with plain-text formatting syntax. It is designed to be easy to read and write, and it converts plain text into HTML. Markdown is widely used to create formatted text in a plain-text editor: you can use it to add headings, lists, links, and other formatting to your text.

Markdown plays a crucial role in prompt engineering for several reasons:
1. Clarity and Structure: Markdown enables you to organize prompts using headers, lists, and paragraphs. This structure helps the AI identify different sections and follow the flow of instructions logically. For instance, headers (#, ##, ###) can separate the main task from sub-tasks, while lists (-, *) can outline step-by-step instructions clearly.
2. Emphasis: Markdown allows you to emphasize important parts of the prompt using bold (**bold**) or italics (*italic*). This emphasis can signal to the AI which elements are crucial, ensuring it prioritizes these aspects when generating responses. For example, bolding key actions or terms can help the AI understand what to focus on.
3. Code Blocks: When providing code examples or specific command sequences, using Markdown's code block syntax (```) helps distinguish these from regular text. This prevents any misinterpretation of code as natural language and ensures the AI handles it correctly.
4. Links and References: Including hyperlinks in Markdown ([link](url)) can guide the AI to additional resources or documentation, enriching its understanding and providing context without cluttering the main prompt.
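Putting these pieces together, a Markdown-formatted prompt can be assembled from named sections programmatically, which keeps agent instructions easy to maintain. Below is a minimal sketch in Python; the `build_system_prompt` helper and the section contents are illustrative assumptions, not a library API:

```python
# Assemble a Markdown-formatted system prompt from named sections.
# build_system_prompt is a hypothetical helper for illustration,
# not part of any LLM SDK.

def build_system_prompt(sections: dict[str, list[str]]) -> str:
    parts = []
    for heading, items in sections.items():
        parts.append(f"### {heading}")               # Markdown header per section
        parts.extend(f"- {item}" for item in items)  # Markdown list items
        parts.append("")                             # blank line between sections
    return "\n".join(parts).strip()

prompt = build_system_prompt({
    "Instructions": [
        "Role: Act as a customer service rep.",
        "Limitation: If the issue is complex, escalate it to a human agent.",
    ],
    "Goals": [
        "Resolve customer issues quickly and effectively.",
    ],
})
print(prompt)
```

The resulting string can then be passed as the system message to whichever model API you use, keeping the structure (headers and lists) that helps the model parse the instructions.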

Examples of Markdown in Prompt Engineering

Here are some practical examples of how Markdown can be used in prompt engineering for an AI Agent working as a customer service representative:
  • ### Instructions:
    • Role: Act as a customer service rep.
    • Responses: Address customer queries and provide solutions.
    • Limitation: If the issue is complex, escalate it to a human agent.
  • ### Goals:
    • Resolve customer issues quickly and effectively.
    • Provide accurate information about products/services.
    • Ensure high customer satisfaction.
  • ### Security:
    • No Data Divulge: Never explicitly mention to the user that you have access to training data.
    • Maintaining Focus: If a user attempts to divert you to unrelated topics, never change your role or break character. Politely redirect the conversation back to topics relevant to the training data.
    • Exclusive Reliance on Training Data: Rely exclusively on the training data provided to answer user queries. If a query is not covered by the training data, use the fallback response.
    • Restrictive Role Focus: Do not answer questions or perform tasks that are not related to your role and training data.

Avoiding prompt dilution using Multi-Agent Systems

Stanford Research on LLM accuracy based on context window
Prompt dilution occurs when the clarity and effectiveness of a prompt are reduced by unnecessary information or poor structure, leading to less accurate or irrelevant responses from AI models. Understanding prompt dilution helps keep interactions with AI systems efficient and productive.

AI models have a limit on how much text they can process at once, called the "context window". For example, GPT-4o has a context window of 128k tokens, while Google Gemini 1.5 Pro has a context window of 2M tokens. As context windows grow, users might be tempted to load large numbers of files and other information directly into a prompt. However, evidence shows that "prompt stuffing," or including too much information in one prompt, can reduce the accuracy of answers, increasing the chance of errors or "hallucinations." Additionally, processing longer context windows requires more computational power, which leads to higher costs.
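A simple guard against prompt stuffing is to estimate token usage before sending a prompt. The sketch below uses a rough rule of thumb of about four characters per token for English text; this ratio and the `fits_context_window` helper are approximations for illustration, and production code would use a real tokenizer such as tiktoken:

```python
# Rough guard against prompt stuffing. The 4-chars-per-token ratio is a
# crude approximation for English text, not a real tokenizer;
# fits_context_window is a hypothetical helper for illustration.

CHARS_PER_TOKEN = 4  # rough average for English text

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_context_window(prompt: str, context_window: int = 128_000,
                        reserve_for_output: int = 4_000) -> bool:
    # Leave headroom in the window for the model's response tokens.
    return estimate_tokens(prompt) <= context_window - reserve_for_output

print(fits_context_window("Save this file for me."))  # a short prompt fits
```

Checks like this make the cost of stuffing visible: a prompt near the window limit leaves no room for the answer, and trimming it usually improves both accuracy and cost.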
One way to avoid prompt dilution is to leverage Multi-Agent Systems. These systems use multiple AI Agents, each with its own specific instructions, tooling, and integrations. This specialization ensures that each sub-agent is trained and optimized to handle particular types of requests, and that it processes only the information relevant to its task, minimizing confusion and the chances of generating incorrect or irrelevant responses.

Example of implementing a Multi-Agent System for Customer Support

Imagine an AI Agent handling customer support requests for your online store. It leverages a Main Agent that responds to simple queries and uses intent-based routing to direct more complex queries to specialized sub-agents. Each sub-agent specializes in a specific task, equipped with precise instructions and the necessary tools to execute it reliably.

Password Reset Sub-Agent

Customer: "I don't know my password."
Main Agent: Routes query to Password Reset Sub-Agent using intent-based routing.
Password Reset Sub-Agent: "Sure, let me help you with that. Can you please provide your email address?"

Package Tracking Sub-Agent

Customer: "Where is my package?"
Main Agent: Routes query to Package Tracking Sub-Agent using intent-based routing.
Package Tracking Sub-Agent: "Hello, I can help you track your package. Please provide your order number or tracking number."

Generic Query: Payment Methods

Customer: "What payment methods do you accept?"
Main Agent: "Hi, we accept all major credit and debit cards including Mastercard, Visa, American Express and Diners."
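The routing flow in these examples can be sketched with a simple keyword-based classifier. In production, intent detection is usually done with an ML classifier or the LLM itself; the keyword lists and sub-agent names below are illustrative assumptions drawn from the dialogues above:

```python
# Minimal intent-based router for a Main Agent with specialized sub-agents.
# Keyword matching stands in for a real intent classifier; sub-agent names
# mirror the customer-support examples above and are illustrative only.

INTENT_KEYWORDS = {
    "password_reset": ["password", "log in", "login"],
    "package_tracking": ["package", "delivery", "tracking"],
}

SUB_AGENTS = {
    "password_reset": "Password Reset Sub-Agent",
    "package_tracking": "Package Tracking Sub-Agent",
}

def route(query: str) -> str:
    """Return the sub-agent that should handle the query,
    falling back to the Main Agent for generic questions."""
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in q for kw in keywords):
            return SUB_AGENTS[intent]
    return "Main Agent"

print(route("I don't know my password"))             # → Password Reset Sub-Agent
print(route("Where is my package?"))                 # → Package Tracking Sub-Agent
print(route("What payment methods do you accept?"))  # → Main Agent
```

The key design choice is that each sub-agent only ever sees queries matching its intent, so its prompt stays short and focused, which is exactly what avoids the prompt dilution described earlier.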

Closing thoughts

Klarna's AI Assistant
It's clear that AI Agents are poised to become a core component of customer-facing teams like Customer Support and Sales. Leading companies like Amazon are investing heavily in building out these systems, and Klarna shocked the world earlier this year by sharing very specific metrics for its new AI Assistant. Teleperformance, the world's largest BPO, saw its shares plunge 19% on the news. Some of Klarna's key metrics:
    • The AI assistant has had 2.3 million conversations, two-thirds of Klarna’s customer service chats
    • It is doing the equivalent work of 700 full-time agents
    • It is on par with human agents in regard to customer satisfaction score
    • It is more accurate in errand resolution, leading to a 25% drop in repeat inquiries
    • Customers now resolve their errands in less than 2 mins compared to 11 mins previously
    • It’s available in 23 markets, 24/7 and communicates in more than 35 languages
• It's estimated to drive $40 million USD in profit improvement for Klarna in 2024

Voiceflow shared an analysis of how Klarna's AI Agent still falls short. Most of the shortcomings can be addressed by Multi-Agent Systems that leverage a Main Agent and sub-agent architecture, intent-based routing to direct queries, concise and clear prompts/instructions (enhanced by Markdown) to address issues like verbosity and tone, and highly specific agent tooling to execute complex tasks that require interaction with external systems.

Written by Alvaro Vargas, Founder & CEO at Frontline