In the context of prompting, “shots” are the examples you embed in your system prompt to guide the LLM’s output.

The more examples you provide, the better the LLM generally performs, as it can learn patterns from those examples and apply them to new, similar tasks.

Three Prompting Strategies.

When you design system prompts for an LLM, there are three main strategies:
  1. Instruction-only prompting (zero-shot prompting).
  2. Single-example prompting (single-shot prompting).
  3. Multiple-example prompting (multi-shot prompting).

Instruction-only prompting (zero-shot prompting).

Instruction-only prompting (also called zero-shot prompting) means providing the model with only verbal instructions without embedding any examples in the system prompt.

You describe what you want. For example:

“Write a professional email to a client about a project delay.”
“DO NOT DELETE THE DATABASE UNDER ANY CIRCUMSTANCES.”

Most inexperienced AI developers use zero-shot prompting.

Zero-shot prompting is problematic.

When you give an LLM a bare instruction such as:

“Write a professional email to a client about a project delay.”

you are only telling the model what to do, not how to do it.

LLMs interpret this instruction based on statistical patterns from their training data.


Since “professional email” and “project delay” can appear in many different contexts (tech, marketing, construction, academia, etc.), the model must guess:

>> What tone to use (formal, apologetic, casual, cooperative?).

>> How much detail to include (a short update vs. a long explanation?).

>> Whether to express accountability or optimism.

>> Whether to include dates, causes, next steps, or apologies.

Because these elements are not defined, outputs can vary widely: sometimes awkward, overly formal, or even misleading.


Similarly, when you give an LLM a bare instruction such as:

“DO NOT DELETE THE DATABASE UNDER ANY CIRCUMSTANCES.”

you are only telling the model what not to do, not how not to do it.

The instruction is linguistically clear to a human, but operationally vague to an LLM-based agent.


The agent does not have a true semantic or causal understanding of what “delete the database” means in terms of its actual actions.

As a result, the agent might still perform actions such as:

  • Running SQL such as `DROP TABLE users;` or `DELETE FROM records;`.
  • Overwriting the production `.db` file.
  • Running a cleanup script that removes “old” data.
  • Deploying code that reinitialises a database schema.

All of which effectively delete the database, even though the model believes it is doing something else (e.g., “optimising”, “resetting”, or “refreshing data”).


The end outcome is that your database is gone.


And if you ask, “Why did you delete the database?”, the agent might reply:

“I didn’t delete it. I only ran a cleanup operation.”

The model isn’t really lying.

It simply doesn’t associate its action with “database deletion”.
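To see why the bare prohibition is operationally vague, consider a deliberately naive guard that enforces it literally. Everything below is a hypothetical sketch (the function and commands are illustrative, not a real safety mechanism):

```python
# Hypothetical sketch: a guard that blocks only the literal word "delete",
# the operational equivalent of the bare instruction
# "DO NOT DELETE THE DATABASE UNDER ANY CIRCUMSTANCES."
def naive_guard(command: str) -> bool:
    """Return True if the command is (naively) judged safe."""
    return "delete" not in command.lower()

# The guard blocks the obvious case...
assert naive_guard("DELETE FROM records;") is False

# ...but waves through other actions that destroy the database anyway.
assert naive_guard("DROP TABLE users;") is True           # still destructive
assert naive_guard("cp empty.db production.db") is True   # overwrites live data
assert naive_guard("TRUNCATE TABLE logs;") is True        # still destructive
```

The guard faithfully follows “do not delete”, yet the database is still gone.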

Single-example prompting (single-shot prompting).

Single-shot prompting (or single-example prompting) involves embedding just one example directly in your system prompt.

While this is better than giving no example at all, it still carries significant risks because a single instance rarely defines the full scope of acceptable or forbidden behaviour.

Examples of Single-shot prompting:

You are an AI writing assistant that drafts professional client emails.  
Follow the style, tone, and structure shown in the example below.  

---

Example:  
Instruction: Write a professional email informing a client about a slight delay in delivering marketing materials.  
Output:  
Subject: Update on Marketing Deliverables  

Hi Sarah,  
I wanted to let you know that our design team needs an additional two days to finalize the marketing assets.  
We’re adding a few refinements to ensure top quality, and we’ll deliver the full package by Thursday, Oct 24.  

Thank you for your patience and understanding. Please let me know if this new timeline affects any of your plans.  

Best regards,  
Alex  

Similarly, show the LLM what you mean by “database deletion” and what it should avoid.

System Prompt:

You are an operational coding assistant. 
Your highest priority is to preserve all production data integrity.

Below is an example of actions that COUNT as database deletion.
Do NOT perform this or similar actions under any circumstance.

---

Example:
User command: "Clean up unused data."
Incorrect action: Running SQL commands like:
   DELETE FROM users;
   DROP TABLE logs;
Explanation: These commands remove data permanently — this IS database deletion.

Single-shot prompting is problematic.

A single example teaches the model one instance of a concept, not its boundaries.
LLMs generalise patterns statistically; with only one example, the model may:
  • Overfit - mimic the specific structure of the given example too rigidly.
  • Under-generalise - fail to recognise similar but distinct cases (e.g., TRUNCATE, overwriting a file, or schema resets).
  • Misinterpret scope - assume the single example represents all possible variations of “database deletion” or “professional tone.”

Essentially, the model learns “what you showed once,” not “what you meant in principle.”
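The under-generalisation problem can be sketched the same way. Suppose a guard is derived from only the single example above; the patterns and function here are hypothetical:

```python
# Hypothetical sketch of under-generalisation: a guard built from the ONE
# example in the prompt above (DELETE FROM / DROP TABLE) and nothing else.
SINGLE_EXAMPLE_PATTERNS = ("DELETE FROM", "DROP TABLE")

def single_shot_guard(command: str) -> bool:
    """Return True if the command looks safe, judging ONLY by the one example."""
    upper = command.upper()
    return not any(pattern in upper for pattern in SINGLE_EXAMPLE_PATTERNS)

# The exact cases shown in the example are caught...
assert single_shot_guard("DELETE FROM users;") is False
assert single_shot_guard("DROP TABLE logs;") is False

# ...but similar-yet-distinct destructive actions slip through.
assert single_shot_guard("TRUNCATE TABLE users;") is True     # never shown
assert single_shot_guard("cp empty.db production.db") is True # file overwrite
```

The model, like this guard, learned the one instance it was shown, not the boundary of the concept.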

Multiple-example prompting (multi-shot prompting).

Multi-shot prompting involves embedding multiple examples directly in your system prompt.

Examples of Multi-shot prompting:

You are an AI writing assistant that drafts professional client emails.  

Follow the style, tone, and structure shown in the examples below.  

---

Example 1  

Instruction: Write a professional email informing a client about a slight delay in delivering marketing materials.  

Output:  

Subject: Update on Marketing Deliverables  

Hi Sarah,  

I wanted to let you know that our design team needs an additional two days to finalize the marketing assets.  

We’re adding a few refinements to ensure top quality, and we’ll deliver the full package by Thursday, Oct 24.  

Thank you for your patience and understanding. Please let me know if this new timeline affects any of your plans.  

Best regards,  

Alex  

---

Example 2  

Instruction: Write a polite and transparent email to a client explaining that a software update will be delayed.  

Output:  

Subject: Schedule Adjustment for Software Update  

Hello Michael,  

I wanted to share a quick update regarding the upcoming software release.  

Our QA testing revealed a few bugs that need to be fixed before launch. To maintain reliability, we’re shifting the release to next Monday, Oct 28.  

We appreciate your understanding as we work to deliver a smooth experience. Please reach out if you’d like a walkthrough of the interim changes.  

Warm regards,  

Priya  

---

Now, follow the same tone and format.

Instruction: Write a professional email to a client about a project delay.

Similarly, show the LLM what you mean by “database deletion” and what it should avoid.

System Prompt:

You are an operational coding assistant. 

Your highest priority is to preserve all production data integrity.

Below are examples of actions that COUNT as database deletion.

Do NOT perform any of them or similar actions under any circumstance.

---

Example 1:

User command: “Clean up unused data.”

Incorrect action: Running SQL commands like:

   DELETE FROM users;

   DROP TABLE logs;

Explanation: These commands remove data permanently — this IS database deletion.

---

Example 2:

User command: “Reset the database for testing.”

Incorrect action: Replacing the production database with a blank one, e.g.:

   cp empty.db production.db

Explanation: This overwrites live data — this IS database deletion.

---

Example 3:

User command: “Rebuild the schema.”

Incorrect action: Executing migrations on the production database that drop tables.

Explanation: Schema rebuilds can destroy stored data — this IS database deletion.

---

Safe Alternatives:

– Use read-only operations (`SELECT`, `DESCRIBE`, `SHOW TABLES`) on production data.

– Perform destructive tests only on a sandbox or development database (`dev_db`).

– Always confirm the environment with:

   print(current_environment)

and proceed only if `environment == "development"`.

---

Final instruction:

Under no circumstances perform or suggest an action that could modify, overwrite, or remove production data. 

If uncertain, ask for explicit human confirmation.
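The “confirm the environment” rule above can also be enforced in code rather than left to the model. A minimal sketch, assuming a `current_environment` value that your own configuration would supply (the names here are illustrative):

```python
# Hypothetical guard for the "confirm the environment first" rule.
# current_environment stands in for a value from real configuration.
current_environment = "development"

def run_destructive(sql: str, environment: str) -> str:
    """Refuse destructive SQL unless the environment is provably development."""
    if environment != "development":
        raise PermissionError(
            "Refusing destructive operation outside development: " + sql)
    return "executed on dev_db: " + sql

# Allowed on the sandbox...
assert run_destructive("TRUNCATE TABLE logs;", current_environment).startswith("executed")

# ...refused anywhere else.
try:
    run_destructive("DROP TABLE users;", "production")
except PermissionError:
    pass
else:
    raise AssertionError("destructive command was not blocked")
```

Pairing the prompt-level examples with a hard check like this means a misgeneralised “cleanup” can never reach production data.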

You’re essentially teaching an LLM how not to delete a database the same way you’d teach a young child: through concrete examples, not abstract warnings.

You can’t just say “don’t delete the database.” You must show what deletion looks like in operational terms, so the model truly understands what to avoid.
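If you assemble multi-shot prompts programmatically, a small helper keeps the labels and separators consistent across examples. This is a hypothetical sketch; the function name and separator convention are assumptions, not a standard API:

```python
# Hypothetical helper that assembles a multi-shot system prompt from
# (instruction, output) pairs with a consistent separator and labels.
def build_multi_shot_prompt(role: str, examples: list[tuple[str, str]],
                            final_instruction: str) -> str:
    parts = [role]
    for i, (instruction, output) in enumerate(examples, start=1):
        parts.append(
            f"Example {i}\n"
            f"Instruction: {instruction}\n"
            f"Output:\n{output}"
        )
    parts.append("Now, follow the same tone and format.\n"
                 f"Instruction: {final_instruction}")
    return "\n\n---\n\n".join(parts)

prompt = build_multi_shot_prompt(
    role="You are an AI writing assistant that drafts professional client emails.",
    examples=[
        ("Inform a client about a delay in marketing materials.",
         "Subject: Update on Marketing Deliverables\n[body omitted]"),
        ("Explain that a software update will be delayed.",
         "Subject: Schedule Adjustment for Software Update\n[body omitted]"),
    ],
    final_instruction="Write a professional email to a client about a project delay.",
)
assert "Example 1" in prompt and "Example 2" in prompt
assert prompt.count("---") == 3  # one separator between each section
```

Generating the prompt this way guarantees every example follows the same Instruction → Output structure, so the model can cleanly recognise the boundaries between examples.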

Multi-shot prompting is far superior to single-shot prompting.

Providing multiple varied examples gives the LLM a conceptual pattern instead of a single instance. By showing both safe and unsafe behaviours, the model learns what to do and what to avoid.

This anchors its understanding across different phrasing, contexts, and command types, making its responses far more stable and aligned.

>> LLMs learn patterns from examples.

For example, when writing a professional email, providing examples demonstrates the desired tone, structure, and level of detail more effectively than abstract instructions.

Similarly, providing concrete demonstrations helps the model see what specific actions, outputs, or behaviours are acceptable. It grounds abstract language (such as “do not delete”) in operational reality (e.g., by showing which commands actually delete the database).

>> Examples clarify what counts as correct or incorrect behaviour.

For example, in the case of writing a professional email, examples clarify what constitutes a good or bad response, thereby removing uncertainty from open-ended instructions.

Similarly, by contrasting safe and unsafe actions, the model learns the boundaries of compliance, thereby reducing ambiguity that can lead to unintended behaviour or destructive operations, such as ‘database deletion’.

>> The LLM mirrors your examples, producing consistent and predictable results.

When shown clear precedents, the model reproduces those structures and decisions, yielding stable and safe performance even across varied contexts or phrasing.

>> Multi-shot prompts allow you to implicitly “teach” the model.

For example, in the case of writing a professional email, multi-shot prompts let you “teach” the model your preferred formatting, phrasing, or reasoning pattern implicitly.

Similarly, instead of abstractly describing “never delete a database,” examples demonstrate what deletion looks like in code or commands, allowing the model to internalise your intent and operational constraints without additional reasoning.


Long story short,

Don’t just tell your LLM what to do; show it how, with plenty of examples.

Limitations of Multi-shot Prompting.

While multi-shot prompting improves reliability and contextual understanding compared to zero-shot or single-shot methods, it also introduces several challenges:

#1 Overfitting - If the examples are too similar, the model may copy surface patterns rather than grasp the underlying concept or reasoning process.

#2 Example Bias - When examples reflect a narrow tone, style, or domain, the model’s outputs can become rigid or unadaptable to new contexts.


#3 Token Inefficiency - Multi-shot prompts consume more input space (tokens), reducing the room available for reasoning or user-specific content.

#4 Conflicting Examples - Inconsistent examples can confuse the model, producing blended or contradictory outputs.


#5 Maintenance Overhead - Updating or refining large sets of examples can become complex as tasks evolve or styles change.

Best practices for writing Multi-shot Prompts.

#1 Use diverse examples - Include examples that vary in tone, structure, and content to prevent the model from overfitting to surface patterns. Show different ways to perform the same task so the model learns the underlying pattern, not just one template.

#2 Balance positive and negative demonstrations - When relevant, show both correct and incorrect behaviors. This helps the model understand not only what to do but also what to avoid.


#3 Avoid example bias - Don’t use examples that all share the same domain, phrasing, or stylistic traits. Broader coverage improves generalization and reduces model rigidity.

#4 Keep examples concise but representative - Long examples consume token space and may obscure key patterns. Each example should demonstrate one clear idea or rule.


#5 Maintain consistent formatting - Use a clear, repeated structure (e.g., Instruction → Output or User → Assistant) so the model recognizes the logical flow and boundaries between examples.

#6 Limit the number of examples - More is not always better. Typically, 2–5 high-quality, diverse examples outperform a long list of near-duplicates while conserving token space.


#7 Prevent conflicting signals - Ensure all examples align with the same goal, tone, and policy. Contradictory instructions or outcomes can confuse the model and reduce reliability.

#8 Document your examples - Keep a source library or versioned record of your example prompts. This makes it easier to update, audit, and maintain consistency over time.


#9 Test edge cases - After defining your examples, test the model with unusual or ambiguous inputs to confirm it has generalized correctly.
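Best practices #4 and #6 above can be combined into a simple token-budget check when you curate examples. This is a rough sketch: the `len(text) // 4` estimate is a crude rule of thumb, not a real tokenizer, and the function names are hypothetical:

```python
# Hypothetical sketch: keep only as many examples (in priority order)
# as fit a token budget, so the prompt stays concise.
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token. Use a real
    # tokenizer in practice.
    return max(1, len(text) // 4)

def select_examples(examples: list[str], budget_tokens: int) -> list[str]:
    """Greedily keep examples until the token budget is spent."""
    chosen, used = [], 0
    for example in examples:
        cost = estimate_tokens(example)
        if used + cost > budget_tokens:
            break
        chosen.append(example)
        used += cost
    return chosen

examples = ["short example " * 5, "medium example " * 20, "long example " * 100]
kept = select_examples(examples, budget_tokens=120)
assert 1 <= len(kept) < len(examples)  # the longest example is dropped
```

Ranking your examples by diversity first, then trimming to budget, keeps the 2–5 most instructive demonstrations without crowding out the user’s actual request.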

In short,

Multi-shot prompting works best when examples are diverse, concise, consistent, and intentional, showing the model a pattern of reasoning rather than a fixed script.