How to write AI agent instructions: overview and best practices

AI agent instructions define how an AI agent behaves, what it can do, which tools it can use, and the boundaries it must operate within.

A well-structured instruction set improves:

  • Response consistency
  • Task completion accuracy
  • Safety and compliance
  • Tool usage reliability

What are AI agent instructions?

AI agent instructions are the persistent system configuration that governs an agent’s behavior across all interactions.

They are not task-specific. Instead, they define the operating logic the agent follows every time it receives a prompt.

A complete instruction set includes:

  • Role and identity: what the agent is responsible for
  • Behavioral rules: how it communicates and responds
  • Knowledge boundaries: what it knows and what it should not assume
  • Tool access and usage: what actions it can take
  • Guardrails: what it must never do
  • Edge case handling: how it behaves under uncertainty or failure

Without clear instructions, agents tend to:

  • Provide inconsistent answers
  • Use tools incorrectly
  • Go out of scope
  • Hallucinate missing information

Agent instructions vs prompts

Agent instructions and prompts serve different purposes and operate at different levels.

| Aspect   | Agent Instructions                   | Prompts                           |
| -------- | ------------------------------------ | --------------------------------- |
| Purpose  | Define behavior and rules            | Ask a specific question or task   |
| Scope    | Persistent across sessions           | Single interaction                |
| Author   | System/product team                  | End user or workflow              |
| Function | Controls how responses are generated | Defines what needs to be answered |

Example:

  • Instruction: “Confirm user region before quoting pricing.”
  • Prompt: “What does your enterprise plan cost?”

Mental model:

Instructions = operating system
Prompts = input requests
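In chat-style LLM APIs, this split maps directly onto message roles: persistent instructions go in the system message, and each prompt arrives as a user message. A minimal sketch of that convention (the helper function and example strings are illustrative, not a specific vendor's API):

```python
# Sketch of the message-role convention used by most chat-style LLM APIs:
# instructions live in the persistent "system" message, prompts in "user" messages.

def build_messages(instructions: str, prompt: str) -> list[dict]:
    """Combine persistent agent instructions with a single user prompt."""
    return [
        {"role": "system", "content": instructions},  # operating system
        {"role": "user", "content": prompt},          # input request
    ]

messages = build_messages(
    instructions="Confirm user region before quoting pricing.",
    prompt="What does your enterprise plan cost?",
)
```

The system message persists across every turn of the conversation, while each new user prompt is appended per interaction — exactly the persistent-vs-single-interaction split in the table above.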

How to think about agent instructions (mental model)

A production-grade instruction set operates across three layers:

| Layer           | What it does                                | Defines                                                         |
| --------------- | ------------------------------------------- | --------------------------------------------------------------- |
| Identity layer  | Defines what the agent is responsible for   | Role, audience, scope                                           |
| Decision layer  | Defines how the agent evaluates and decides | Rules and guardrails, escalation logic, low-confidence behavior |
| Execution layer | Defines how the agent takes action          | Tool usage, workflows, output formats                           |

Most agent failures occur when these layers are:

  • Missing
  • Overlapping
  • Poorly defined

Example: Mapping a real agent to these layers

Consider a Sales Quote Feasibility Agent that validates whether a salesperson’s quote can be approved.

Identity: The agent validates quotes, checks inventory, and determines approval vs rejection. → This keeps it scoped strictly to quote feasibility.

Decision: Rules like:

  • Reject if quantity exceeds stock
  • Require approval if margin < 20%
  • Auto-approve otherwise

→ This ensures decisions are consistent and auditable.

Execution: The agent fetches data from databases, calculates margin, stores records, and generates a PDF only if approved. → This controls how actions are performed.
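The decision-layer rules above are concrete enough to express as ordinary code. A hypothetical sketch (not an actual DronaHQ implementation — function and field names are illustrative):

```python
# Hypothetical sketch of the Sales Quote Feasibility Agent's decision layer.
# Rules, in priority order: reject if quantity exceeds stock,
# require approval if margin < 20%, auto-approve otherwise.

def evaluate_quote(quantity: int, stock: int, margin_pct: float) -> str:
    if quantity > stock:
        return "reject"          # Reject if quantity exceeds stock
    if margin_pct < 20.0:
        return "needs_approval"  # Require approval if margin < 20%
    return "approve"             # Auto-approve otherwise

print(evaluate_quote(quantity=50, stock=40, margin_pct=35.0))  # → reject
print(evaluate_quote(quantity=10, stock=40, margin_pct=15.0))  # → needs_approval
print(evaluate_quote(quantity=10, stock=40, margin_pct=25.0))  # → approve
```

Writing the rules this explicitly is what makes the agent's decisions consistent and auditable: every outcome traces back to a single, ordered rule.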

Anatomy of a strong instruction set

A well-structured instruction set contains six components:

| Component             | Purpose                              |
| --------------------- | ------------------------------------ |
| Role and purpose      | Defines scope and audience           |
| Tone and style        | Ensures consistency                  |
| Rules and guardrails  | Prevents unsafe or invalid behavior  |
| Knowledge and context | Controls information boundaries      |
| Tools                 | Enables actions                      |
| Edge case handling    | Handles uncertainty and failure      |

What belongs in instructions

  • Role definition and scope
  • Audience specification
  • Communication style
  • Hard rules (MUST / NEVER)
  • Tool usage conditions
  • Escalation logic
  • Output format rules

What should NOT be included

  • User-specific data
  • One-off task logic
  • Backend business logic
  • Conflicting or redundant rules

Six principles of effective agent instructions (with examples)

1. Define a clear role and scope

Weak: You are a helpful assistant.

Strong: You are a pre-sales support specialist for an enterprise SaaS platform. You help prospects evaluate product fit and guide them toward a demo.

Handle: product questions, use case mapping, pricing (non-custom)
Do not handle: billing issues, contract negotiation, implementation support

2. Define behavior and response patterns

Weak: Be professional and friendly.

Strong:

  • Tone: direct, clear, and confident
  • Short answers: 1–3 sentences
  • Comparisons: use tables
  • Walkthroughs: numbered steps
  • Avoid filler phrases and repetition

3. Set explicit rules and guardrails

Weak: Avoid risky responses.

Strong:

  • MUST confirm region before pricing
  • NEVER make contractual commitments
  • NEVER disclose internal roadmap
  • Escalate legal or enterprise-specific queries
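NEVER rules can also be enforced programmatically, as a screen on draft responses before they are sent. A hypothetical sketch (the pattern list and function are illustrative only — real guardrail checks are usually more sophisticated than substring matching):

```python
# Hypothetical sketch: screening a draft reply against NEVER-rule patterns.
NEVER_PATTERNS = ["roadmap", "we guarantee", "contractually"]

def violates_guardrails(draft: str) -> list[str]:
    """Return the banned phrases found in a draft response, if any."""
    lowered = draft.lower()
    return [pattern for pattern in NEVER_PATTERNS if pattern in lowered]
```

An empty result means the draft passed; a non-empty result tells you which rule fired, which is useful for logging and auditing.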

4. Define knowledge boundaries

Weak: Use knowledge base.

Strong:

  • Primary source: product documentation
  • If not found: explicitly say so
  • Do not infer undocumented capabilities
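The "if not found, explicitly say so" boundary can be enforced in the retrieval layer, not just in prose. A hypothetical sketch (the documentation store and its contents are invented for illustration):

```python
# Hypothetical sketch: a documentation lookup that admits "not found"
# instead of inferring undocumented capabilities.
DOCS = {
    "sso": "SSO is available on the Enterprise plan via SAML 2.0.",
}

def answer_from_docs(topic: str) -> str:
    entry = DOCS.get(topic.lower())
    if entry is None:
        # Knowledge boundary: explicitly say so, never guess.
        return "This isn't covered in the product documentation."
    return entry
```

Returning an explicit "not covered" answer is what prevents the agent from hallucinating capabilities the documentation never mentions.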

5. Make tool usage explicit

Weak: You can use CRM and calendar tools.

Strong: Demo Scheduler

  • Use when: user requests demo
  • Requires: date, timezone
  • Confirm before booking
  • On failure: offer manual follow-up
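The same tool rules can be captured as a machine-readable spec that the agent runtime checks before invocation. A hypothetical sketch (field names are illustrative, not a specific platform's schema):

```python
# Hypothetical tool specification mirroring the Demo Scheduler rules above.
DEMO_SCHEDULER = {
    "name": "demo_scheduler",
    "use_when": "user requests a demo",
    "requires": ["date", "timezone"],
    "confirm_before_use": True,
    "on_failure": "offer manual follow-up",
}

def can_invoke(tool: dict, provided: dict) -> tuple[bool, list[str]]:
    """Check required inputs before letting the agent call the tool."""
    missing = [field for field in tool["requires"] if field not in provided]
    return (not missing, missing)
```

If required inputs are missing, the agent should ask for them rather than calling the tool — the check returns exactly which fields to request.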

6. Handle edge cases

Weak: Handle errors gracefully.

Strong:

  • Ambiguous request → offer options
  • Missing data → ask for specific inputs
  • Tool failure → explain + provide alternative
  • Frustration → escalate immediately
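These edge-case rules reduce to a simple condition-to-behavior mapping. A hypothetical sketch (condition names and response texts are illustrative):

```python
# Hypothetical sketch: mapping detected conditions to the behaviors above.
EDGE_CASES = {
    "ambiguous": "Offer the user a few concrete options to choose from.",
    "missing_data": "Ask for the specific inputs that are missing.",
    "tool_failure": "Explain the failure and provide an alternative path.",
    "frustration": "Escalate to a human immediately.",
}

def handle_edge_case(condition: str) -> str:
    return EDGE_CASES.get(condition, "Proceed normally.")
```

The point of the table is the explicit default: anything not recognized as an edge case proceeds normally, and everything else has one defined behavior.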

Common mistakes in AI agent instructions

  • Vague roles
  • Conflicting rules
  • Unstructured instruction blocks
  • Referencing unavailable tools
  • Missing escalation logic
  • No output format definition
  • No low-confidence behavior
  • Mixing role, policy, and workflow

Troubleshooting poor agent behavior

| Symptom            | Likely issue          | Fix                                |
| ------------------ | --------------------- | ---------------------------------- |
| Hallucination      | No knowledge boundary | Add a “do not infer” rule          |
| Wrong tool usage   | Missing triggers      | Define when to use each tool       |
| Inconsistent tone  | Weak tone rules       | Add tone and format constraints    |
| No escalation      | Missing triggers      | Define clear escalation conditions |
| Repeated questions | No memory rules       | Define session memory behavior     |

AI agent instruction checklist

Before deploying an agent:

  • Role and scope clearly defined
  • Out-of-scope conditions specified
  • Tool triggers and restrictions defined
  • Low-confidence behavior included
  • Escalation rules defined
  • Output format rules present
  • No conflicting instructions

The complete AI agent instruction template

Below is a comprehensive template that incorporates all six principles. Copy it and customize it for your agents on the DronaHQ agentic platform:

# AGENT INSTRUCTIONS: [Agent Name]

## 1. ROLE & PURPOSE
Role:
Objective:
Success criteria:
Audience:

Scope:
Out of scope:

## 2. TONE & STYLE
Tone:
Response structure:
Avoid:
Output format rules:

## 3. RULES & GUARDRAILS
MUST:
NEVER:
Low-confidence behavior:
Escalation triggers:
Escalation action:

## 4. KNOWLEDGE & CONTEXT
Primary source:
Fallback:
If unknown:
Session memory:
Privacy rules:

## 5. TOOLS
Tool name:
Use when:
Requires:
On failure:
Restrictions:

Confirmation rules:

## 6. EDGE CASE HANDLING
Ambiguity:
Missing data:
Errors:
Frustration:

## 7. EXAMPLES
Example responses:

Writing instructions for different types of agents

Agents with tools

  • Define trigger conditions
  • Specify required inputs
  • Include failure handling

Agents with knowledge bases

  • Define source explicitly
  • Add “do not infer” rule

Agents that modify data

  • Require confirmation
  • Prevent automatic execution
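A confirmation gate shows what "require confirmation" means in practice: the write action refuses to execute until the user has explicitly approved it. A hypothetical sketch (function and record names are illustrative):

```python
# Hypothetical sketch: a data-modifying action gated behind explicit confirmation.
def delete_record(record_id: str, confirmed: bool = False) -> str:
    if not confirmed:
        # Prevent automatic execution: ask the user first.
        return f"Please confirm: delete record {record_id}? (yes/no)"
    return f"Record {record_id} deleted."
```

Defaulting `confirmed` to `False` means the safe path is also the lazy path: the agent cannot modify data unless confirmation was explicitly obtained.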

Agents with human handoff

  • Define triggers
  • Pass context
  • Set expectations

FAQ

What should AI agent instructions include? Role, rules, tools, knowledge, edge cases, and output format.

How are instructions different from prompts? Instructions define behavior; prompts define tasks.

How long should instructions be? Typically 400–800 words, structured clearly.

Should instructions include tool rules? Yes. Always define when and how tools are used.

How do you handle uncertainty? Add explicit low-confidence behavior and escalation paths.