90 Days Gen AI Risk Trial -Start Now
Book a demo
Free TemplateSecurity Guidelines

AI Prompt Engineering Guidelines

Security-focused prompt engineering guidelines for enterprise teams. Covers safe prompting practices, data leakage prevention, prompt injection awareness, output validation, and approved prompt templates for common business tasks.

Updated March 2026 · 5 guideline sections · Prompt injection protection included

5 sections
security guideline coverage
4 patterns
approved prompt templates
OWASP LLM
top 10 injection risks covered
Free
to use and customise

Why Enterprise Prompt Engineering Guidelines Matter

Employees use AI tools every day to process business information. Without guidance on safe prompting, the default behaviour is to paste whatever data is relevant into the prompt — including PII, confidential client data, and credentials.

#1
Prompt injection is the top LLM security risk
OWASP's LLM Top 10 lists prompt injection as the primary security risk for enterprise AI systems. Without training, employees cannot recognise it.
Data
PII regularly enters AI tool prompts without policy
Without data classification rules for prompts, employees default to including the full context of their work — including personally identifiable data.
GDPR
Sending personal data to AI tools may violate GDPR
Depending on the AI provider's data processing terms, including personal data in prompts may constitute an unlawful international data transfer.
Agents
AI agents amplify injection risk
AI agents that can take actions (send emails, query databases, browse the web) are particularly vulnerable to prompt injection — one injected instruction can hijack the entire task.

The Guidelines

Click each section to expand the guideline content. Share this with your teams and embed it in your AI acceptable use policy.

Safe prompting means providing AI tools with the context they need to do the task — and nothing more. The context window is not a secure container; its contents may be logged, retained, and exposed in future outputs.

Must NEVER be included in any AI prompt:

  • Personal Identifiable Information (PII) — names, addresses, National Insurance/SSN, dates of birth, financial account numbers
  • Credentials — API keys, passwords, authentication tokens, secrets of any kind
  • Confidential business information — unpublished financials, M&A activity, pricing strategy, competitive intelligence
  • Client or customer data subject to confidentiality agreements or data processing agreements
  • Source code for production systems unless the tool is explicitly approved for code use and contains no secrets
  • Data classified as Restricted under your data classification policy

Context Window & Memory Considerations

  • Assume everything in the context window may be logged by the AI provider — even in enterprise tiers
  • Multi-turn conversations accumulate context: earlier messages containing sensitive data remain in context throughout the session
  • AI tools with memory or persistent context carry information between sessions — never rely on data being forgotten
  • When using RAG (retrieval-augmented generation), data retrieved from external sources is inserted into the context window and subject to the same rules as directly entered data

How to Implement These Guidelines

Follow these five steps to go from guidelines template to organisation-wide enforcement within 30 days.

1
Assess your current AI tool landscape
Identify all AI tools in use — approved and unapproved — and what data employees are entering into prompts today. This baseline reveals the highest-risk behaviours to address first.
2
Define data classification rules for prompts
Map your existing data classification tiers to explicit rules about what can and cannot be included in prompts. Produce clear examples of acceptable and unacceptable prompt patterns.
3
Develop approved prompt templates for common tasks
Work with department heads to create security-reviewed templates for the tasks employees perform most frequently. Pre-built templates reduce the cognitive load of safe prompting.
4
Train employees with examples not rules
Deliver a 30-minute training session focused on concrete examples of safe and unsafe prompts. Include real examples of what prompt injection looks like and how to recognise it.
5
Enforce and monitor with technical controls
Implement technical controls that detect sensitive data in AI prompts in real time. Written guidelines without technical enforcement are aspirational. Monitor for injection patterns automatically.

Frequently Asked Questions

Enforce Prompt Guidelines in Real Time

Aona enforces prompt engineering policies in real time — detecting sensitive data in AI prompts before it reaches the model, blocking prompt injection patterns, and logging all AI interactions for compliance evidence.