Intro
AI is driving the biggest change in human–computer interaction since the web. Large Language Models (LLMs) let us talk to computers in plain language—and that changes everything. We can ask for research, code, or new product ideas in our own words. That’s a huge shift. But results vary. Some people get sharp answers; others get vague noise. The difference? How we ask.
As an immigrant, I’ve learned that communication is about more than just words. Rhythm, pace, sentence length, structure, context, and clear examples all sharpen our message. These same skills are essential when communicating with AI; learning to apply them there is how we become skilled with it.
LLMs aren’t calculators; they’re collaborators. Add structure, format, constraints, and clarity to your prompts, and watch performance improve.
This post is your practical guide to effective AI prompting. You’ll learn key techniques for structuring prompts, setting context, and giving clear instructions—so your AI chats and systems produce consistent, high‑quality results. We’ll explore and practice core frameworks and techniques you can apply today, and they’ll be essential as you move into agents, MCP (Model Context Protocol), and other advanced AI concepts. This is a must‑read—don’t skip it!
Ready to get better answers from AI? Let’s dive in!
What is a prompt and how can you write one that works?
Let’s start by clarifying: prompting isn’t just about typing. Prompting is the process of providing clear, specific instructions to a generative AI tool to get new information or achieve a desired outcome—whether that’s text, image, audio, code, or a workflow.
Much like human communication, AI responds best to clear, structured input. Unclear prompts can lead to hallucinations, overlook edge cases, or expose sensitive information. Clear, well-scoped prompts help reduce noise, increase reliability, and produce outputs you can review, test, and incorporate into your workflows and applications.
Don’t just type—brief your AI. Clear structure in, reliable results out.
The Anatomy of an Effective Prompt
Great prompts are concise, structured briefs. They tell the AI who it is, what to do, and how to respond.
In general, core prompt elements are:
• Role: The perspective or expertise (e.g., “You are a Cloud Security Engineer”).
• Task: The specific action or question.
• Context: Relevant background—environment, policies, tools, data shape.
• Constraints: Standards to follow and boundaries to respect.
• Examples: Short samples that demonstrate the desired format or quality.
• Output format: The structure you want back (JSON schema, Markdown template, etc.).
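In code, these elements can be assembled from a simple template. Here’s a minimal sketch (the section names mirror the list above; the sample values are illustrative, not a fixed recipe):

```python
# Minimal sketch: assembling the core prompt elements into one string.
# Section names mirror the anatomy above; the sample values are illustrative.
def build_prompt(role, task, context, constraints, examples, output_format):
    sections = [
        f"Role: {role}",
        f"Task: {task}",
        "Context:\n" + "\n".join(f"- {c}" for c in context),
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        "Examples:\n" + "\n".join(f"- {e}" for e in examples),
        f"Output format: {output_format}",
    ]
    return "\n".join(sections)

prompt = build_prompt(
    role="You are a Cloud Security Engineer.",
    task="Review the S3 bucket policy below for security issues.",
    context=["Environment: AWS S3", "Company standard: block all public access"],
    constraints=["Do not guess; ask clarifying questions if needed"],
    examples=["Finding: public read on objects -> severity: high"],
    output_format="JSON with fields: summary, findings, questions",
)
print(prompt)
```

Building prompts this way keeps each element explicit and easy to vary per call.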
To better understand the anatomy of a prompt—and why it matters—let’s compare two prompts side by side. In both cases, we’ll ask an AI to analyze the same AWS S3 bucket policy. First, we’ll try a “lazy” prompt with minimal structure. Then, we’ll use a well-structured prompt that includes the elements we just reviewed (Role, Task, Context, Constraints, Examples, Output Format).
Our goal is to use AI to improve policy security. We’ll use the same sample policy in both examples.
Sample Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-logs-bucket/*"
    },
    {
      "Sid": "AllowPutFromCiRole",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/ci-build" },
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::my-logs-bucket/*"
    }
  ]
}
Prompt 1 - Lazy Prompt
Review and fix this policy
{ ...Sample Policy... }
AI Output #1

Prompt 2 — Structured Prompt
Role:
You are a Cloud Security Engineer.
Task:
Review the following S3 bucket policy for security issues and propose fixes.
Context:
- Environment: AWS S3.
- Company standards: block all public access, enforce least privilege, and require server-side encryption (SSE-KMS) on objects.
- If information is missing, ask clarifying questions before proceeding.
Constraints / Guidelines:
- Do not guess. If uncertain, add questions first.
- Prioritize data protection and least privilege.
- Cite AWS docs for key recommendations (URLs).
- Keep the summary under 80 words.
- Only return JSON that matches the schema below (no extra text).
Output format (JSON):
{
  "summary": "short overview",
  "findings": [
    { "issue": "what's wrong", "severity": "high|medium|low", "evidence": "policy lines or behavior", "fix": "actionable remediation" }
  ],
  "questions": ["clarifying questions if needed"],
  "references": ["https://docs.aws.amazon.com/..."]
}
Process:
1) Validate the policy against the standards.
2) List findings with severity, evidence, and fixes.
3) Add questions if you need more context.
4) Include relevant AWS references.
Policy to review (JSON):
{ ...same policy as prompt #1... }
AI Output #2:

Conclusion:
Although both prompts were run on the same AI model with the same goal, their results varied.
• The first prompt was too brief and lacked the elements of a well-structured prompt, leading to a generic, vague answer that missed key standards.
• The second prompt specified role, context, constraints, and a clear output format, allowing the model to provide an actionable report with severity, evidence, fixes, references, and follow-up questions.
This shows that prompt structure makes a big difference in the responses you get.
Now, test the same prompt with your model—just modify the policy—and see the difference for yourself.
Prompting Techniques
Over time, researchers worldwide have developed various methods for interacting with AI. They’ve found that different prompt techniques work better for specific tasks and goals. However, one thing remains crucial: clear structure and simple guidance lead to the best results.
In this section, we’ll look at some effective prompting techniques you can begin using today.
Hands-on practice beats theory:
Test different prompts, compare their results, and experiment with creating your own prompting frameworks.
The RTF Framework (Role → Task → Format)
One of the most common and straightforward prompting frameworks is RTF. This lightweight framework is perfect for beginners but is also often used by more experienced AI engineers.
RTF - TEMPLATE
------------- -------------
Role: Sets perspective and expertise.
Task: Precise action or question.
Format: Enforces structure for parsing and automation.
RTF - Prompt Example:
Role: You are a Cloud Security Engineer specializing in AWS IAM and least privilege.
Task: Review the IAM policy below for excessive permissions and propose a safer alternative.
Format: Return JSON with fields: ["risk_summary", "overly_permissive_actions", "least_privilege_policy", "references"]
Policy:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*"
  }]
}
Use this example to prompt a chat model of your choice. Experiment with changes and observe how they influence the response.
The structure of your prompt is especially important when incorporating AI into workflows, automating tasks, or building applications. Well-structured prompts help ensure consistent, reliable responses—vital for automation.
As I demonstrated in a previous example, an unstructured prompt can result in vague or incorrect outcomes. Conversely, a well-structured prompt provides precise, actionable insights in a predictable format, making it easy to integrate into your systems. If a model’s output is unpredictable, it cannot be reliably automated.
Let’s demonstrate the importance of proper prompt structure for automation using the RTF technique with Amazon Bedrock. Below is a Python example that uses Bedrock’s Converse API to review an IAM policy and generate JSON in a strict schema. This example can serve as a foundation for more advanced automation—give it a try!
Code - Python
# Example 1: RTF – IAM Least-Privilege Review via Amazon Bedrock (Python)
import boto3
import json
import logging

# For logging in CloudWatch
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Bedrock Runtime client for the selected AWS Region.
# Requires IAM permission: bedrock:InvokeModel
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Amazon Nova Micro model ID for the Bedrock Converse API.
MODEL_ID = "amazon.nova-micro-v1:0"


def ask_bedrock(user_message: str, system: str, max_tokens: int = 1024) -> str:
    # Build a minimal conversation with a single user message.
    conversation = [
        {
            "role": "user",
            "content": [{"text": user_message}],
        }
    ]
    try:
        # Configure the request for the Converse API.
        kwargs = {
            "modelId": MODEL_ID,
            "messages": conversation,
            "inferenceConfig": {"maxTokens": max_tokens},
            "system": [{"text": system}],
        }
        # Call Bedrock
        response = client.converse(**kwargs)
        # Extract the first text block from the output.
        content = response["output"]["message"]["content"]
        return next((c["text"] for c in content if "text" in c), "")
    except Exception as e:
        # Surface a clear error for troubleshooting (also shows the model ID used).
        raise ValueError(f"ERROR: Can't invoke '{MODEL_ID}'. Reason: {e}") from e


def rtf_iam_review(policy_json: str) -> dict:
    # System prompt constrains behavior: domain expertise + strict "JSON only".
    # System prompt = house rules: high-priority, persistent instructions that
    # define the model's role, guardrails, and format.
    system_prompt = (
        "You are a Cloud Security Engineer specializing in AWS IAM and least privilege. "
        "Return ONLY valid JSON. No prose, no code fences."
    )
    # User message = the actual request: lower-priority, task-specific input that
    # changes per call (e.g., the IAM policy to review).
    # The user message follows the RTF pattern:
    user_message = f"""Role: You are a Cloud Security Engineer specializing in AWS IAM and least privilege.
Task: Review the IAM policy below for excessive permissions and propose a safer alternative.
Format: Return JSON with fields: ["risk_summary", "overly_permissive_actions", "least_privilege_policy", "references"]
Policy: {policy_json}"""
    # Invoke the model with the system + user prompts.
    raw = ask_bedrock(user_message, system=system_prompt, max_tokens=1024)
    # Expect strict JSON; raise an error if the model violates the contract.
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"Model did not return valid JSON: {e}\nRaw: {raw}") from e


def lambda_handler(event, context):
    # Define the policy to evaluate.
    policy_str = json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }]
    })
    try:
        # Call the RTF-based reviewer, pass the policy, and get a Python dict back.
        review = rtf_iam_review(policy_str)
        # Single-line structured log; CloudWatch can parse/expand it in the UI.
        logger.info(json.dumps({"event": "IAMReviewResult", "review": review}, ensure_ascii=False))
        # Print to the function output.
        print("========================================================================")
        print("IAM Review Result:")
        print("========================================================================")
        print(json.dumps(review, indent=3, ensure_ascii=False))
        # Return pretty-printed JSON.
        response_text = json.dumps(review, indent=2, ensure_ascii=False)
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json; charset=utf-8"},
            "body": response_text,
        }
    except Exception as e:
        # Error path: return a JSON error body; details are in the exception message.
        return {
            "statusCode": 500,
            "headers": {"Content-Type": "application/json; charset=utf-8"},
            "body": json.dumps({"error": str(e)}, indent=2, ensure_ascii=False),
        }
System Prompt vs. User Message (Bedrock)
System Prompt (system role)
• Sets persona, tone, and constraints; consistent across turns; more difficult to override.
• Use for boundaries, output schema, and safety rules.
Example: “You are an expert technical writer. Always summarize in bullet points.”
User Message (user role)
• Provides the specific request/data for this turn; changes with each message.
• Use for questions, summarization requests, calculations, etc.
Example: “Summarize this report about AWS Bedrock.”
RTF mapping: System = Role (+ global Format), User = Task (+ turn-specific Format).
Keep the system prompt stable; vary the user message.
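This split shows up directly in the Converse API request shape. The sketch below is pure data with no network call (the model ID matches the earlier examples); it demonstrates that the system prompt stays constant across calls while only the user message varies:

```python
# Sketch: the system prompt is persistent; the user message changes per turn.
# Same payload shape as client.converse(**kwargs) in the example above.
SYSTEM = [{"text": "You are an expert technical writer. Always summarize in bullet points."}]

def converse_request(user_text: str, max_tokens: int = 512) -> dict:
    return {
        "modelId": "amazon.nova-micro-v1:0",
        "system": SYSTEM,  # persistent: role + global format
        "messages": [{"role": "user", "content": [{"text": user_text}]}],  # per-turn task
        "inferenceConfig": {"maxTokens": max_tokens},
    }

req1 = converse_request("Summarize this report about AWS Bedrock.")
req2 = converse_request("Summarize this incident ticket.")
# Both requests share the same system prompt; only the user message differs.
```

Keeping the system prompt in one constant makes the "stable rules, variable task" pattern explicit in your code.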
Results

Our well-organized RTF (Role-Task-Format) prompt produced a clear, predictable JSON response. By defining the Role, Task, and Format in the user_message, Bedrock provided consistent data that we can directly use in downstream workflows.
For example, you can:
• Extract key fields (risk_summary, references, least_privilege_policy) for detailed analysis or SIEM enrichment,
• Trigger automated policy updates (such as IAM or firewall adjustments) based on the results,
• Use response fields in SOC playbooks or SOAR runbooks to accelerate operations,
• Initiate tickets, notify your cloud admins, or start scans with consistent, machine-readable data.
And much more—experiment, iterate, and be creative!
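As a sketch of what that routing could look like, here is a small dispatcher over the JSON fields our RTF prompt returns. The action labels (open_ticket, notify_admins, attach_references) are hypothetical placeholders you would map to your own integrations:

```python
# Sketch: route the structured review into downstream action labels.
# The labels are hypothetical placeholders for real integrations
# (e.g., a ticketing API, an SNS topic, a SIEM enrichment step).
def route_review(review: dict) -> list:
    actions = []
    if review.get("overly_permissive_actions"):
        actions.append("open_ticket")        # e.g., create a Jira/ServiceNow ticket
    if "s3:*" in review.get("overly_permissive_actions", []):
        actions.append("notify_admins")      # e.g., publish to an SNS topic
    if review.get("references"):
        actions.append("attach_references")  # enrich the ticket/SIEM event
    return actions

sample = {
    "risk_summary": "Wildcard S3 permissions on all resources.",
    "overly_permissive_actions": ["s3:*"],
    "least_privilege_policy": {"Version": "2012-10-17", "Statement": []},
    "references": ["https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html"],
}
print(route_review(sample))  # -> ['open_ticket', 'notify_admins', 'attach_references']
```

Because the model is constrained to a fixed schema, this kind of branching stays simple and testable.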
The RTF++ Framework (Role → Task → Format + Constraints, Examples, Evaluation)
A more advanced version of RTF is RTF++. It introduces three simple layers that enhance prompt safety and reliability: constraints, examples, and evaluation. These layers help minimize scope creep, resist prompt injection, and make outputs more predictable and easier to automate.
RTF++ - TEMPLATE
------------- -------------
Role: Sets perspective and expertise.
Task: Precise action or question.
Constraints:
- Spell out what to include/exclude, the format to use (e.g., "JSON only"), length limits, and allowed sources.
Result: less scope creep and fewer surprises.
Examples:
- Provide a sample input and the exact kind of output you want.
Result: clearer intent and better adherence to your schema.
Evaluation:
- Add a quick checklist or test the model should run before answering.
Result: catches missing fields, bad formatting, and signs of prompt injection.
Format: Enforces structure for parsing and automation.
RTF++ Prompt Example:
Role: You are a Cloud Security Engineer specializing in AWS IAM and least privilege.
Task: Review the IAM policy below for excessive permissions and propose a safer alternative.
Constraints:
- Analyze only the policy in the code block.
- Do not invent resources; assume a single bucket named "my-bucket".
- Require aws:SecureTransport = true for all actions.
- Require SSE with KMS for write operations.
- Return ONLY valid JSON. No prose, no code fences.
Examples:
- Risk summary calls out wildcards and blast radius.
- Overly permissive actions include "s3:*".
- Alternative policy scopes to "arn:aws:s3:::my-bucket" and "arn:aws:s3:::my-bucket/*" with KMS and SecureTransport conditions.
Evaluation:
- Is the policy least privilege?
- Are security conditions present?
- Does the output match the requested JSON fields?
Format: Return strict JSON with fields:
["risk_summary", "overly_permissive_actions", "least_privilege_policy", "references"]
Policy:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*"
  }]
}
Let’s take it a step further and—similar to our previous example—use RTF++ with Amazon Bedrock. The Python example below enforces a strict JSON format, validates required fields, and retries once if the output is invalid.
Code - Python
# Example 2: RTF++ – IAM Least-Privilege Review via Amazon Bedrock (Python, Lambda)
import boto3
import json
import logging

# For logging in CloudWatch
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Bedrock Runtime client for the selected AWS Region.
# Requires IAM permission: bedrock:InvokeModel
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Amazon Nova Micro model ID for the Bedrock Converse API.
MODEL_ID = "amazon.nova-micro-v1:0"


def ask_bedrock(user_message: str, system: str, max_tokens: int = 1024) -> str:
    conversation = [{"role": "user", "content": [{"text": user_message}]}]
    try:
        kwargs = {
            "modelId": MODEL_ID,
            "messages": conversation,
            "inferenceConfig": {"maxTokens": max_tokens},
            "system": [{"text": system}],
        }
        response = client.converse(**kwargs)
        content = response["output"]["message"]["content"]
        return next((c["text"] for c in content if "text" in c), "")
    except Exception as e:
        raise ValueError(f"ERROR: Can't invoke '{MODEL_ID}'. Reason: {e}") from e


def rtfpp_iam_review(policy_json: str) -> dict:
    # System prompt: domain expertise + strict "JSON only".
    system_prompt = (
        "You are a Cloud Security Engineer specializing in AWS IAM and least privilege. "
        "Return ONLY valid JSON. No prose, no code fences."
    )
    # User message: RTF++ (Role, Task, Format + Constraints, Examples, Evaluation)
    user_message = f"""Role: You are a Cloud Security Engineer specializing in AWS IAM and least privilege.
Task: Review the IAM policy below for excessive permissions and propose a safer alternative.
Constraints:
- Assume one bucket named "my-bucket".
- Require aws:SecureTransport = true for all actions.
- Require SSE with KMS for write operations.
- Return ONLY valid JSON.
Examples:
- Risk summary calls out wildcards and blast radius.
- Overly permissive actions include "s3:*".
- Alternative policy scopes to arn:aws:s3:::my-bucket and arn:aws:s3:::my-bucket/* with KMS + SecureTransport.
Evaluation:
- Is the policy least privilege?
- Are security conditions present?
- Do the fields match the requested JSON?
Format: Return JSON with fields: ["risk_summary","overly_permissive_actions","least_privilege_policy","references"]
Policy: {policy_json}"""
    raw = ask_bedrock(user_message, system=system_prompt, max_tokens=1024)
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"Model did not return valid JSON: {e}\nRaw: {raw}") from e
    # Minimal contract check (RTF++ adds lightweight validation)
    required = ["risk_summary", "overly_permissive_actions", "least_privilege_policy", "references"]
    missing = [k for k in required if k not in data]
    if missing:
        raise ValueError(f"Model JSON missing fields: {missing}\nRaw: {raw}")
    return data


def lambda_handler(event, context):
    # Define the policy to evaluate.
    policy_str = json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }]
    })
    try:
        # Call the RTF++ reviewer, pass the policy, and get a Python dict back.
        review = rtfpp_iam_review(policy_str)
        # Single-line structured log; CloudWatch can parse/expand it in the UI.
        logger.info(json.dumps({"event": "IAMReviewResult_RTFpp", "review": review}, ensure_ascii=False))
        # Print to the function output.
        print("========================================================================")
        print("IAM Review Result (RTF++):")
        print("========================================================================")
        print(json.dumps(review, indent=3, ensure_ascii=False))
        # Return pretty-printed JSON.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json; charset=utf-8"},
            "body": json.dumps(review, indent=2, ensure_ascii=False),
        }
    except Exception as e:
        return {
            "statusCode": 500,
            "headers": {"Content-Type": "application/json; charset=utf-8"},
            "body": json.dumps({"error": str(e)}, indent=2, ensure_ascii=False),
        }
Results

In this test, RTF++ outperformed plain RTF. Both structure and substance improved: the response stayed within strict JSON boundaries (more controlled and secure), and the recommended policy aligned with internal security standards—for example, enforcing aws:SecureTransport.
Don’t forget to add RTF and RTF++ to your prompt library.
The TCREI Loop: Define, Ground, Check, Improve (Google)
Task -> Context -> References -> Evaluate -> Iterate
Google recently introduced a framework that has quickly become one of my favorites. TCREI is a great framework that can be divided into two sections:
• TCR - define and ground your prompt
• EI - evaluate and improve
Begin by defining the task—what you want, who it’s for, and the scope. Provide context to clarify constraints, audience, environment, and success measures. Include references (documents, policies, examples) to anchor the model in reliable sources and minimize guesswork.
EI is a practice you should always keep in mind when working with AI. Prompts are rarely perfect on the first try. Always evaluate the results and refine your prompt based on what you observe—this feedback loop is essential for accuracy, reliability, and safety. You can do this through manual validation or through automation in your script, depending on your use case and requirements. Let’s review this together.
TCREI - TEMPLATE
------------- -------------
Task: Clearly define the objective with specific, measurable, aligned, and time-bound criteria.
Context: Provide detailed information about the work environment, stakeholders, constraints, and dependencies.
References: Include relevant internal policies, documentation, and credible external sources to ground the task.
Evaluate: Assess the accuracy, completeness, and security impact of the output to ensure the task meets expectations.
Iterate: Continuously refine the task by shortening, adding constraints, or rephrasing to improve clarity.
Example (TCREI):
Task: Review the IAM policy below for excessive permissions and propose a safer alternative.
Context: AWS; assume one bucket "my-bucket"; require aws:SecureTransport = true for all actions; require SSE with KMS for writes.
References: Use AWS docs (S3 bucket policies, encryption, and IAM best practices).
Format: Return JSON with fields:
["risk_summary", "overly_permissive_actions", "least_privilege_policy", "references"]
Evaluate (post-prompt):
Did it flag wildcards like "s3:*"?
Does the alternative include SecureTransport and KMS?
Are references real AWS doc URLs?
Iterate (if needed):
Tighten constraints:
“Ensure SecureTransport + KMS in policy and list 's3:*' explicitly.”
Retry once with clearer instructions.
Policy:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "*"
  }]
}
Next, let’s test this approach in a script using Amazon Bedrock.
Our code sends two prompts to the model: the first is intentionally under-specified and likely to fail; the second tightens the constraints and should succeed. An evaluation function checks whether the model’s JSON meets our requirements; if the first attempt doesn’t succeed, we automatically submit the second prompt.
Flow: Prompt #1 → AI → Evaluate → Pass → Finish; else Prompt #2 → AI → Evaluate → Finish
This script demonstrates the TCREI technique, with emphasis on evaluation and iteration by providing two examples. You can further extend it by asking the AI to automatically generate a second, third, fourth… prompt based on the evaluation feedback.
Code - Python
# lambda_function.py
# Example 3: Simple TCREI with Structured Logs – IAM Least-Privilege Review
#
# What this shows:
# - Attempt 1: intentionally light prompt (likely FAIL) ❌
# - Attempt 2: tightened constraints (likely PASS) ✅
# - Structured JSON logs for observability
#
# Requirements:
# - Runtime: Python 3.11/3.12
# - IAM Role: bedrock:InvokeModel + CloudWatch Logs permissions
# - Handler: lambda_function.lambda_handler
import json
import logging
import os
import time
import boto3

# --- Logging (simple structured logs) ---
logger = logging.getLogger()
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO").upper())


def log_event(event: str, request_id: str, details: dict):
    payload = {"event": event, "requestId": request_id}
    if isinstance(details, dict):
        payload.update(details)
    logger.info(json.dumps(payload, ensure_ascii=False))


# --- AWS clients/config ---
REGION = os.environ.get("AWS_REGION", "us-east-1")
MODEL_ID = "amazon.nova-micro-v1:0"
client = boto3.client("bedrock-runtime", region_name=REGION)


# --- Bedrock call (minimal) ---
def ask_bedrock(user_message: str, system: str, attempt: int, request_id: str, max_tokens: int = 1024) -> str:
    conversation = [{"role": "user", "content": [{"text": user_message}]}]
    start = time.perf_counter()
    log_event("BedrockInvokeStart", request_id, {
        "attempt": attempt, "modelId": MODEL_ID, "chars": len(user_message)
    })
    try:
        resp = client.converse(
            modelId=MODEL_ID,
            messages=conversation,
            inferenceConfig={"maxTokens": max_tokens},
            system=[{"text": system}],
        )
        text = next((c["text"] for c in resp["output"]["message"]["content"] if "text" in c), "")
        log_event("BedrockInvokeEnd", request_id, {
            "attempt": attempt,
            "latencyMs": round((time.perf_counter() - start) * 1000, 2),
            "outputChars": len(text)
        })
        return text
    except Exception as e:
        log_event("BedrockInvokeError", request_id, {"attempt": attempt, "error": str(e)})
        raise


# --- Minimal normalization to avoid shape errors ---
def normalize_lpp(lpp):
    # Accept a dict, a list of statements, or a JSON string.
    try:
        if isinstance(lpp, str):
            lpp = json.loads(lpp)
    except Exception:
        return None
    if isinstance(lpp, dict):
        st = lpp.get("Statement")
        if isinstance(st, dict):
            st = [st]
        if not isinstance(st, list):
            st = []
        st = [s for s in st if isinstance(s, dict)]
        return {"Version": lpp.get("Version", "2012-10-17"), "Statement": st}
    if isinstance(lpp, list) and all(isinstance(s, dict) for s in lpp):
        return {"Version": "2012-10-17", "Statement": lpp}
    return None


# --- Evaluate (simple but strict enough to make attempt 1 fail) ---
def evaluate_result(data_any):
    issues = []
    # Guard: top level must be a JSON object.
    if not isinstance(data_any, dict):
        issues.append("Output must be a JSON object with fields: risk_summary, overly_permissive_actions, least_privilege_policy, references.")
        return (False, issues)
    data = data_any
    # Required fields
    required = ["risk_summary", "overly_permissive_actions", "least_privilege_policy", "references"]
    missing = [k for k in required if k not in data]
    if missing:
        issues.append(f"Missing fields: {missing}")
    # Wildcard flagged
    rs = json.dumps(data.get("risk_summary", ""), ensure_ascii=False).lower()
    opa = json.dumps(data.get("overly_permissive_actions", []), ensure_ascii=False).lower()
    if "s3:*" not in rs and "s3:*" not in opa:
        issues.append("Did not explicitly flag wildcard 's3:*'.")
    # Alternative policy checks
    lpp = normalize_lpp(data.get("least_privilege_policy"))
    if not lpp:
        issues.append("least_privilege_policy is not a valid policy object.")
    else:
        statements = lpp.get("Statement", [])
        if not statements:
            issues.append("least_privilege_policy.Statement must be non-empty.")
        else:
            resources, actions = [], []
            for st in statements:
                r = st.get("Resource")
                if isinstance(r, list):
                    resources.extend(r)
                elif isinstance(r, str):
                    resources.append(r)
                a = st.get("Action")
                if isinstance(a, list):
                    actions.extend(a)
                elif isinstance(a, str):
                    actions.append(a)
            resources = [str(r) for r in resources]
            actions = [str(a).lower() for a in actions]
            # Exact scoping to my-bucket + objects, and no wildcards
            if "arn:aws:s3:::my-bucket" not in resources or "arn:aws:s3:::my-bucket/*" not in resources:
                issues.append("Alternative policy must scope to my-bucket and my-bucket/*.")
            if "*" in resources or "s3:*" in actions:
                issues.append("Alternative policy must not include wildcards in Resource or Action.")
        # Require SecureTransport and SSE-KMS for writes (heuristic)
        lpp_str = json.dumps(lpp, ensure_ascii=False).lower()
        if "aws:securetransport" not in lpp_str:
            issues.append("Alternative policy missing aws:SecureTransport condition.")
        if not any(k in lpp_str for k in ["kms", "sse-kms", "x-amz-server-side-encryption"]):
            issues.append("Alternative policy missing SSE-KMS for writes.")
    # References include AWS docs
    refs = data.get("references", [])
    if not isinstance(refs, list):
        refs = [refs] if refs else []
    if not any(("docs.aws.amazon.com" in str(r).lower() or "aws.amazon.com" in str(r).lower()) for r in refs):
        issues.append("References do not include AWS documentation URLs.")
    return (len(issues) == 0, issues)


# --- TCREI: Task, Context, References -> Evaluate -> Iterate ---
def tcrei_iam_review(policy_json: str, request_id: str, max_retries: int = 1) -> dict:
    system_prompt = (
        "You are a Cloud Security Engineer specializing in AWS IAM and least privilege. "
        "Return ONLY valid JSON. No prose, no code fences."
    )
    # Attempt 1: intentionally under-specified (likely FAIL) ❌
    user_message_1 = f"""Task: Review the IAM policy and propose a safer alternative.
Context: AWS; general S3 usage.
References: Use AWS docs as needed.
Format: Return JSON with fields: ["risk_summary","overly_permissive_actions","least_privilege_policy","references"]
Policy: {policy_json}"""
    log_event("TCREI_Start", request_id, {"attemptsPlanned": 1 + max_retries, "attempt1ExpectedToFail": True})
    raw1 = ask_bedrock(user_message_1, system=system_prompt, attempt=1, request_id=request_id)
    # If JSON parsing fails, mark the attempt as failed with a parsing issue.
    try:
        data1 = json.loads(raw1)
    except Exception as e:
        data1 = {}
        log_event("ParseModelJSONError_Attempt1", request_id, {"error": str(e)})
    ok1, issues1 = evaluate_result(data1)
    log_event("TCREI_Eval_Attempt1", request_id, {"pass": ok1, "issues": issues1})
    if ok1 or max_retries <= 0:
        log_event("TCREI_Complete", request_id, {"finalAttempt": 1, "pass": ok1})
        return {"attempt": 1, "evaluation": {"pass": ok1, "issues": issues1}, "review": data1}
    # Attempt 2: tighten constraints (should PASS) ✅
    user_message_2 = f"""Task: Review and correct the IAM policy with least privilege for a single S3 bucket.
Context: AWS; bucket "my-bucket"; enforce aws:SecureTransport = true for ALL actions; enforce SSE-KMS for ALL writes; avoid wildcards.
References: Use official AWS docs (S3 bucket policies, IAM policy elements, encryption).
Format: Return JSON with fields EXACTLY ["risk_summary","overly_permissive_actions","least_privilege_policy","references"].
Constraints:
- Explicitly list "s3:*" in overly_permissive_actions.
- Scope Resources to arn:aws:s3:::my-bucket and arn:aws:s3:::my-bucket/*.
- Include conditions for aws:SecureTransport and SSE with KMS for writes.
Policy: {policy_json}"""
    raw2 = ask_bedrock(user_message_2, system=system_prompt, attempt=2, request_id=request_id)
    try:
        data2 = json.loads(raw2)
    except Exception as e:
        data2 = {}
        log_event("ParseModelJSONError_Attempt2", request_id, {"error": str(e)})
    ok2, issues2 = evaluate_result(data2)
    log_event("TCREI_Eval_Attempt2", request_id, {"pass": ok2, "issues": issues2})
    log_event("TCREI_Complete", request_id, {"finalAttempt": 2, "pass": ok2})
    return {"attempt": 2, "evaluation": {"pass": ok2, "issues": issues2}, "review": data2}


# --- Event parsing (simple) ---
def parse_policy_from_event(event: dict) -> str:
    if isinstance(event, dict) and "policy" in event:
        return event["policy"] if isinstance(event["policy"], str) else json.dumps(event["policy"])
    if isinstance(event, dict) and "body" in event:
        try:
            body = json.loads(event["body"]) if isinstance(event["body"], str) else event["body"]
            if "policy" in body:
                return body["policy"] if isinstance(body["policy"], str) else json.dumps(body["policy"])
        except Exception:
            pass
    # Default demo policy
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }]
    })


# --- Lambda entrypoint ---
def lambda_handler(event, context):
    request_id = getattr(context, "aws_request_id", "unknown")
    log_event("InvokeStart", request_id, {"region": REGION})
    try:
        policy_str = parse_policy_from_event(event)
        log_event("ParsePolicy", request_id, {"policyBytes": len(policy_str)})
        result = tcrei_iam_review(policy_str, request_id=request_id, max_retries=1)
        log_event("InvokeSuccess", request_id, {
            "finalAttempt": result.get("attempt"),
            "pass": result.get("evaluation", {}).get("pass")
        })
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json; charset=utf-8"},
            "body": json.dumps(result, indent=2, ensure_ascii=False)
        }
    except Exception as e:
        log_event("Error", request_id, {"error": str(e)})
        return {
            "statusCode": 500,
            "headers": {"Content-Type": "application/json; charset=utf-8"},
            "body": json.dumps({"error": str(e)}, indent=2, ensure_ascii=False)
        }
Results

As expected, the first prompt failed because the model didn’t return the required JSON. We then sent a stricter prompt—bucket-scoped, no wildcards, SecureTransport, and SSE‑KMS—and the response validated and passed. The function completed successfully with pass=true.
This is precisely where TCREI excels. It highlights the importance of assessing the output and refining the prompt with stricter constraints until the requirements are satisfied. The evaluate-and-iterate loop is a practical safety net for more reliable workflows.
Always validate and iterate your AI outputs, no matter the prompting framework—whether you’re chatting with a model or building automations.
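The iterate step can even be automated end to end: feed the evaluation issues back into the next prompt as extra constraints. Here is a minimal sketch, assuming an evaluation step that returns a list of issue strings (like evaluate_result in the example above); the wording of the injected constraints is illustrative:

```python
# Sketch: turn evaluation issues into extra constraints for the next attempt.
# `base_prompt` is the previous user message; `issues` comes from an
# evaluation step that returns human-readable issue strings.
def tighten_prompt(base_prompt: str, issues: list) -> str:
    if not issues:
        return base_prompt  # nothing to fix; reuse the prompt as-is
    fixes = "\n".join(f"- Fix: {issue}" for issue in issues)
    return f"{base_prompt}\nAdditional constraints (from evaluation):\n{fixes}"

issues = [
    "Did not explicitly flag wildcard 's3:*'.",
    "Alternative policy missing aws:SecureTransport condition.",
]
next_prompt = tighten_prompt("Task: Review the IAM policy...", issues)
print(next_prompt)
```

Looping this until the evaluation passes (with a retry cap) gives you a self-correcting prompt pipeline.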
Tips and Tricks
To conclude this post, here are a few additional practices you can use to further refine your AI results:
• Few-shot examples: Show 1–3 short examples so the AI can copy the shape.
• Format-first: Define the output structure (JSON, Markdown, table) before content.
• Plan → Produce (two-pass): Ask for a brief plan, then the final deliverable.
• Clarify-first: Tell the AI to ask questions before acting if info is missing.
• Decompose big tasks: Break complex jobs into smaller steps or sub-tasks.
• Self-check and revise: Add a checklist and ask the AI to fix anything that fails.
• Compare options: Ask for 2–3 options and a quick pick with “why.”
• Style mirror: Provide a short sample and say “match this tone and format.”
• Guardrails: Add a tiny “don’t” list to block noise (e.g., “no free text outside JSON”).
• Reversed (Meta) prompting: ask AI to help you create a proper prompt structure based on the goal you want to achieve.
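Reversed prompting is easy to operationalize: wrap your goal in a request for a prompt. A minimal sketch (the meta-prompt wording is illustrative, not a fixed recipe):

```python
# Sketch: a meta-prompt that asks the model to draft a structured prompt
# for your goal. The wording is illustrative; adapt it to your house style.
def meta_prompt(goal: str) -> str:
    return (
        "You are a prompt engineer. Draft a prompt with Role, Task, "
        "Context, Constraints, Examples, and Output format sections "
        f"for this goal:\n{goal}\n"
        "Return only the prompt text."
    )

print(meta_prompt("Review an S3 bucket policy against company security standards."))
```

Send the result to your model, then refine the generated prompt with the same evaluate-and-iterate loop you use elsewhere.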
Fine-Tuning AI: Get the Results You Need
You can also control how AI responds by adjusting settings like temperature:
• Lower Temperatures: Give you more specific and focused answers.
• Higher Temperatures: Provide more varied and creative responses.
Try different settings to see what works best for you.
Example (from the previous code):
def ask_bedrock(user_message, system, max_tokens=1024, temperature=0.2):  # add a temperature parameter
    ...
    kwargs = {
        ...
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": temperature},  # include temperature
        ...
    }
    ...

def rtf_iam_review(policy_json, temperature=0.2):  # add a temperature parameter
    ...
    raw = ask_bedrock(user_message, system=system_prompt, max_tokens=1024, temperature=temperature)  # pass temperature
    ...
Summary
Great job reaching this point! You’ve learned important techniques to make your AI conversations more effective and reliable. Remember, clear and organized prompts result in better outcomes. Use frameworks like RTF and RTF++ to guide your AI, and don’t forget to evaluate and refine your prompts. Practice these tips to enhance your AI interactions and don’t stop creating.
Thanks for reading, and happy prompting!