Evaluating single-turn agent interactions follows a pattern that most teams understand well. You present an input, collect the output, and judge the result. Frameworks like the Strands Evals SDK make this process systematic through evaluators that assess helpfulness, faithfulness, and tool usage. In a previous blog post, we covered how to build comprehensive evaluation suites for AI agents using these capabilities. However, production conversations rarely stop at one turn.
Real users engage in exchanges that unfold over multiple turns. They ask follow-up questions when answers are incomplete, change course when new information surfaces, and express frustration when their needs go unmet. A travel assistant that handles “Book me a flight to Paris” well in isolation might struggle when the same user follows up with “Actually, can we look at trains instead?” or “What about hotels near the Eiffel Tower?” Testing these dynamic patterns requires more than static test cases with fixed inputs and expected outputs.
The core challenge is scale: you can’t manually conduct hundreds of multi-turn conversations every time your agent changes, and writing scripted conversation flows locks you into predetermined paths that miss how real users behave. What evaluation teams need is a way to generate realistic, goal-driven users programmatically and let them converse naturally with an agent across multiple turns. In this post, we explore how ActorSimulator in the Strands Evals SDK addresses this challenge with structured user simulation that integrates into your evaluation pipeline.
Why multi-turn evaluation is fundamentally harder
Single-turn evaluation has a straightforward structure. The input is known ahead of time, the output is self-contained, and the evaluation context is limited to that single exchange. Multi-turn conversations break each of these assumptions.
In a multi-turn interaction, each message depends on everything that came before it. The user’s second question is shaped by how the agent answered the first. A partial answer draws a follow-up about whatever was left out, a misunderstanding leads the user to restate their original request, and a surprising suggestion can send the conversation in a new direction.
These adaptive behaviors create conversation paths that can’t be predicted at test-design time. A static dataset of input/output pairs, no matter how large, can’t capture this dynamic quality because the “correct” next user message depends on what the agent just said.
Manual testing covers this gap in theory but fails in practice. Testers can conduct realistic multi-turn conversations, but doing so for every scenario, across every persona type, after every agent change is not sustainable. As the agent’s capabilities grow, the number of conversation paths grows combinatorially, well beyond what teams can explore manually.
Some teams turn to prompt engineering as a shortcut, asking a large language model (LLM) to “act like a user” during testing. Without structured persona definitions and explicit goal tracking, these approaches produce inconsistent results. The simulated user’s behavior drifts between runs, making it difficult to compare evaluations over time or distinguish genuine regressions from random variation. A structured approach to user simulation can bridge this gap by combining the realism of human conversation with the repeatability and scale of automated testing.
What makes a good simulated user
Simulation-based testing is well established in other engineering disciplines. Flight simulators test pilot responses to conditions that would be dangerous or impossible to reproduce in the real world. Game engines use AI-driven agents to explore millions of player behavior paths before launch. The same principle applies to conversational AI. You create a controlled environment where realistic actors interact with your system under conditions you define, then measure the outcomes.
For AI agent evaluation, a useful simulated user starts with a consistent persona. One that behaves like a technical expert in one turn and a confused novice in the next produces unreliable evaluation data. Consistency means maintaining the same communication style, expertise level, and personality traits through every exchange, just as a real person would.
Equally important is goal-driven behavior. Real users come to an agent with something they want to accomplish. They persist until they achieve it, adjust their approach when something is not working, and recognize when their goal has been met. Without explicit goals, a simulated user tends to either end conversations too early or continue asking questions indefinitely, neither of which reflects real usage.
The simulated user must also respond adaptively to what the agent says, not follow a predetermined script. When the agent asks a clarifying question, the actor should answer it in character. If the response is incomplete, the actor follows up on whatever was left out rather than moving on. If the conversation drifts off topic, the actor steers it back toward the original goal. These adaptive behaviors make simulated conversations valuable as evaluation data because they exercise the same conversation dynamics your agent faces in production.
Building persona consistency, goal tracking, and adaptive behavior into a simulation framework is what differentiates structured user simulation from ad hoc prompting. ActorSimulator in Strands Evals is designed around exactly these principles.
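To make the contrast with ad hoc prompting concrete, here is a minimal, framework-independent sketch of a persona held as structured data rather than free-form prompt text. The class and field names are illustrative only, not the SDK's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SimulatedUserPersona:
    """Illustrative persona record; field names are hypothetical, not the SDK schema."""
    communication_style: str
    expertise_level: str
    traits: list[str] = field(default_factory=list)
    goal: str = ""

    def to_system_prompt(self) -> str:
        # Render the structured fields into a stable prompt, so every run
        # starts from the same persona definition instead of ad hoc text.
        trait_text = ", ".join(self.traits) or "no notable traits"
        return (
            f"You are a {self.expertise_level} user with a "
            f"{self.communication_style} communication style ({trait_text}). "
            f"Your goal: {self.goal}. Stay in character on every turn."
        )

persona = SimulatedUserPersona(
    communication_style="casual",
    expertise_level="beginner",
    traits=["budget-conscious"],
    goal="book a flight to Paris under budget",
)
print(persona.to_system_prompt())
```

Because the persona lives in data, two evaluation runs render the identical prompt, which is exactly the repeatability that free-form "act like a user" prompting lacks.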
How ActorSimulator works
ActorSimulator implements these simulation qualities through a system that wraps a Strands Agent configured to behave as a realistic user persona. The process begins with profile generation. Given a test case containing an input query and an optional task description, ActorSimulator uses an LLM to create a complete actor profile. A test case with input “I need help booking a flight to Paris” and task description “Complete flight booking under budget” might produce a budget-conscious traveler with beginner-level experience and a casual communication style. Profile generation gives each simulated conversation a distinct, consistent character.
With the profile established, the simulator manages the conversation turn by turn. It maintains the full conversation history and generates each response in context, keeping the simulated user’s behavior aligned with their profile and goals throughout. When your agent addresses only part of the request, the simulated user naturally follows up on the gaps. A clarifying question from your agent gets a response that stays consistent with the persona. The conversation feels organic because every response reflects both the actor’s personality and everything said so far.
Goal tracking runs alongside the conversation. ActorSimulator includes a built-in goal completion assessment tool that the simulated user can invoke to evaluate whether their original objective has been met. When the goal is satisfied, or the simulated user determines that the agent can’t complete their request, the simulator emits a stop signal and the conversation ends. If the maximum turn count is reached before the goal is met, the conversation also stops, which gives you a signal that the agent might not be resolving user needs effectively. This mechanism makes sure conversations have a natural endpoint rather than running indefinitely or cutting off arbitrarily.
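The stopping behavior described above amounts to three conditions, any of which ends the conversation. This small, framework-independent sketch captures that logic; the enum and function names are illustrative, not ActorSimulator's internals:

```python
from enum import Enum

class StopReason(Enum):
    GOAL_MET = "goal_met"
    AGENT_CANNOT_COMPLETE = "agent_cannot_complete"
    MAX_TURNS_REACHED = "max_turns_reached"
    CONTINUE = "continue"

def check_stop(goal_met: bool, cannot_complete: bool,
               turn: int, max_turns: int) -> StopReason:
    # Goal satisfied: the natural endpoint for the conversation.
    if goal_met:
        return StopReason.GOAL_MET
    # Simulated user concludes the agent can't help: stop rather than loop.
    if cannot_complete:
        return StopReason.AGENT_CANNOT_COMPLETE
    # Turn budget exhausted before the goal was met — a signal the agent
    # may not be resolving user needs efficiently.
    if turn >= max_turns:
        return StopReason.MAX_TURNS_REACHED
    return StopReason.CONTINUE

print(check_stop(False, False, turn=5, max_turns=5).value)  # max_turns_reached
```

Tracking which of the three reasons ends each run is useful later: a suite dominated by max-turn stops points at a different problem than one dominated by "agent cannot complete" stops.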
Each response from the simulated user also includes structured reasoning alongside the message text. You can inspect why the simulated user chose to say what they said, whether they were following up on missing information, expressing confusion, or redirecting the conversation. This transparency is valuable during evaluation development because you can see the reasoning behind each turn, making it more straightforward to trace where conversations succeed or go off track.
Getting started with ActorSimulator
To get started, install the Strands Evals SDK using pip install strands-agents-evals. For a step-by-step setup, refer to our documentation or our previous blog post for more details. Putting these concepts into practice requires minimal code. You define a test case with an input query and a task description that captures the user’s goal. ActorSimulator handles profile generation, conversation management, and goal tracking automatically.
The following example evaluates a travel assistant agent through a multi-turn simulated conversation.
from strands import Agent
from strands_evals import ActorSimulator, Case, Experiment

# Define your test case
case = Case(
    input="I want to plan a trip to Tokyo with hotel and activities",
    metadata={"task_description": "Complete travel package arranged"}
)

# Create the agent you want to evaluate
agent = Agent(
    system_prompt="You are a helpful travel assistant.",
    callback_handler=None
)

# Create a user simulator from the test case
user_sim = ActorSimulator.from_case_for_user_simulator(
    case=case,
    max_turns=5
)

# Run the multi-turn conversation
user_message = case.input
conversation_history = []
while user_sim.has_next():
    # Agent responds to the user
    agent_response = agent(user_message)
    agent_message = str(agent_response)
    conversation_history.append({
        "role": "assistant",
        "content": agent_message
    })
    # Simulator generates the next user message
    user_result = user_sim.act(agent_message)
    user_message = str(user_result.structured_output.message)
    conversation_history.append({
        "role": "user",
        "content": user_message
    })

print(f"Conversation completed in {len(conversation_history) // 2} turns")
The conversation loop continues until has_next() returns False, which happens when the simulated user’s goals are met, the simulated user determines that the agent can’t complete the request, or the maximum turn limit is reached. The resulting conversation_history contains the full multi-turn transcript, ready for evaluation.
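Because conversation_history is a plain list of role/content dicts, a small helper (not part of the SDK, just a debugging convenience) makes transcripts easier to scan between runs:

```python
def format_transcript(history: list[dict]) -> str:
    """Render a role/content message list as a readable transcript."""
    lines = []
    for i, msg in enumerate(history):
        # In the loop above, the assistant message is appended first each
        # turn, so pairs of messages share one turn number.
        label = "User" if msg["role"] == "user" else "Agent"
        lines.append(f"[turn {i // 2 + 1}] {label}: {msg['content']}")
    return "\n".join(lines)

# Example with a hand-written two-message history
sample = [
    {"role": "assistant", "content": "Which dates work for your Tokyo trip?"},
    {"role": "user", "content": "The first week of May, ideally."},
]
print(format_transcript(sample))
```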
Integration with evaluation pipelines
A standalone conversation loop is useful for quick experiments, but production evaluation requires capturing traces and feeding them into your evaluator pipeline. The next example combines ActorSimulator with OpenTelemetry span collection and Strands Evals session mapping. The task function runs a simulated conversation, collects spans from each turn, and maps them into a structured session for evaluation.
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter
from strands import Agent
from strands_evals import ActorSimulator, Case, Experiment
from strands_evals.evaluators import HelpfulnessEvaluator
from strands_evals.telemetry import StrandsEvalsTelemetry
from strands_evals.mappers import StrandsInMemorySessionMapper

# Set up telemetry for capturing agent traces
telemetry = StrandsEvalsTelemetry()
memory_exporter = InMemorySpanExporter()
span_processor = BatchSpanProcessor(memory_exporter)
telemetry.tracer_provider.add_span_processor(span_processor)

def evaluation_task(case: Case) -> dict:
    # Create the simulator
    user_sim = ActorSimulator.from_case_for_user_simulator(
        case=case,
        max_turns=3
    )
    # Create the agent
    agent = Agent(
        system_prompt="You are a helpful travel assistant.",
        callback_handler=None
    )
    # Collect spans across the conversation
    all_target_spans = []
    user_message = case.input
    while user_sim.has_next():
        memory_exporter.clear()
        agent_response = agent(user_message)
        agent_message = str(agent_response)
        # Capture telemetry
        turn_spans = list(memory_exporter.get_finished_spans())
        all_target_spans.extend(turn_spans)
        # Generate the next user message
        user_result = user_sim.act(agent_message)
        user_message = str(user_result.structured_output.message)
    # Map the spans to a session for evaluation
    mapper = StrandsInMemorySessionMapper()
    session = mapper.map_to_session(
        all_target_spans,
        session_id="test-session"
    )
    return {"output": agent_message, "trajectory": session}

# Create the evaluation dataset
test_cases = [
    Case(
        name="booking-simple",
        input="I need to book a flight to Paris next week",
        metadata={
            "category": "booking",
            "task_description": "Flight booking confirmed"
        }
    )
]
evaluator = HelpfulnessEvaluator()
dataset = Experiment(cases=test_cases, evaluator=evaluator)

# Run the evaluations
report = dataset.run_evaluations(evaluation_task)
report.run_display()
This approach captures complete traces of your agent’s behavior across conversation turns. The spans include tool calls, model invocations, and timing information for every turn in the simulated conversation. By mapping these spans into a structured session, you make the full multi-turn interaction available to evaluators like GoalSuccessRateEvaluator and HelpfulnessEvaluator, which can then assess the conversation as a whole rather than as isolated turns.
Custom actor profiles for targeted testing
Automated profile generation covers most evaluation scenarios well, but some testing goals require specific personas. You might want to verify that your agent handles an impatient expert user differently from a patient beginner, or that it responds appropriately to a user with domain-specific needs. For these cases, ActorSimulator accepts a fully defined actor profile that you control.
from strands_evals.types.simulation import ActorProfile
from strands_evals import ActorSimulator
from strands_evals.simulation.prompt_templates.actor_system_prompt import (
    DEFAULT_USER_SIMULATOR_PROMPT_TEMPLATE
)

# Define a custom actor profile
actor_profile = ActorProfile(
    traits={
        "personality": "analytical and detail-oriented",
        "communication_style": "direct and technical",
        "expertise_level": "expert",
        "patience_level": "low"
    },
    context="Experienced business traveler with elite status who values efficiency",
    actor_goal="Book a business class flight with specific seat preferences and lounge access"
)

# Initialize the simulator with the custom profile
user_sim = ActorSimulator(
    actor_profile=actor_profile,
    initial_query="I need to book a business class flight to London next Tuesday",
    system_prompt_template=DEFAULT_USER_SIMULATOR_PROMPT_TEMPLATE,
    max_turns=10
)
By defining traits like patience level, communication style, and expertise, you can systematically test how your agent performs across different user segments. An agent that scores well with patient, non-technical users but poorly with impatient experts reveals a specific quality gap that you can address. Running the same goal across multiple persona configurations turns user simulation into a tool for understanding your agent’s strengths and weaknesses by user type.
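One way to run the same goal across persona configurations is to sweep trait combinations. This sketch only builds the trait dictionaries; the trait axes and values are illustrative, and each dictionary would then feed an ActorProfile as in the custom-profile example above:

```python
from itertools import product

# Trait axes to sweep; the values here are illustrative choices.
patience_levels = ["low", "high"]
expertise_levels = ["beginner", "expert"]

persona_configs = [
    {
        "patience_level": patience,
        "expertise_level": expertise,
        # Illustrative rule: experts get a direct style, beginners a casual one.
        "communication_style": "direct" if expertise == "expert" else "casual",
    }
    for patience, expertise in product(patience_levels, expertise_levels)
]

# Every patience/expertise combination, to be paired with one shared goal.
for config in persona_configs:
    print(config)
print(f"{len(persona_configs)} persona configurations for the same goal")
```

Comparing goal completion rates across the resulting runs shows which user segments your agent serves well and which expose gaps.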
Best practices for simulation-based evaluation
These best practices help you get the most out of simulation-based evaluation:
- Set max_turns based on task complexity, using 3-5 for focused tasks and 8-10 for multi-step workflows. If most conversations reach the limit without completing the goal, increase it.
- Write specific task descriptions that the simulator can evaluate against. “Help the user book a flight” is too vague to judge completion reliably, while “flight booking confirmed with dates, destination, and price” gives a concrete objective.
- Use auto-generated profiles for broad coverage across user types, and custom profiles to reproduce specific patterns from your production logs, such as an impatient expert or a first-time user.
- Focus on patterns across your test suite rather than individual transcripts. Consistent redirects from the simulated user suggest that the agent is drifting off topic, and declining goal completion rates after an agent change point to a regression.
- Start with a small set of test cases covering your most common scenarios, and expand to edge cases and additional personas as your evaluation practice matures.
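As a sketch of the "patterns over transcripts" advice, aggregating per-run outcomes into suite-level rates takes only a few lines. The result records here are hypothetical, not an SDK type — in practice you would collect the stop reason and turn count from each simulated conversation:

```python
from collections import Counter

# Hypothetical per-run outcomes collected from simulated conversations.
runs = [
    {"case": "booking-simple", "stop_reason": "goal_met", "turns": 3},
    {"case": "booking-simple", "stop_reason": "goal_met", "turns": 4},
    {"case": "booking-complex", "stop_reason": "max_turns_reached", "turns": 8},
    {"case": "booking-complex", "stop_reason": "goal_met", "turns": 5},
]

# Suite-level signals: how often goals complete, and how long it takes.
reasons = Counter(run["stop_reason"] for run in runs)
goal_rate = reasons["goal_met"] / len(runs)
avg_turns = sum(run["turns"] for run in runs) / len(runs)

print(f"goal completion rate: {goal_rate:.0%}")  # 75%
print(f"average turns: {avg_turns:.1f}")         # 5.0
```

Tracking these numbers run over run is what turns individual transcripts into a regression signal: a drop in the goal rate after an agent change is actionable in a way a single odd conversation is not.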
Conclusion
We showed how ActorSimulator in Strands Evals enables systematic multi-turn evaluation of conversational AI agents through realistic user simulation. Rather than relying on static test cases that capture only single exchanges, you can define goals and personas and let simulated users interact with your agent across natural, adaptive conversations. The resulting transcripts feed directly into the same evaluation pipeline that you use for single-turn testing, giving you helpfulness scores, goal success rates, and detailed traces across every conversation turn.
To get started, explore the working examples in the Strands Agents samples repository. For teams evaluating agents deployed through Amazon Bedrock AgentCore, the AgentCore evaluations sample demonstrates how to simulate interactions with deployed agents. Start with a handful of test cases representing your most common user scenarios, run them through ActorSimulator, and review the results. As your evaluation practice matures, expand to cover more personas, edge cases, and conversation patterns.
About the authors