
Evaluating Remote Traces

Trace providers fetch agent execution data from observability backends and convert it into the format the evaluation pipeline expects. This lets you run evaluators against traces from production or staging agents without re-running them.

| Provider | Backend | Auth |
| --- | --- | --- |
| CloudWatchProvider | AWS CloudWatch Logs (Bedrock AgentCore runtime logs) | AWS credentials (boto3) |
| LangfuseProvider | Langfuse | API keys |

The CloudWatch provider works out of the box since boto3 is a core dependency:

pip install strands-agents-evals

For the Langfuse provider, install the optional langfuse extra:

pip install strands-agents-evals[langfuse]

The CloudWatchProvider queries CloudWatch Logs Insights to retrieve OpenTelemetry log records from Bedrock AgentCore runtime log groups.

from strands_evals.providers import CloudWatchProvider

# Option 1: Provide the log group directly
provider = CloudWatchProvider(
    log_group="/aws/bedrock-agentcore/runtimes/my-agent-abc123-DEFAULT",
    region="us-east-1",
)

# Option 2: Discover the log group from the agent name
provider = CloudWatchProvider(agent_name="my-agent", region="us-east-1")

You must provide either log_group or agent_name. When using agent_name, the provider calls describe_log_groups to find the runtime log group automatically.

The region parameter falls back to the AWS_REGION environment variable, then AWS_DEFAULT_REGION, then us-east-1.

| Parameter | Default | Description |
| --- | --- | --- |
| region | AWS_REGION env var | AWS region for the CloudWatch client |
| log_group | (none) | Full CloudWatch log group path |
| agent_name | (none) | Agent name used to discover the log group |
| lookback_days | 30 | How many days back to search for traces |
| query_timeout_seconds | 60.0 | Maximum seconds to wait for a Logs Insights query |

The LangfuseProvider fetches traces and observations via the Langfuse Python SDK, converting them to typed spans for evaluation.

from strands_evals.providers import LangfuseProvider

# Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from env by default
provider = LangfuseProvider()

# Or pass credentials explicitly
provider = LangfuseProvider(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://us.cloud.langfuse.com",
)

| Parameter | Default | Description |
| --- | --- | --- |
| public_key | LANGFUSE_PUBLIC_KEY env var | Langfuse public API key |
| secret_key | LANGFUSE_SECRET_KEY env var | Langfuse secret API key |
| host | LANGFUSE_HOST env var or https://us.cloud.langfuse.com | Langfuse API host URL |
| timeout | 120 | Request timeout in seconds |

All providers implement the same TraceProvider interface with a single method:

data = provider.get_evaluation_data(session_id="my-session-id")
# data.output -> str (final agent response)
# data.trajectory -> Session (traces and spans)
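
Because every provider exposes the same single method, cross-cutting behavior can be layered on with a thin wrapper. As one example, a caching wrapper avoids re-querying the backend when several evaluators reuse the same session; the provider below is a stand-in stub, not the library's class:

```python
from dataclasses import dataclass

# Stub mirroring the documented TaskOutput shape (output + trajectory).
@dataclass
class TaskOutput:
    output: str
    trajectory: object = None

# Stand-in for a real provider; counts backend calls for demonstration.
class FakeProvider:
    def __init__(self):
        self.calls = 0

    def get_evaluation_data(self, session_id: str) -> TaskOutput:
        self.calls += 1
        return TaskOutput(output=f"response for {session_id}")

class CachingProvider:
    """Wraps any provider and fetches each session from the backend only once."""

    def __init__(self, inner):
        self.inner = inner
        self._cache: dict[str, TaskOutput] = {}

    def get_evaluation_data(self, session_id: str) -> TaskOutput:
        if session_id not in self._cache:
            self._cache[session_id] = self.inner.get_evaluation_data(session_id)
        return self._cache[session_id]

backend = FakeProvider()
provider = CachingProvider(backend)
provider.get_evaluation_data("s1")
provider.get_evaluation_data("s1")
print(backend.calls)  # the backend was queried only once
```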

Pass the provider’s data into the standard Experiment pipeline by wrapping it in a task function:

from strands_evals import Case, Experiment
from strands_evals.evaluators import CoherenceEvaluator, OutputEvaluator
from strands_evals.providers import CloudWatchProvider

provider = CloudWatchProvider(log_group="/aws/...", region="us-east-1")

def task(case: Case) -> dict:
    return provider.get_evaluation_data(case.input)

cases = [
    Case(
        name="session_1",
        input="my-session-id",
        expected_output="any",
    ),
]

evaluators = [
    OutputEvaluator(
        rubric="Score 1.0 if the output is coherent. Score 0.0 otherwise."
    ),
    CoherenceEvaluator(),
]

experiment = Experiment(cases=cases, evaluators=evaluators)
reports = experiment.run_evaluations(task)

for report in reports:
    print(f"{report.overall_score:.2f} - {report.reasons}")

The same pattern works with LangfuseProvider — just swap the provider initialization.

Providers raise specific exceptions when traces cannot be retrieved:

from strands_evals.providers import ProviderError, SessionNotFoundError

try:
    data = provider.get_evaluation_data("unknown-session")
except SessionNotFoundError:
    print("No traces found for that session")
except ProviderError:
    print("Provider unreachable or query failed")

Both exceptions inherit from TraceProviderError, so you can catch that for a single handler:

from strands_evals.providers import TraceProviderError

try:
    data = provider.get_evaluation_data(session_id)
except TraceProviderError as e:
    print(f"Failed to retrieve traces: {e}")
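
The hierarchy behind this can be sketched with stub classes. The class names come from this page; their definitions here are assumptions used only to show why a single except clause on the base class catches both:

```python
# Stub hierarchy mirroring the documented relationship: both specific
# exceptions derive from TraceProviderError.
class TraceProviderError(Exception):
    pass

class SessionNotFoundError(TraceProviderError):
    pass

class ProviderError(TraceProviderError):
    pass

def describe(exc: Exception) -> str:
    # A handler on the base class catches any provider failure.
    try:
        raise exc
    except TraceProviderError as e:
        return f"trace retrieval failed: {type(e).__name__}"

print(describe(SessionNotFoundError()))
print(describe(ProviderError()))
```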

Subclass TraceProvider and implement get_evaluation_data to integrate with any observability backend:

from strands_evals.providers import TraceProvider
from strands_evals.types.evaluation import TaskOutput

class MyProvider(TraceProvider):
    def get_evaluation_data(self, session_id: str) -> TaskOutput:
        # 1. Fetch traces from your backend
        # 2. Convert to a Session object with Trace and Span types
        # 3. Extract the final agent response
        return TaskOutput(output="final response", trajectory=session)

The returned TaskOutput must contain:

  • output: The final agent response text
  • trajectory: A Session object containing Trace objects with typed spans (AgentInvocationSpan, InferenceSpan, ToolExecutionSpan)
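
As a fuller sketch, the provider below serves pre-recorded traces from memory (for example, loaded from a JSON dump). Session, Trace, and TaskOutput are stubbed here so the example runs standalone; in real code you would import the library's types and subclass TraceProvider instead:

```python
from dataclasses import dataclass, field

# Stubs standing in for the library's types; only the shapes described
# in the docs (output text + trajectory of traces) are assumed.
@dataclass
class Trace:
    spans: list = field(default_factory=list)

@dataclass
class Session:
    traces: list = field(default_factory=list)

@dataclass
class TaskOutput:
    output: str
    trajectory: Session

class InMemoryProvider:
    """Serves pre-recorded (response, session) pairs keyed by session id."""

    def __init__(self, records: dict[str, tuple[str, Session]]):
        self._records = records

    def get_evaluation_data(self, session_id: str) -> TaskOutput:
        if session_id not in self._records:
            raise KeyError(f"no traces for session {session_id}")
        final_response, session = self._records[session_id]
        return TaskOutput(output=final_response, trajectory=session)

provider = InMemoryProvider({"s1": ("hello", Session(traces=[Trace()]))})
data = provider.get_evaluation_data("s1")
print(data.output)  # -> hello
```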