Number of items completed.
Tracks how many items have been successfully processed so far in the current operation. This value increments as each item is completed, providing real-time progress indication.
The ratio of completed to total gives the completion percentage:
progress = (completed / total) * 100
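The formula above can be sketched as a small helper; the function name is illustrative, while the `completed` and `total` fields are the ones described in this document:

```typescript
// Compute the completion percentage from the completed/total pair.
// Guards against division by zero when total has not been set yet.
function progressPercent(completed: number, total: number): number {
  if (total <= 0) return 0;
  return (completed / total) * 100;
}

console.log(progressPercent(3, 12)); // 25
```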
Timestamp when the event was created.
ISO 8601 formatted date-time string indicating when this event was emitted by the system. This timestamp is crucial for event ordering, performance analysis, and debugging the agent workflow execution timeline.
Format: "YYYY-MM-DDTHH:mm:ss.sssZ" (e.g., "2024-01-15T14:30:45.123Z")
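In JavaScript/TypeScript, `Date.prototype.toISOString()` emits exactly this format (UTC, millisecond precision), so producing and round-tripping such timestamps needs no extra library:

```typescript
// toISOString always yields "YYYY-MM-DDTHH:mm:ss.sssZ" in UTC.
const created_at: string = new Date().toISOString();
console.log(created_at);

// The format round-trips through the Date constructor:
const parsed = new Date("2024-01-15T14:30:45.123Z");
console.log(parsed.toISOString()); // "2024-01-15T14:30:45.123Z"
```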
A unique identifier for the event.
Function calling trial statistics for the operation.
Records the complete trial history of function calling attempts, tracking total executions, successful completions, consent requests, validation failures, and invalid JSON responses. These metrics reveal the reliability and quality of AI agent autonomous operation with tool usage.
Trial statistics are critical for identifying operations where agents struggle with tool interfaces, generate invalid outputs, or require multiple correction attempts through self-healing spiral loops. High failure rates indicate opportunities for system prompt optimization or tool interface improvements.
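A hypothetical shape for these statistics, with field names inferred from the description above (the actual interface may differ), together with the kind of failure-rate check the text motivates:

```typescript
// Hypothetical field names inferred from the prose; not the actual AutoBE type.
interface FunctionCallingTrials {
  total: number;       // total function-calling executions
  success: number;     // successful completions
  consent: number;     // consent requests
  validation: number;  // validation failures
  invalidJson: number; // invalid JSON responses
}

// Failure rate as a quality signal for prompt/tool-interface tuning.
function failureRate(t: FunctionCallingTrials): number {
  if (t.total === 0) return 0;
  return (t.total - t.success) / t.total;
}

console.log(
  failureRate({ total: 10, success: 8, consent: 1, validation: 1, invalidJson: 0 }),
); // 0.2
```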
List of reviewed and validated test scenarios for the target endpoints.
Each scenario has undergone review for quality, correctness, and completeness. Each entry carries its validated scenario definition, confirmed dependency functions, and any refinements made during the review process. These represent production-ready test cases that have passed validation.
Current step in the test generation workflow.
Tracks progress through the test creation process, helping coordinate with other pipeline stages and maintain synchronization with the current requirements iteration.
Detailed token usage metrics for the operation.
Contains comprehensive token consumption data including total usage, input token breakdown with cache hit rates, and output token categorization by generation type (reasoning, predictions). This component-level tracking enables precise cost analysis and identification of operations that benefit most from prompt caching or require optimization.
Token usage directly translates to operational costs, making this metric essential for understanding the financial implications of different operation types and guiding resource allocation decisions.
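As a sketch of how token counts translate to cost, the following uses hypothetical per-token prices (real pricing depends on the model and provider) and illustrative function and constant names:

```typescript
// Hypothetical prices per 1K tokens; substitute your model's actual rates.
const INPUT_PRICE_PER_1K = 0.003;
const OUTPUT_PRICE_PER_1K = 0.015;

// Estimate operation cost in USD from input/output token counts.
function estimateCostUsd(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1000) * INPUT_PRICE_PER_1K +
    (outputTokens / 1000) * OUTPUT_PRICE_PER_1K
  );
}

console.log(estimateCostUsd(2000, 500)); // ≈ 0.0135
```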
Total number of items to process.
Represents the complete count of operations, files, endpoints, or other entities that need to be processed in the current workflow step. This value is typically determined at the beginning of an operation and remains constant throughout the process.
Used together with the completed field to calculate progress percentage
and estimate time to completion.
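One way to estimate time to completion from these two fields, assuming a roughly constant per-item rate (the helper name and `elapsedMs` parameter are illustrative):

```typescript
// Estimate remaining time by extrapolating the average per-item duration.
// Returns null until at least one item has completed (no rate information yet).
function estimateRemainingMs(
  completed: number,
  total: number,
  elapsedMs: number,
): number | null {
  if (completed <= 0) return null;
  const msPerItem = elapsedMs / completed;
  return msPerItem * (total - completed);
}

console.log(estimateRemainingMs(4, 10, 2_000)); // 3000
```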
Unique identifier for the event type.
A literal string that discriminates between different event types in the AutoBE system. This field enables TypeScript's discriminated union feature, allowing type-safe event handling through switch statements or conditional checks.
Examples: "analyzeWrite", "prismaSchema", "interfaceOperation", "testScenario"
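The discriminated-union pattern described above can be sketched as follows; the `type` literals come from the examples in this document, but the payload fields and type names are illustrative:

```typescript
// Hypothetical narrowed event types; payload fields are illustrative.
interface AutoBeTestScenarioEvent {
  type: "testScenario";
  scenarios: string[];
}
interface AutoBePrismaSchemaEvent {
  type: "prismaSchema";
  schema: string;
}
type AutoBeEvent = AutoBeTestScenarioEvent | AutoBePrismaSchemaEvent;

// The literal `type` field lets TypeScript narrow the union in each branch,
// so each case can safely access that variant's fields.
function describe(event: AutoBeEvent): string {
  switch (event.type) {
    case "testScenario":
      return `${event.scenarios.length} scenario(s)`;
    case "prismaSchema":
      return `schema of length ${event.schema.length}`;
  }
}

console.log(describe({ type: "testScenario", scenarios: ["a", "b"] }));
// "2 scenario(s)"
```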
Event fired when the Test agent completes reviewing and validating generated e2e test scenarios.
This event occurs after the Test agent has analyzed the initially generated test scenarios, performed quality checks, validated their structure, and applied refinements where needed. The event provides visibility into the review process results and the finalized structure of test scenarios that have passed validation.
Each reviewed scenario carries a validated test flow, confirmed dependency chains, and quality assurance results, ensuring the test cases are robust and ready for execution in the testing pipeline.
Author
Michael