Number of items completed.
Tracks how many items have been successfully processed so far in the current operation. This value increments as each item is completed, providing real-time progress indication.
The ratio of completed to total gives the completion percentage:
progress = (completed / total) * 100
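For instance, a minimal sketch in TypeScript (the helper name is hypothetical; completed and total are the fields described here):

```typescript
/**
 * Compute a progress percentage from the event's completed/total fields.
 * Guards against division by zero before total is known.
 */
function progressPercentage(completed: number, total: number): number {
  if (total <= 0) return 0;
  return Math.min(100, (completed / total) * 100);
}

// e.g. progressPercentage(42, 120) === 35
```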
Timestamp when the event was created.
ISO 8601 formatted date-time string indicating when this event was emitted by the system. This timestamp is crucial for event ordering, performance analysis, and debugging the agent workflow execution timeline.
Format: "YYYY-MM-DDTHH:mm:ss.sssZ" (e.g., "2024-01-15T14:30:45.123Z")
A unique identifier for the event.
Function calling trial statistics for the operation.
Records the complete trial history of function calling attempts, tracking total executions, successful completions, consent requests, validation failures, and invalid JSON responses. These metrics reveal how reliably the AI agent operates autonomously with its tools.
Trial statistics are critical for identifying operations where agents struggle with tool interfaces, generate invalid outputs, or require multiple correction attempts through self-healing spiral loops. High failure rates indicate opportunities for system prompt optimization or tool interface improvements.
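As a rough illustration only, not the actual AutoBE type, such a statistics record might look like this:

```typescript
/** Illustrative shape for function calling trial statistics. */
interface FunctionCallingTrialStats {
  total: number;              // total function calling attempts
  success: number;            // attempts that completed successfully
  consent: number;            // attempts that paused to request consent
  validationFailure: number;  // outputs rejected by schema validation
  invalidJson: number;        // responses that were not parseable JSON
}

/** Failure ratio as a quick signal that the prompt or tool interface needs work. */
function failureRate(stats: FunctionCallingTrialStats): number {
  if (stats.total === 0) return 0;
  return (stats.validationFailure + stats.invalidJson) / stats.total;
}
```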
List of test scenarios generated for the target endpoints.
Each scenario contains the endpoint to test, a generated test code draft, and any dependency functions that must be called before the main test. The scenarios represent complete test cases ready for compilation and execution.
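As an illustration only (the interface and field names are hypothetical, not the actual AutoBE types), a scenario entry might carry roughly this structure:

```typescript
/** Illustrative shape of a generated e2e test scenario. */
interface TestScenarioSketch {
  /** The API endpoint the scenario exercises, e.g. "POST /articles". */
  endpoint: { method: string; path: string };
  /** Draft test code produced by the Test agent. */
  draft: string;
  /** Dependency functions that must run first to set up prerequisite data. */
  dependencies: { functionName: string; purpose: string }[];
}
```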
Current step in the test generation workflow.
Tracks progress through the test creation process, helping to coordinate with other pipeline stages and stay synchronized with the current requirements iteration.
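A hedged sketch of one such synchronization check, assuming the step is compared against the latest requirements iteration (the helper name and comparison rule are illustrative):

```typescript
// Illustrative staleness check: if the event's step lags behind the latest
// requirements iteration, the generated artifacts may need to be regenerated.
function isOutdated(eventStep: number, currentRequirementsIteration: number): boolean {
  return eventStep < currentRequirementsIteration;
}
```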
Detailed token usage metrics for the operation.
Contains comprehensive token consumption data including total usage, input token breakdown with cache hit rates, and output token categorization by generation type (reasoning, predictions). This component-level tracking enables precise cost analysis and identification of operations that benefit most from prompt caching or require optimization.
Token usage directly translates to operational costs, making this metric essential for understanding the financial implications of different operation types and guiding resource allocation decisions.
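A hedged sketch of turning such a breakdown into a cost estimate; the field names and per-token prices below are placeholders, not AutoBE's real types or any provider's actual pricing:

```typescript
/** Illustrative token usage breakdown. */
interface TokenUsageSketch {
  input: { total: number; cached: number }; // cached tokens are typically billed at a discount
  output: { total: number };                // includes reasoning/prediction tokens
}

/** Estimate cost in USD from per-million-token prices (placeholder values). */
function estimateCost(usage: TokenUsageSketch): number {
  const INPUT_PER_M = 3.0;        // placeholder: price per 1M uncached input tokens
  const CACHED_INPUT_PER_M = 1.5; // placeholder: price per 1M cached input tokens
  const OUTPUT_PER_M = 15.0;      // placeholder: price per 1M output tokens
  const uncached = usage.input.total - usage.input.cached;
  return (
    (uncached * INPUT_PER_M +
      usage.input.cached * CACHED_INPUT_PER_M +
      usage.output.total * OUTPUT_PER_M) / 1_000_000
  );
}
```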
Total number of items to process.
Represents the complete count of operations, files, endpoints, or other entities that need to be processed in the current workflow step. This value is typically determined at the beginning of an operation and remains constant throughout the process.
Used together with the completed field to calculate the progress percentage and estimate time to completion.
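For example, a naive time-to-completion estimate can be derived from these two fields plus elapsed time (the helper below is illustrative):

```typescript
/**
 * Naive remaining-time estimate: assumes items complete at a constant rate.
 * Returns milliseconds, or null when no items have completed yet.
 */
function estimateRemainingMs(
  completed: number,
  total: number,
  elapsedMs: number,
): number | null {
  if (completed <= 0) return null;
  const perItemMs = elapsedMs / completed;
  return perItemMs * (total - completed);
}
```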
Unique identifier for the event type.
A literal string that discriminates between different event types in the AutoBE system. This field enables TypeScript's discriminated union feature, allowing type-safe event handling through switch statements or conditional checks.
Examples: "analyzeWrite", "prismaSchema", "interfaceOperation", "testScenario"
Event fired when the Test agent generates e2e test scenarios for specific API endpoints.
This event occurs when the Test agent analyzes API endpoints and creates test scenarios that include the main function to test and any dependency functions that need to be called first. The event provides visibility into the test generation progress and the structure of generated test cases.
Each scenario includes draft test code and a clear dependency chain that ensures tests can execute successfully with proper data setup and prerequisites.
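A hedged example of consuming this event for progress reporting; the event shape below includes only the fields documented here, and the handler itself is illustrative:

```typescript
/** Illustrative event shape limited to the documented fields. */
interface TestScenarioEventSketch {
  type: "testScenario";
  created_at: string;
  completed: number;
  total: number;
  scenarios: { draft: string }[];
}

/** Logs scenario generation progress whenever the event arrives. */
function onTestScenario(event: TestScenarioEventSketch): void {
  const pct =
    event.total > 0 ? Math.round((event.completed / event.total) * 100) : 0;
  console.log(
    `[${event.created_at}] test scenarios: ${event.completed}/${event.total}` +
      ` (${pct}%), ${event.scenarios.length} scenarios in this batch`,
  );
}
```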
Author
Kakasoo