Number of items completed.
Tracks how many items have been successfully processed so far in the current operation. This value increments as each item is completed, providing real-time progress indication.
The ratio of completed to total gives the completion percentage:
progress = (completed / total) * 100
Timestamp when the event was created.
ISO 8601 formatted date-time string indicating when this event was emitted by the system. This timestamp is crucial for event ordering, performance analysis, and debugging the agent workflow execution timeline.
Format: "YYYY-MM-DDTHH:mm:ss.sssZ" (e.g., "2024-01-15T14:30:45.123Z")
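Because these timestamps are UTC ISO 8601 strings, events can be ordered either by string comparison or by parsed millisecond values. A minimal sketch (the event objects here are illustrative, not actual AutoBE payloads):

```typescript
// Hypothetical events carrying ISO 8601 created_at strings.
const events = [
  { id: "b", created_at: "2024-01-15T14:30:45.123Z" },
  { id: "a", created_at: "2024-01-15T14:30:44.900Z" },
];

// ISO 8601 UTC strings sort lexicographically, so string comparison
// yields chronological order without parsing.
events.sort((x, y) => x.created_at.localeCompare(y.created_at));

// Date.parse gives milliseconds since epoch for elapsed-time analysis.
const elapsedMs =
  Date.parse(events[1].created_at) - Date.parse(events[0].created_at);
```

String ordering only works when all timestamps share the same UTC offset and format; for mixed offsets, compare parsed values instead.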
A unique identifier for the event.
Function calling trial statistics for the operation.
Records the complete trial history of function calling attempts, tracking total executions, successful completions, consent requests, validation failures, and invalid JSON responses. These metrics reveal the reliability and quality of AI agent autonomous operation with tool usage.
Trial statistics are critical for identifying operations where agents struggle with tool interfaces, generate invalid outputs, or require multiple correction attempts through self-healing spiral loops. High failure rates indicate opportunities for system prompt optimization or tool interface improvements.
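A failure rate can be derived from these counts. The field names below are assumptions based on the categories described above (total executions, successes, consent requests, validation failures, invalid JSON), not the exact AutoBE schema:

```typescript
// Hypothetical trial-statistics shape; property names are assumed.
interface TrialStats {
  total: number;      // total function calling executions
  success: number;    // successful completions
  consent: number;    // consent requests
  validation: number; // validation failures
  invalid: number;    // invalid JSON responses
}

// Failure rate: the share of trials that did not complete successfully.
function failureRate(s: TrialStats): number {
  return s.total === 0 ? 0 : (s.total - s.success) / s.total;
}
```

A persistently high value flags operations whose tool interfaces or system prompts need attention.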
The reviewed component with updated table list.
Contains the complete component definition after review, with all revisions applied. Tables that were added, removed, or renamed are reflected in this final component structure used for schema generation.
Comprehensive review analysis of the component organization.
Contains the AI agent's detailed evaluation of the component structure including validation of table completeness, domain fitness, dependency ordering, and naming conventions. The review identifies potential issues and confirms adherence to domain-driven design principles.
Review Dimensions:
Array of table revision operations applied during the review.
Contains all create, update, and erase operations that were identified and applied during the review process. Each operation includes a reason explaining why the change was necessary, providing a clear audit trail of modifications.
The iteration number of the requirements analysis that this component review was performed against.
Indicates which version of the requirements analysis this review reflects. This step number ensures that the component review is aligned with the current requirements and helps track the evolution of database architecture as business requirements change.
Detailed token usage metrics for the operation.
Contains comprehensive token consumption data including total usage, input token breakdown with cache hit rates, and output token categorization by generation type (reasoning, predictions). This component-level tracking enables precise cost analysis and identification of operations that benefit most from prompt caching or require optimization.
Token usage directly translates to operational costs, making this metric essential for understanding the financial implications of different operation types and guiding resource allocation decisions.
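Translating token counts into cost is a simple weighted sum. This sketch assumes a simplified usage shape and illustrative per-million-token prices; neither matches the actual AutoBE fields or any real pricing table:

```typescript
// Assumed, simplified usage shape for illustration only.
interface TokenUsage {
  input: { total: number; cached: number }; // cached tokens hit the prompt cache
  output: { total: number };
}

// Illustrative prices in USD per million tokens (assumptions).
function estimateCostUsd(
  u: TokenUsage,
  pricePerMillion = { input: 3, cachedInput: 0.3, output: 15 },
): number {
  const uncached = u.input.total - u.input.cached;
  return (
    (uncached * pricePerMillion.input +
      u.input.cached * pricePerMillion.cachedInput +
      u.output.total * pricePerMillion.output) /
    1_000_000
  );
}
```

The cache-hit discount in this sketch is why the documented cache hit rate matters: the larger the cached share of input tokens, the lower the effective cost of the operation.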
Total number of items to process.
Represents the complete count of operations, files, endpoints, or other entities that need to be processed in the current workflow step. This value is typically determined at the beginning of an operation and remains constant throughout the process.
Used together with the completed field to calculate the progress percentage and estimate time to completion.
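The progress and ETA computation described above can be sketched as follows; the helper name and return shape are illustrative assumptions:

```typescript
// Combines total with completed to report progress and a rough ETA.
function progressReport(completed: number, total: number, elapsedMs: number) {
  const percent = total === 0 ? 0 : (completed / total) * 100;
  // Assume remaining items proceed at the observed average rate.
  const etaMs =
    completed === 0 ? Infinity : (elapsedMs / completed) * (total - completed);
  return { percent, etaMs };
}
```

For example, 25 of 100 items completed in 5 seconds gives 25% progress and a 15-second ETA under the constant-rate assumption.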
Unique identifier for the event type.
A literal string that discriminates between different event types in the AutoBE system. This field enables TypeScript's discriminated union feature, allowing type-safe event handling through switch statements or conditional checks.
Examples: "analyzeWrite", "databaseSchema", "interfaceOperation", "testScenario"
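A minimal sketch of the discriminated-union handling this field enables; the event names follow the examples above, but the payload fields are illustrative assumptions:

```typescript
// The `type` literal discriminates the union members.
type AutoBeEvent =
  | { type: "analyzeWrite"; completed: number; total: number }
  | { type: "databaseSchema"; completed: number; total: number }
  | { type: "testScenario"; completed: number; total: number };

function describe(event: AutoBeEvent): string {
  switch (event.type) {
    // TypeScript narrows `event` to the matching member in each case.
    case "analyzeWrite":
      return `analysis: ${event.completed}/${event.total}`;
    case "databaseSchema":
      return `schema: ${event.completed}/${event.total}`;
    case "testScenario":
      return `tests: ${event.completed}/${event.total}`;
  }
}
```

Because the union is exhaustive, the compiler flags any `case` that is missed when a new event type is added.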
Event fired when the Database agent reviews and validates the component organization during the database design process.
This event occurs after the initial component extraction phase, where the Database agent has organized tables into domain-based groups. The review validates the component organization against business requirements, checks for missing or duplicated tables, verifies domain fitness, and ensures proper dependency ordering between components.
The review process ensures that the component structure provides a solid foundation for schema generation by identifying and correcting organizational issues before detailed table schemas are created.
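The fields documented in this section can be pictured as one event shape. The interface and property names below are a hypothetical assembly for illustration, not the exact AutoBE definitions:

```typescript
// Hypothetical event shape assembled from the documented fields.
interface DatabaseComponentReviewEvent {
  type: "databaseComponentReview"; // discriminant literal (assumed name)
  id: string;                      // unique event identifier
  created_at: string;              // ISO 8601 timestamp
  review: string;                  // review analysis of the organization
  revisions: unknown[];            // applied table revision operations
  component: unknown;              // reviewed component with final table list
  step: number;                    // requirements-analysis iteration
  completed: number;               // items processed so far
  total: number;                   // total items to process
}
```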
Author
Michael