Number of items completed.
Tracks how many items have been successfully processed so far in the current operation. This value increments as each item is completed, providing real-time progress indication.
The ratio of completed to total gives the completion percentage:
progress = (completed / total) * 100
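As a minimal sketch (assuming the progress data is exposed as plain numeric `completed` and `total` fields, which is not spelled out here), the percentage could be derived like this:

```typescript
// Hypothetical shape; the real event may expose these fields differently.
interface ProgressLike {
  completed: number; // items processed so far
  total: number;     // items expected in this step
}

// Completion percentage, guarded against a zero total.
function progressPercent(p: ProgressLike): number {
  return p.total === 0 ? 0 : (p.completed / p.total) * 100;
}
```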
Revised content based on the amendment plan.
Contains the fully updated markdown document incorporating all review feedback. If the original document was perfect and required no changes, this will be identical to the original content.
This revised content must pass all validation criteria before the document can proceed to the next development phase.
Timestamp when the event was created.
ISO 8601 formatted date-time string indicating when this event was emitted by the system. This timestamp is crucial for event ordering, performance analysis, and debugging the agent workflow execution timeline.
Format: "YYYY-MM-DDTHH:mm:ss.sssZ" (e.g., "2024-01-15T14:30:45.123Z")
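In JavaScript/TypeScript this is the format produced by `Date.prototype.toISOString()`, so a sketch of emitting a timestamp and ordering events by it might look like the following (the `created_at` property name is an assumption for illustration):

```typescript
// toISOString() always yields "YYYY-MM-DDTHH:mm:ss.sssZ" in UTC.
const createdAt: string = new Date().toISOString(); // e.g. "2024-01-15T14:30:45.123Z"

// Ordering a list of events by creation time.
interface TimestampedEvent {
  created_at: string;
}
function sortByCreation<T extends TimestampedEvent>(events: T[]): T[] {
  return events
    .slice()
    .sort((a, b) => Date.parse(a.created_at) - Date.parse(b.created_at));
}
```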
Original file content submitted for review.
Contains the markdown document generated by the Analyze Writer Agent, including all sections such as business model, functional requirements, API specifications, and ERD diagrams that need validation.
A unique identifier for the event.
Function calling trial statistics for the operation.
Records the complete trial history of function calling attempts, tracking total executions, successful completions, consent requests, validation failures, and invalid JSON responses. These metrics reveal the reliability and quality of AI agent autonomous operation with tool usage.
Trial statistics are critical for identifying operations where agents struggle with tool interfaces, generate invalid outputs, or require multiple correction attempts through self-healing spiral loops. High failure rates indicate opportunities for system prompt optimization or tool interface improvements.
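A hypothetical shape for these statistics, with every field name assumed purely for illustration rather than taken from the actual AutoBE schema, might look like this, together with a simple success-rate check:

```typescript
// Hypothetical trial-statistics shape; actual field names may differ.
interface FunctionCallingTrials {
  total: number;             // total function calling attempts
  success: number;           // attempts that completed successfully
  consent: number;           // attempts that paused to request consent
  validationFailure: number; // outputs rejected by schema validation
  invalidJson: number;       // responses that were not parseable JSON
}

// Flags operations whose success rate suggests the tool interface or
// system prompt may need attention.
function needsAttention(t: FunctionCallingTrials, threshold = 0.8): boolean {
  return t.total > 0 && t.success / t.total < threshold;
}
```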
Amendment plan based on the review feedback.
Outlines specific changes to be made to address review comments. If the original document is perfect and requires no modifications, this field will explicitly state that no changes are needed.
The plan serves as a bridge between review feedback and actual document revision, ensuring all identified issues are addressed.
Detailed review feedback from the Analyze Review Agent.
Contains the specific validation results and issues identified during the review that must be addressed before the document can proceed.
Current iteration number of the requirements analysis being reviewed.
Indicates which version of the requirements analysis is undergoing review and amendment. This step number helps track the iterative refinement process and provides context for understanding how many revision cycles have been completed.
The step value increments with each major revision cycle, allowing stakeholders to understand the evolution of the requirements and the level of refinement that has been applied to achieve the final analysis.
Detailed token usage metrics for the operation.
Contains comprehensive token consumption data including total usage, input token breakdown with cache hit rates, and output token categorization by generation type (reasoning, predictions). This component-level tracking enables precise cost analysis and identification of operations that benefit most from prompt caching or require optimization.
Token usage directly translates to operational costs, making this metric essential for understanding the financial implications of different operation types and guiding resource allocation decisions.
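As a sketch only (the component names and pricing model below are assumptions, not the actual AutoBE schema), the breakdown described here could be modeled and costed like this:

```typescript
// Hypothetical token-usage shape; real field names and prices will differ.
interface TokenUsage {
  total: number;
  input: { total: number; cached: number };                 // cached = prompt-cache hits
  output: { total: number; reasoning: number; prediction: number };
}

// Rough cost estimate given per-million-token prices, with cached input
// tokens billed at a discounted rate.
function estimateCostUSD(
  u: TokenUsage,
  pricePerMInput: number,
  pricePerMOutput: number,
  cacheDiscount = 0.5,
): number {
  const freshInput = u.input.total - u.input.cached;
  const discountedCached = u.input.cached * cacheDiscount;
  return (
    ((freshInput + discountedCached) * pricePerMInput +
      u.output.total * pricePerMOutput) /
    1_000_000
  );
}
```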
Total number of items to process.
Represents the complete count of operations, files, endpoints, or other entities that need to be processed in the current workflow step. This value is typically determined at the beginning of an operation and remains constant throughout the process.
Used together with the completed field to calculate the progress percentage and to estimate time to completion.
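For instance, combining the completed/total ratio with elapsed time gives a rough ETA; this sketch assumes the caller tracks the operation's start time itself:

```typescript
// Rough remaining-time estimate in milliseconds. Assumes progress is
// roughly linear; returns null until at least one item has completed.
function estimateRemainingMs(
  startedAt: Date,
  completed: number,
  total: number,
): number | null {
  if (completed <= 0 || total <= 0) return null;
  const elapsed = Date.now() - startedAt.getTime();
  const perItem = elapsed / completed;
  return perItem * (total - completed);
}
```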
Unique identifier for the event type.
A literal string that discriminates between different event types in the AutoBE system. This field enables TypeScript's discriminated union feature, allowing type-safe event handling through switch statements or conditional checks.
Examples: "analyzeWrite", "prismaSchema", "interfaceOperation", "testScenario"
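Narrowing on this field works as in any TypeScript discriminated union; the event interfaces below are simplified stand-ins for illustration, not the actual AutoBE definitions:

```typescript
// Simplified stand-ins; real AutoBE event types carry more fields.
interface AnalyzeWriteEvent { type: "analyzeWrite"; content: string }
interface PrismaSchemaEvent { type: "prismaSchema"; schema: string }
type AutoBeEvent = AnalyzeWriteEvent | PrismaSchemaEvent;

function handle(event: AutoBeEvent): void {
  switch (event.type) {
    case "analyzeWrite":
      // event is narrowed to AnalyzeWriteEvent here
      console.log(event.content.length);
      break;
    case "prismaSchema":
      // event is narrowed to PrismaSchemaEvent here
      console.log(event.schema.length);
      break;
  }
}
```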
Event fired during the review and amendment phase of the requirements analysis process.
This event represents the quality assurance activity of the Analyze Review Agent, which validates documents produced by the Analyze Writer Agent. The reviewer ensures that requirements documentation meets AutoBE's strict standards before proceeding to the development phases.
The Analyze Review Agent performs comprehensive validation of the submitted document; the specific checks it applies, the possible review outcomes, and the key characteristics of the review process are defined by the agent's validation rules and criteria.
The review ensures that subsequent agents (Prisma, Interface, Test, Realize) receive well-structured, unambiguous requirements that enable fully automated backend development without human intervention.
For detailed information about the Analyze Review Agent's validation rules and criteria, refer to packages/agent/prompts/ANALYZE_REVIEW.md
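Putting the fields described in this section together, a review-phase event might carry a payload along these lines; this is a hedged sketch with assumed property names, and the exact interface is defined by the AutoBE packages rather than here:

```typescript
// Hypothetical composite of the fields documented above; property names
// are assumptions for illustration only.
interface AnalyzeReviewEventSketch {
  type: "analyzeReview"; // discriminator literal
  id: string;            // unique event identifier
  created_at: string;    // ISO 8601 timestamp
  step: number;          // current iteration of the analysis
  content: string;       // original markdown submitted for review
  review: string;        // detailed feedback from the review agent
  plan: string;          // amendment plan derived from the feedback
  modified: string;      // revised markdown after applying the plan
  completed: number;     // items processed so far
  total: number;         // items to process in this step
  tokenUsage: unknown;   // detailed token metrics (see above)
  trials: unknown;       // function calling trial statistics (see above)
}
```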
Author: Kakasoo