Array of multimodal content items that comprise the user's message.
Contains the AutoBeUserMessageContent elements that make up the user's communication, which can include text descriptions, images for visual references, document files for specifications, and audio for voice input. Each content item represents a different modality or attachment within the same message.
The multimodal array structure allows users to combine multiple content types in a single message, such as text requirements accompanied by reference images or documents. This comprehensive input approach provides rich context for the AI agents to better understand user intentions and generate more accurate development artifacts throughout the vibe coding process.
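For illustration, a minimal sketch of how such a multimodal content union might be modeled. The member names and fields here are assumptions, not the library's verbatim definitions of AutoBeUserMessageContent:

```typescript
// Hypothetical sketch of a multimodal content union; the actual
// AutoBeUserMessageContent members may differ in names and fields.
interface TextContent {
  type: "text";
  text: string; // plain-text requirement or feedback
}

interface ImageContent {
  type: "image";
  url: string; // visual reference, e.g. a UI mockup
}

interface FileContent {
  type: "file";
  name: string;
  data: string; // base64-encoded document, e.g. a specification PDF
}

interface AudioContent {
  type: "audio";
  data: string; // base64-encoded voice input
}

type UserMessageContent =
  | TextContent
  | ImageContent
  | FileContent
  | AudioContent;

// A single message combining text with a reference image:
const contents: UserMessageContent[] = [
  { type: "text", text: "Build a bulletin board like this mockup." },
  { type: "image", url: "https://example.com/mockup.png" },
];
```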
Timestamp when the event was created.
ISO 8601 formatted date-time string indicating when this event was emitted by the system. This timestamp is crucial for event ordering, performance analysis, and debugging the agent workflow execution timeline.
Format: "YYYY-MM-DDTHH:mm:ss.sssZ" (e.g., "2024-01-15T14:30:45.123Z")
A unique identifier for the event.
Unique identifier for the event type.
A literal string that discriminates between different event types in the AutoBE system. This field enables TypeScript's discriminated union feature, allowing type-safe event handling through switch statements or conditional checks.
Examples: "analyzeWrite", "prismaSchemas", "interfaceOperations", "testScenarios"
Event fired when a user sends a message during the conversation flow.
This event represents real-time user input in the vibe coding conversation, capturing the multimodal communication from users that drives the development process. User messages can include text descriptions, visual references, document uploads, and voice input that provide the requirements, feedback, and guidance necessary for the AI agents to generate accurate software solutions.
The user message events serve as the primary input source for the entire vibe coding pipeline, enabling users to express their needs through multiple modalities and maintain interactive control over the automated development process as it progresses through different phases.
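Putting the fields above together, a consumer of this event might look like the following hedged sketch. Only `type`, `contents`, and `created_at` are documented above; the content item structure and the listener signature are assumptions:

```typescript
// Hedged sketch of the user message event shape and a handler for it;
// the actual AutoBE interfaces may differ.
type ContentSketch =
  | { type: "text"; text: string }
  | { type: "image"; url: string };

interface UserMessageEventSketch {
  type: "userMessage";
  contents: ContentSketch[]; // multimodal payload
  created_at: string; // ISO 8601 emission time
}

function onUserMessage(event: UserMessageEventSketch): void {
  const texts = event.contents.filter((c) => c.type === "text");
  console.log(
    `received ${event.contents.length} content item(s) at ${event.created_at},`,
    `${texts.length} of them text`,
  );
}
```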
Author
Samchon