Configuration
import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const agent = new AutoBeAgent({
  model: "llama", // JSON schema model
  vendor: {
    // LLM vendor specification
    api: new OpenAI({
      apiKey: "********",
      baseURL: "https://openrouter.ai/api/v1",
    }),
    model: "meta-llama/llama3.3-70b",
  },
  config: {
    // locale and timezone
    locale: "en-US",
    timezone: "America/New_York",
  },
  compiler: new AutoBeCompiler(), // compiler
});

await agent.conversate(`
  I want to create a political/economic discussion board.
  Since I'm not familiar with programming,
  please write a requirements analysis report as you see fit.
`);
When creating an AutoBeAgent, you can configure several properties that define how the agent operates and connects to LLM services.

The model property specifies the JSON schema model that corresponds to your chosen LLM vendor. Different vendors implement different schema specifications, so this setting must align with your selected provider. For OpenAI ChatGPT, you would use "chatgpt", while Anthropic Claude requires "claude", DeepSeek uses "deepseek", and Meta Llama uses "llama".
The vendor configuration defines your LLM service connection. The api property accepts an OpenAI SDK instance that handles the actual API communication, while the model property specifies the exact model name you want to use from that vendor. This flexible approach lets you connect to LLM providers beyond OpenAI by configuring the appropriate base URL and authentication.
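For example, a minimal sketch targeting DeepSeek through its OpenAI-compatible API might look like the following; the base URL and model name here are illustrative assumptions rather than values prescribed by this document:

import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

// Pair the "deepseek" schema model with a DeepSeek vendor connection;
// the baseURL and model identifier below are assumptions for illustration
const agent = new AutoBeAgent({
  model: "deepseek", // JSON schema model matching the vendor
  vendor: {
    api: new OpenAI({
      apiKey: "********",
      baseURL: "https://api.deepseek.com", // assumed OpenAI-compatible endpoint
    }),
    model: "deepseek-chat", // assumed vendor model name
  },
  compiler: new AutoBeCompiler(),
});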
The config section allows you to customize the agent's behavioral context. The locale setting determines the language and regional preferences that influence how the agent communicates, while timezone helps the agent understand geographical context when making decisions or generating responses that involve time-sensitive information.
Finally, the compiler property provides the compilation tools for TypeScript, Prisma, and OpenAPI operations that the agent requires for code generation and analysis tasks.
Vendor
Meta Llama
import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const agent = new AutoBeAgent({
  model: "llama",
  vendor: {
    api: new OpenAI({
      apiKey: "********",
      baseURL: "https://openrouter.ai/api/v1",
    }),
    model: "meta-llama/llama3.3-70b",
  },
  compiler: new AutoBeCompiler(),
});

await agent.conversate(`
  I want to create a political/economic discussion board.
  Since I'm not familiar with programming,
  please write a requirements analysis report as you see fit.
`);
When creating an AutoBeAgent, you can specify the LLM service vendor of your choice.

AutoBeAgent utilizes the OpenAI SDK (npm i openai) for LLM (Large Language Model) service interaction. However, this doesn't restrict you to using only OpenAI's GPT models. The OpenAI SDK serves as a connection tool to various LLM vendor APIs, and since most modern LLMs follow OpenAI's API design patterns, you can seamlessly integrate other LLM vendors such as Claude, DeepSeek, and others.

To use alternative vendors, configure the vendor.api.baseURL and vendor.model properties with the respective vendor's API endpoint and model name. For example, if you prefer Meta Llama over OpenAI GPT, you can configure it as shown above. Note that since LLM schema models differ between vendors, you must also update the IAgenticProps.model property to match the vendor's schema model (e.g., "llama").
IChatGptSchema: OpenAI ChatGPT
IClaudeSchema: Anthropic Claude
IDeepSeekSchema: High-Flyer DeepSeek
IGeminiSchema: Google Gemini
ILlamaSchema: Meta Llama
ILlmSchemaV3: middle layer based on the OpenAPI v3.0 specification
ILlmSchemaV3_1: middle layer based on the OpenAPI v3.1 specification
JSON schema models vary between LLM vendors. When selecting a vendor model, exercise caution when determining the appropriate schema model for the AutoBeAgent.model property.

OpenAI and Google Gemini use proprietary JSON schema models rather than standard implementations. Google Gemini's JSON schema specification lacks support for reference types ($ref) and union types (oneOf or anyOf), which are essential for OpenAPI document composition. Consequently, Google Gemini is incompatible with @autobe.
Other LLM vendors like Claude, DeepSeek, and Meta Llama implement standard JSON schema models, making them fully compatible with @autobe. When using these vendors, you can configure the AutoBeAgent.model property with any desired value, including the "chatgpt" schema model.

However, this approach is not recommended because OpenAI's JSON schema specification restricts numerous features, as listed below. Use the "chatgpt" schema model only when specifically using OpenAI as your LLM vendor; otherwise pick the vendor's matching schema model, as in the sketch after this list.

Restricted features in OpenAI's schema:
IJsonSchema.INumber.minimum
IJsonSchema.INumber.maximum
IJsonSchema.INumber.exclusiveMinimum
IJsonSchema.INumber.exclusiveMaximum
IJsonSchema.INumber.multipleOf
IJsonSchema.IString.minLength
IJsonSchema.IString.maxLength
IJsonSchema.IString.format
IJsonSchema.IString.pattern
IJsonSchema.IString.contentMediaType
IJsonSchema.IArray.minItems
IJsonSchema.IArray.maxItems
IJsonSchema.IArray.uniqueItems
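For instance, a sketch pairing Anthropic Claude with its matching "claude" schema model could look like this; the OpenRouter gateway and model identifier are illustrative assumptions, not values prescribed by this document:

import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

// Claude implements a standard JSON schema model, so pair it with
// "claude" rather than "chatgpt"; the baseURL and model name below
// are assumptions for illustration
const agent = new AutoBeAgent({
  model: "claude",
  vendor: {
    api: new OpenAI({
      apiKey: "********",
      baseURL: "https://openrouter.ai/api/v1", // assumed OpenAI-compatible gateway
    }),
    model: "anthropic/claude-3.5-sonnet", // assumed model identifier
  },
  compiler: new AutoBeCompiler(),
});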
Locale and Timezone
import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const agent = new AutoBeAgent({
  model: "chatgpt",
  vendor: {
    api: new OpenAI({
      apiKey: "********",
    }),
    model: "gpt-4.1",
  },
  config: {
    locale: "ko-KR",
    timezone: "Asia/Seoul",
  },
  compiler: new AutoBeCompiler(),
});

await agent.conversate(`
  I want to create a political/economic discussion board.
  Since I'm not familiar with programming,
  please write a requirements analysis report as you see fit.
`);
You can configure the locale and timezone properties to customize the AI agent's behavior.

These properties are transmitted to the AI agent, enabling it to consider the user's locale and timezone context. When you configure the locale property to ko-KR, the AI agent will communicate in Korean. Similarly, when you set the timezone property to Asia/Seoul, the AI agent considers the geographic location and timezone, which can influence LLM function calling argument composition. For instance, if you ask the AI agent to "consider regulations," it will reference South Korean regulations.
Compiler
Separation
import { AutoBeAgent } from "@autobe/agent";
import { IAutoBeCompiler } from "@autobe/interface";
import OpenAI from "openai";
import path from "path";
import { WorkerConnector } from "tgrid";

// connect to the compiler worker
const worker: WorkerConnector<null, null, IAutoBeCompiler> =
  new WorkerConnector(null, null, "process");
await worker.connect(`${__dirname}/compiler${path.extname(__filename)}`);

const agent = new AutoBeAgent({
  model: "chatgpt",
  vendor: {
    api: new OpenAI({ apiKey: "********" }),
    model: "gpt-4.1",
  },
  compiler: worker.getDriver(), // compiler served from the worker
});

await agent.conversate(`
  I want to create a political/economic discussion board.
  Since I'm not familiar with programming,
  please write a requirements analysis report as you see fit.
`);
When creating an AutoBeAgent, you must provide a compiler instance. The compiler contains the Prisma, OpenAPI, and TypeScript compilers. The simplest approach is to create a new instance of the AutoBeCompiler class.

However, since compiler operations are computationally intensive, if you're building a service application with AutoBeAgent that handles connections from numerous users, the compiler can become a performance bottleneck. The AutoBeAgent server may block whenever the compiler is occupied with heavy tasks, such as generating Prisma schemas or TypeScript code.

In such scenarios, you can separate the compiler from the main agent and run it in a dedicated worker or process. Refer to the "Separation" and "Worker" tabs above, and follow the provided code snippets to separate the compiler from the main agent.
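The worker-side entry file (the compiler script loaded by the WorkerConnector above) is not shown in this section. A minimal sketch, assuming tgrid's WorkerServer is used to expose an AutoBeCompiler instance as the provider, might look like this:

import { AutoBeCompiler } from "@autobe/compiler";
import { WorkerServer } from "tgrid";

// compiler.ts: runs in the dedicated worker/process and serves the
// compiler to the connecting agent (a sketch under the assumptions above)
const main = async (): Promise<void> => {
  const server = new WorkerServer();
  await server.open(new AutoBeCompiler());
};
main().catch((error) => {
  console.error(error);
  process.exit(-1);
});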