
Configuration

main.config.ts
import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const agent = new AutoBeAgent({
  model: "llama", // JSON schema model
  vendor: {
    // LLM vendor specification
    api: new OpenAI({
      apiKey: "********",
      baseURL: "https://openrouter.ai/api/v1",
    }),
    model: "meta-llama/llama3.3-70b",
  },
  config: {
    // locale and timezone
    locale: "en-US",
    timezone: "America/New_York",
  },
  compiler: new AutoBeCompiler(), // compiler
});

await agent.conversate(`
  I want to create a political/economic discussion board.
  Since I'm not familiar with programming,
  please write a requirements analysis report as you see fit.
`);

When creating an AutoBeAgent, you can configure several properties that define how the agent operates and connects to LLM services.

The model property specifies the JSON schema model that corresponds to your chosen LLM vendor. Different vendors implement different schema specifications, so this setting must align with your selected provider. For OpenAI ChatGPT, you would use "chatgpt", while Anthropic Claude requires "claude", DeepSeek uses "deepseek", and Meta Llama uses "llama".

The vendor configuration defines your LLM service connection. The api property accepts an OpenAI SDK instance that handles the actual API communication, while the model property specifies the exact model name you want to use from that vendor. This flexible approach allows you to connect to various LLM providers beyond OpenAI by configuring the appropriate base URL and authentication.
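For instance, a minimal sketch of connecting to DeepSeek through its OpenAI-compatible endpoint might look like the following; the baseURL and model name shown here are assumptions, so consult the vendor's documentation for current values.

main.deepseek.ts
import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const agent = new AutoBeAgent({
  model: "deepseek", // JSON schema model matching the vendor below
  vendor: {
    api: new OpenAI({
      apiKey: "********",
      baseURL: "https://api.deepseek.com/v1", // assumed OpenAI-compatible endpoint
    }),
    model: "deepseek-chat", // assumed vendor model name
  },
  compiler: new AutoBeCompiler(),
});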

The config section allows you to customize the agent's behavioral context. The locale setting determines the language and regional preferences that influence how the agent communicates, while timezone helps the agent understand geographical context when making decisions or generating responses that involve time-sensitive information.

Finally, the compiler property provides the necessary compilation tools for TypeScript, Prisma, and OpenAPI operations that the agent requires for code generation and analysis tasks.

Vendor

main.llama.ts
import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const agent = new AutoBeAgent({
  model: "llama",
  vendor: {
    api: new OpenAI({
      apiKey: "********",
      baseURL: "https://openrouter.ai/api/v1",
    }),
    model: "meta-llama/llama3.3-70b",
  },
  compiler: new AutoBeCompiler(),
});

await agent.conversate(`
  I want to create a political/economic discussion board.
  Since I'm not familiar with programming,
  please write a requirements analysis report as you see fit.
`);

When creating an AutoBeAgent, you can specify the LLM service vendor of your choice.

AutoBeAgent utilizes the OpenAI SDK (npm i openai) for LLM service interaction. However, this doesn't restrict you to using only OpenAI's GPT models. The OpenAI SDK serves as a connection tool to various LLM vendor APIs, and since most modern LLMs follow OpenAI's API design patterns, you can seamlessly integrate other LLM vendors such as Claude and DeepSeek.

To use alternative vendors, configure the vendor.api instance's baseURL and the vendor.model property with the respective vendor's API endpoint and model name. For example, if you prefer Meta Llama over OpenAI GPT, you can configure it as shown above. Since LLM schema models differ between vendors, you must also update the AutoBeAgent.model property to match the vendor's schema model (e.g., "llama").

JSON schema models vary between LLM vendors, so when selecting a vendor, choose the schema model for the AutoBeAgent.model property carefully.

OpenAI and Google Gemini use proprietary JSON schema models rather than standard implementations. Google Gemini's JSON schema specification lacks support for reference types ($ref) and union types (oneOf or anyOf), which are essential for OpenAPI document composition. Consequently, Google Gemini is incompatible with @autobe.

Other LLM vendors like Claude, DeepSeek, and Meta Llama implement standard JSON schema models, making them fully compatible with @autobe. When using these vendors, you can configure the AutoBeAgent.model property with any desired value, including the "chatgpt" schema model.

However, this approach is not recommended because OpenAI's JSON schema specification restricts numerous features, as listed below. Use the "chatgpt" schema model only when OpenAI is actually your LLM vendor; a hypothetical illustration of what these restrictions mean in practice follows the list.

Restricted features in OpenAI's schema:

  • IJsonSchema.INumber.minimum
  • IJsonSchema.INumber.maximum
  • IJsonSchema.INumber.exclusiveMinimum
  • IJsonSchema.INumber.exclusiveMaximum
  • IJsonSchema.INumber.multipleOf
  • IJsonSchema.IString.minLength
  • IJsonSchema.IString.maxLength
  • IJsonSchema.IString.format
  • IJsonSchema.IString.pattern
  • IJsonSchema.IString.contentMediaType
  • IJsonSchema.IArray.minItems
  • IJsonSchema.IArray.maxItems
  • IJsonSchema.IArray.uniqueItems
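As a hypothetical illustration, the DTO below expresses such constraints with typia's tag types; each tag maps onto one of the restricted IJsonSchema properties above, so these validation rules survive under standard schema models ("claude", "deepseek", "llama") but cannot be expressed under the "chatgpt" schema model.

import { tags } from "typia";

// Hypothetical DTO: each tag corresponds to a restricted property above.
interface IArticle {
  id: string & tags.Format<"uuid">; // IJsonSchema.IString.format
  title: string & tags.MinLength<1> & tags.MaxLength<100>; // IString.minLength / maxLength
  score: number & tags.Minimum<0> & tags.Maximum<10>; // INumber.minimum / maximum
  hashtags: string[] & tags.MinItems<1> & tags.MaxItems<8>; // IArray.minItems / maxItems
}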

Locale and Timezone

main.korea.ts
import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const agent = new AutoBeAgent({
  model: "chatgpt",
  vendor: {
    api: new OpenAI({ apiKey: "********" }),
    model: "gpt-4.1",
  },
  config: {
    locale: "ko-KR",
    timezone: "Asia/Seoul",
  },
  compiler: new AutoBeCompiler(),
});

await agent.conversate(`
  I want to create a political/economic discussion board.
  Since I'm not familiar with programming,
  please write a requirements analysis report as you see fit.
`);

You can configure the locale and timezone properties to customize the AI agent's behavior.

These properties are transmitted to the AI agent, enabling it to consider the user’s locale and timezone context. When you configure the locale property to ko-KR, the AI agent will communicate in Korean.

Similarly, when you set the timezone property to Asia/Seoul, the AI agent takes the geographic location and timezone into account, which can influence how it composes LLM function-calling arguments. For instance, if you ask the AI agent to "consider regulations," it will reference South Korean regulations.

Compiler

main.separate.ts
import { AutoBeAgent } from "@autobe/agent";
import { IAutoBeCompiler } from "@autobe/interface";
import OpenAI from "openai";
import path from "path";
import { WorkerConnector } from "tgrid";

// connect to worker
const worker: WorkerConnector<null, null, IAutoBeCompiler> =
  new WorkerConnector(null, null, "process");
await worker.connect(`${__dirname}/compiler${path.extname(__filename)}`);

const agent = new AutoBeAgent({
  model: "chatgpt",
  vendor: {
    api: new OpenAI({ apiKey: "********" }),
    model: "gpt-4.1",
  },
  compiler: worker.getDriver(), // compiler from worker
});

await agent.conversate(`
  I want to create a political/economic discussion board.
  Since I'm not familiar with programming,
  please write a requirements analysis report as you see fit.
`);

When creating an AutoBeAgent, you must provide a compiler instance. The compiler contains Prisma, OpenAPI, and TypeScript compilers. The simplest approach is to create a new instance of the AutoBeCompiler class.

However, compiler operations are computationally intensive. If you're building a service application with AutoBeAgent that handles connections from numerous users, the compiler can become a performance bottleneck: the AutoBeAgent server may block whenever the compiler is occupied with heavy tasks, such as generating Prisma schemas or TypeScript code.

In such scenarios, you can separate the compiler from the main agent and run it in a dedicated worker or process. The main.separate.ts example above shows the agent side connecting to such a worker; a sketch of the worker-side entrypoint follows.
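As a minimal sketch of the worker side (assuming tgrid's WorkerServer API), the compiler.ts entrypoint below opens a WorkerServer and exposes an AutoBeCompiler instance to the connecting agent; its file name matches the path passed to worker.connect() in main.separate.ts.

compiler.ts
import { AutoBeCompiler } from "@autobe/compiler";
import { WorkerServer } from "tgrid";

const main = async (): Promise<void> => {
  // serve the compiler to the AutoBeAgent process that connects to this worker
  const server = new WorkerServer();
  await server.open(new AutoBeCompiler());
};
main().catch((error) => {
  console.error(error);
  process.exit(-1);
});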
