
Local Setup

Terminal
```bash
git clone https://github.com/wrtnlabs/autobe
cd autobe
pnpm install
pnpm run playground
```

Clone the repository and run the playground application locally. This allows you to chat with the @autobe agent, manage multiple sessions, and use various LLM providers including local models.

After installation, the playground will be available at http://localhost:5713. You can interact with AutoBE through a chat interface: simply describe what you want to build, and AutoBE will generate the backend application for you.


To see examples of backend applications generated by @autobe, explore these repositories. These showcase @autobe’s ability to generate production-ready backend code with proper structure, API documentation, TypeScript interfaces, and e2e test functions.

  1. Discussion Board: https://github.com/wrtnlabs/autobe-example-bbs
  2. To Do List: https://github.com/wrtnlabs/autobe-example-todo
  3. Reddit Community: https://github.com/wrtnlabs/autobe-example-reddit
  4. E-Commerce: https://github.com/wrtnlabs/autobe-example-shopping

If you’re unsure what to try, start with this example script to create a political/economic discussion board:

  1. I want to create a political/economic discussion board. Since I’m not familiar with programming, please write a requirements analysis report as you see fit.
  2. Design the database schema.
  3. Create the API interface specification.
  4. Make the e2e test functions.
  5. Implement API functions.

Installation

```bash
git clone https://github.com/wrtnlabs/autobe
cd autobe
pnpm install
pnpm run playground
```

You can set up a playground-like application on your local machine.

Clone this @autobe repository and run the playground script after installing dependencies with pnpm install. This will start a local server that you can access to interact with the @autobe agent.
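If you prefer to embed the agent in your own program instead of using the playground UI, a minimal sketch looks like the following. This assumes `AutoBeAgent` exposes an async `conversate(message)` method for driving the chat loop (an assumption not shown on this page; check the package's typings), and it requires a valid OpenAI API key:

```typescript
// Minimal sketch of embedding the agent directly, without the playground UI.
// Assumption: `AutoBeAgent` exposes an async `conversate(message)` method;
// verify against the @autobe/agent typings before relying on it.
import { AutoBeAgent } from "@autobe/agent";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const agent = new AutoBeAgent({
  vendor: {
    api: new OpenAI({ apiKey: "********" }), // your own API key
    model: "gpt-4.1",
  },
  model: "chatgpt",
  compiler: new AutoBeCompiler(),
});

// Describe what you want to build, exactly as in the playground chat.
const histories = await agent.conversate(
  "I want to create a political/economic discussion board.",
);
console.log(histories);
```

The same `vendor` / `model` / `compiler` configuration appears in the WebSocket server example below, so an agent configured this way behaves identically whether embedded or served.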

WebSocket Server

```typescript
import { AutoBeAgent } from "@autobe/agent";
import { AutoBePlaygroundServer } from "@autobe/playground-server";
import { AutoBeCompiler } from "@autobe/compiler";
import OpenAI from "openai";

const server = new AutoBePlaygroundServer({
  predicate: async (acceptor) => {
    return {
      type: "accept",
      agent: new AutoBeAgent({
        vendor: {
          api: new OpenAI({ apiKey: "********" }),
          model: "gpt-4.1",
        },
        model: "chatgpt",
        compiler: new AutoBeCompiler(),
      }),
      cwd: `${__dirname}/../playground-result`,
    };
  },
});
await server.listen(3_000);
```

You can serve the @autobe agent as a WebSocket server as shown above.

For detailed information, refer to the Guide Documents > WebSocket Protocol page.
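On the client side, a program connects to this server over TGrid's WebSocket RPC. The sketch below is a hedged illustration only: the locally declared service interface and the `assistantMessage` listener callback are assumptions standing in for the real types (which live in `@autobe/interface`), so verify the exact names against the WebSocket Protocol guide before use:

```typescript
import { WebSocketConnector } from "tgrid";

// Assumed shape of the server-side service; the real interface is published
// by @autobe/interface, so treat this declaration as a placeholder.
interface IAutoBeRpcService {
  conversate(message: string): Promise<void>;
}

// The second argument is the listener provider: callbacks the server invokes
// as the agent streams events. The callback name here is an assumption.
const connector: WebSocketConnector<null, object, IAutoBeRpcService> =
  new WebSocketConnector(null, {
    assistantMessage: async (message: { text: string }) =>
      console.log("assistant:", message.text),
  });
await connector.connect("ws://localhost:3000");

// The driver is a remote proxy of the server-side service.
const driver = connector.getDriver();
await driver.conversate("Hello, what can you build for me?");
```

The port (3000) matches the `server.listen(3_000)` call in the server example above; adjust both together if you change it.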
