1. Overview
Wrtn Technologies is hosting the 1st AutoBE Hackathon.
- Hackathon Information
- Participants: 70 people
- Registration Period: September 5 - 10, 2025
- Event Schedule: September 12 - 14, 2025 (64 hours)
- Start: September 12, 08:00:00 (PDT, UTC-7)
- End: September 14, 23:59:59 (PDT, UTC-7)
- Registration Link: https://forms.gle/8meMGEgKHTiQTrCT7
- Discord Channel: https://discord.gg/aMhRmzkqCx
- AutoBE Information
- Github Repository: https://github.com/wrtnlabs/autobe
- Guide Documents: https://autobe.dev/docs
- Backend Applications Generated by AutoBE:
- To Do List: https://github.com/wrtnlabs/autobe-example-todo
- Reddit Community (fake): https://github.com/wrtnlabs/autobe-example-reddit
- Discussion Board: https://github.com/wrtnlabs/autobe-example-discussion
- E-Commerce Platform: https://github.com/wrtnlabs/autobe-example-ecommerce
We want to ask backend developers: Can AI truly replace the work of backend developers? We seek to hear the answer to this question directly from developers working in the field.
AutoBE is an AI-based no-code platform that automatically generates backend applications through natural language conversations. When you discuss requirements with AutoBE's AI chatbot, AutoBE organizes them into a requirements specification, designs database schemas, defines APIs, writes test code, and ultimately implements a backend application that successfully builds. But is the generated code truly production-ready? Or is it merely plausible-looking code that differs from what users actually want?
This hackathon is designed precisely to validate this point. We expect developers with actual backend development experience to use AutoBE firsthand and provide honest, sharp evaluations from an expert perspective. Please tell us whether the backend applications generated by AutoBE truly match what you wanted. Your critical perspective and professional analysis will play a crucial role in developing AutoBE into a better tool.
1.1. Target Participants
This hackathon is open to developers with at least 1 year of practical backend development experience. We're looking for developers who can not only write code but also distinguish between good and bad code, evaluate architectural pros and cons, and provide realistic feedback based on actual service operation experience.
We particularly welcome developers with the following experience and interests: those with deep insights into the limitations and possibilities of AI code generation tools, those who have considered how emerging development paradigms will impact the developer community, and above all, those who can objectively evaluate new technologies based on their expertise.
1.2. Event Information
Registration runs from September 5 to 10, 2025. The hackathon itself runs for 64 hours (2 days and 16 hours), starting at 08:00:00 on September 12 and ending at 23:59:59 on September 14, Pacific Daylight Time (PDT, UTC-7).
The total prize pool is $6,400: $2,000 for the best review, $1,000 for the second-best review, and a $50 participation prize for everyone who participates sincerely and provides meaningful feedback.
1.3. Next Hackathon Preview
This 1st hackathon is limited to 70 participants on a first-come, first-served basis. The main reason for this limitation is AI token usage costs.
AutoBE has so far focused only on unit implementation and testing - whether each agent makes reasonable designs and writes code, and whether the AI-specific compiler works as expected. While this has enabled generation of high-quality backend applications, AI token usage optimization such as RAG (Retrieval-Augmented Generation) has not yet been implemented.
Currently, generating a large-scale e-commerce platform with AutoBE consumes about 150 million tokens, equivalent to $300. With this cost structure, hosting a large-scale hackathon with hundreds of participants would be too burdensome.
But don't worry. In the next hackathon, we'll introduce RAG technology to dramatically reduce token usage and prepare a grand event where many more developers can participate. This hackathon is the first step.
2. What is AutoBE?
AutoBE is a vibe coding agent for building backend applications, enhanced with AI-friendly compilers.
- Github Repository: https://github.com/wrtnlabs/autobe
- Guide Documents: https://autobe.dev/docs
AutoBE is an AI-based no-code backend generation platform that creates fully functional production-grade backend applications from natural language requirements alone. To solve the fundamental limitation of existing AI code generation tools - that generated code often doesn't compile or run - we introduced an innovative approach called Compiler-in-the-Loop.
A vibe coding agent exclusively for backend application generation with 100% build success rate (based on OpenAI GPT 4.1) - that's AutoBE.
2.1. How It Works
AutoBE follows a 5-stage process that reinterprets the traditional software engineering waterfall model for the AI era. Each stage is handled by specialized AI agents, with compilers performing real-time validation at every stage.
The first stage, Analyze Agent, systematically analyzes requirements entered in natural language by users. Rather than simply listing features, it understands business logic, derives various user personas, and defines each user's permissions and roles. During this process, it identifies and clarifies ambiguous or conflicting requirements.
The second stage, Prisma Agent, designs database schemas based on requirements. It identifies relationships between entities, applies appropriate normalization, and establishes indexing strategies. Using Prisma ORM's schema definition language, it generates type-safe data models that are immediately validated by the Prisma compiler.
The third stage, Interface Agent, designs RESTful APIs. It defines HTTP methods, URIs, and request/response formats for each endpoint, generating complete API documentation according to OpenAPI 3.1 specifications. This documentation must pass the AutoBE-specific OpenAPI compiler before proceeding to the next stage.
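To make this concrete, the fragment below sketches what a single endpoint defined at this stage could look like, written as a TypeScript object following the OpenAPI 3.1 shape; the path, schema names, and descriptions are illustrative assumptions rather than actual AutoBE output.

```typescript
// Illustrative sketch only: a single OpenAPI 3.1-style path definition,
// expressed as a TypeScript object. Path and schema names are hypothetical.
const createArticlePath = {
  "/articles": {
    post: {
      summary: "Create a new discussion article",
      requestBody: {
        required: true,
        content: {
          "application/json": {
            schema: { $ref: "#/components/schemas/IArticle.ICreate" },
          },
        },
      },
      responses: {
        "201": {
          description: "The created article",
          content: {
            "application/json": {
              schema: { $ref: "#/components/schemas/IArticle" },
            },
          },
        },
      },
    },
  },
} as const;
```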
The fourth stage, Test Agent, writes E2E test code. It creates test scenarios that simulate actual user behavior patterns, including not only normal cases but also edge cases and error situations. Generated test code must be executable and is validated by the test runner.
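As a rough idea of what such a test can look like, here is a minimal E2E-style function in TypeScript; the API client shape, DTO fields, and scenario are hypothetical, and only meant to show a test that checks business behavior rather than bare HTTP success.

```typescript
// Hypothetical sketch of an E2E test exercising a real user journey:
// sign up, create an article, then read it back and compare fields.
// The API client functions and DTO shapes below are illustrative assumptions.
import assert from "node:assert";

export async function test_article_create_and_read(api: {
  join(input: { email: string; password: string }): Promise<{ token: string }>;
  createArticle(input: { title: string; body: string }): Promise<{ id: string; title: string }>;
  getArticle(id: string): Promise<{ id: string; title: string; body: string }>;
}): Promise<void> {
  await api.join({ email: "tester@example.com", password: "secret-password" });

  const created = await api.createArticle({
    title: "Hello AutoBE",
    body: "Testing the generated backend end to end.",
  });

  const read = await api.getArticle(created.id);

  // Validate business behavior, not just that the HTTP calls succeeded.
  assert.strictEqual(read.id, created.id);
  assert.strictEqual(read.title, "Hello AutoBE");
}
```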
The final stage, Realize Agent, implements the actual backend code. Based on the NestJS framework, it implements controller, service, and repository layers, automatically handling advanced features like dependency injection, middleware, and guards. The final code must pass TypeScript compiler and NestJS builder.
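The layering described here can be pictured with a minimal NestJS-style sketch; the module, routes, and return values below are hypothetical examples, not code generated by AutoBE.

```typescript
// Minimal NestJS-style sketch of the controller/service separation described
// above. Route, class, and method names are hypothetical examples.
import { Controller, Get, Injectable, Module, Param } from "@nestjs/common";

@Injectable()
class ArticleService {
  async findOne(id: string): Promise<{ id: string; title: string }> {
    // A generated service would delegate to a repository / Prisma client here.
    return { id, title: "placeholder" };
  }
}

@Controller("articles")
class ArticleController {
  // The service is provided via NestJS dependency injection.
  constructor(private readonly service: ArticleService) {}

  @Get(":id")
  async getOne(@Param("id") id: string) {
    return this.service.findOne(id);
  }
}

@Module({ controllers: [ArticleController], providers: [ArticleService] })
export class ArticleModule {}
```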
2.2. Technical Features
AutoBE's most distinctive feature is the integration of specialized compilers at each stage. They validate in real time whether AI-generated code is syntactically correct, type-consistent, and actually executable. When compilation errors occur, the AI immediately receives feedback and modifies the code, repeating this process until complete code is generated.
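Conceptually, the loop behaves like the sketch below; `generateWithAI` and `compile` are placeholders standing in for the agent and the stage-specific compiler, not AutoBE's actual functions.

```typescript
// Conceptual sketch of the compiler-in-the-loop cycle described above.
// `generateWithAI` and `compile` are placeholders, not AutoBE's real API.
interface CompileResult {
  success: boolean;
  diagnostics: string[]; // explains why it failed and how to fix it
}

async function generateUntilItCompiles(
  requirement: string,
  generateWithAI: (prompt: string) => Promise<string>,
  compile: (code: string) => Promise<CompileResult>,
  maxAttempts = 5,
): Promise<string> {
  let prompt = requirement;
  for (let i = 0; i < maxAttempts; i++) {
    const code = await generateWithAI(prompt);
    const result = await compile(code);
    if (result.success) return code; // validated, executable code
    // Feed the compiler's explanation back to the AI and try again.
    prompt = `${requirement}\n\nPrevious attempt failed:\n${result.diagnostics.join("\n")}`;
  }
  throw new Error("Could not produce compiling code within the attempt budget");
}
```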
AutoBE's core competitive advantage lies in the AI-specific compilers independently developed for Prisma, Interface, and Test domains. Unlike general development tools, these compilers deeply understand and are optimized for AI characteristics:
AI-specific Prisma Compiler goes beyond simply validating schema syntax to evaluate the logical consistency and relationship appropriateness of AI-designed data models. It preemptively detects and provides feedback on circular references or unnecessary duplicate relationships that AI tends to generate.
AI-specific Interface Compiler comprehensively validates not only OpenAPI 3.1 spec compliance but also RESTful principle adherence in API design, consistency between endpoints, and completeness of request/response structures. It automatically detects missing authentication headers or error response formats that AI often overlooks.
AI-specific Test Compiler analyzes whether generated test code performs meaningful validation beyond being merely executable. It evaluates test coverage, inclusion of edge cases, and realism of test scenarios to suggest improvement directions to AI.
The biggest differentiator of these compilers is how they communicate with AI. While regular compilers simply say "there's an error," AutoBE's compilers provide detailed feedback on "why it's a problem" and "how to fix it" in a way AI can understand. This close collaboration is the secret to the 100% build success rate.
Another innovation of AutoBE is its structured code generation approach based on AST (Abstract Syntax Tree). After analyzing natural language requirements, AI generates data through function calling according to predefined AST structures. It's as if AI "assembles" code rather than "writes" it. The generated AST is validated by each compiler and finally converted into actual usable code.
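A simplified stand-in for this idea is shown below; the interfaces are invented for illustration and are far smaller than the real AST types linked in the list that follows. Because the AI fills a constrained structure instead of emitting free-form text, the validating compiler can point at the exact field that is wrong.

```typescript
// Simplified, hypothetical stand-in for the AST-style structures mentioned
// above; the real definitions (AutoBePrisma.IApplication, etc.) live in the
// AutoBE repository and are much richer.
interface ModelField {
  name: string;
  type: "string" | "int" | "boolean" | "datetime";
  nullable: boolean;
}

interface ModelDefinition {
  name: string;
  fields: ModelField[];
}

// The AI fills a structure like this via function calling ("assembling" code),
// and a compiler validates it before it is converted into actual source files.
const articleModel: ModelDefinition = {
  name: "Article",
  fields: [
    { name: "id", type: "string", nullable: false },
    { name: "title", type: "string", nullable: false },
    { name: "published_at", type: "datetime", nullable: true },
  ],
};
```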
You can check each compiler's AST structure on GitHub:
- Prisma Compiler: AutoBePrisma.IApplication
- Interface Compiler: AutoBeOpenApi.IDocument
- Test Compiler: AutoBeTest.IFunction
This structured approach ensures consistency and quality of AI-generated code. It also provides a foundation for compilers to validate effectively, making AI and compilers work as a team.
Additionally, AutoBE uses a modern, proven technology stack including TypeScript, NestJS, Prisma ORM, and PostgreSQL/SQLite. This means generated code follows the same standards used in actual production environments.
2.3. Live Demonstration
We've prepared actual backend applications generated by AutoBE to prove its capabilities. These aren't prototypes or demos; they're fully functional production-grade applications created entirely through natural language conversations.
From simple todo applications to complex e-commerce platforms, AutoBE has successfully generated various types of backend systems. Each application includes complete implementations with properly structured databases, RESTful APIs, comprehensive test suites, and production-ready code that follows industry best practices.
- Discussion Board: https://github.com/wrtnlabs/autobe-example-bbs
- To Do List: https://github.com/wrtnlabs/autobe-example-todo
- Reddit Community: https://github.com/wrtnlabs/autobe-example-reddit
- E-Commerce: https://github.com/wrtnlabs/autobe-example-shopping
- Requirements Analysis: Report
- Database Design: Entity Relationship Diagram / Prisma Schema
- API Design: API Controllers / DTO Structures
- E2E Test Functions: test/features/api
- API Implementations: src/providers
- AI Review: AI_REVIEW.md
The process is remarkably simple. Creating a discussion board with AutoBE requires just five natural language commands. No coding knowledge, no technical jargon; just describe what you want:
- I want to create a political/economic discussion board. Since I'm not familiar with programming, please write a requirements analysis report as you see fit.
- Design the database schema.
- Create the API interface specification.
- Make the e2e test functions.
- Implement API functions.
That's it. In about 70 minutes, you'll have a complete backend application ready to deploy. This isn't theory; it's how we generated all the examples above.
Yes, these demo prompts are ridiculously simple.
But during the hackathon, please don't just say "do everything by yourself!" Actually discuss your requirements in detail with the AI. The better your input, the better your output will be.
3. Purpose of the Hackathon
AutoBE seems theoretically perfect. It has all the elements: systematic processes, compiler validation, modern technology stack. Generated code actually compiles, runs, and passes tests. But we still don't have an answer to one important question.
Is the backend application generated by AutoBE really what users wanted?
Until now, our development team has focused on whether each component of AutoBE works correctly. Weāve tested whether compilers perform accurate validation, whether agents generate appropriate code, and whether the entire system operates stably. But these were all validations from a technical perspective.
In actual development fields, there are things as important as technical completeness. Is the generated code easy to maintain? Is the architecture scalable? Is performance optimization appropriate? Are there security vulnerabilities? Above all, is it "good code" from a developer's perspective?
To answer these questions, we need evaluations from experts with actual backend development experience. While automated code review through AI is possible, we trust insights from human intuition and experience more. Especially evaluations from the perspective of "What if I had to take over this code?" can only be done by actual developers.
3.1. What We Want to Hear
We want specific, practical feedback from you, not simple praise or criticism. We want to know how AutoBE's generated requirements specifications compare to those used in actual projects, whether database design is reasonable long-term, whether API design properly follows RESTful principles, whether test code performs meaningful validation, and whether the final implementation code has production-level quality.
We're also curious about how AutoBE's generated code differs from what you would write yourself. What's better, what's lacking, and what direction should we take for improvement?
Above all, we expect your honest evaluation on whether AutoBE is truly a tool that can improve developer productivity or merely an interesting technical demo.
4. Eligibility and Requirements
This hackathon targets developers with at least 1 year of practical experience in backend development. You need experience not just learning backend development but actually developing and operating real services.
Specifically, you need practical experience with at least one of the following technology stacks: Node.js-based Express or NestJS, Java-based Spring Boot, Python-based Django or FastAPI, or similar backend frameworks for actual projects.
Relational database design experience is essential. Beyond simple CRUD operations, you should have experience designing relationships between tables, establishing indexing strategies, and performing query optimization. Experience understanding and applying RESTful API design principles is also important.
English proficiency is required. All conversations with AutoBE are conducted only in English, and all generated code and documentation are written in English. Therefore, you need both conversational ability to communicate naturally in English and reading comprehension to understand technical documentation.
Finally, you need a personal laptop or desktop computer. While AutoBE operates web-based, you must be able to download generated code and run and test it in your local environment.
5. How to Participate
5.1. Registration
https://forms.gle/8meMGEgKHTiQTrCT7
Those who wish to participate in the hackathon should submit an application through Google Forms. The application requires basic personal information along with information about your backend development experience.
This hackathon is limited to 70 participants on a first-come, first-served basis, and registration will close early once 70 eligible applicants have signed up.
The application deadline is September 10, 2025; applications will not be accepted after that date. Confirmed participants will be notified individually via email on September 11.
5.2. Account Issuance and Preparation
On September 11, the day before the event, detailed participation instructions will be sent to participants' individual emails. This email will include unique IDs and passwords to access the AutoBE platform, along with available AI model information.
We'll also provide a simple guide on how to use AutoBE and contact information for technical assistance during the hackathon. We recommend preparing your local development environment in advance if possible. It's good to have Node.js, Git, and your preferred code editor installed.
5.3. Hackathon Process
The hackathon starts at 8:00 AM PDT on September 12. Participants log into the AutoBE platform with provided accounts and must generate a total of 2 backend applications using two different AI models (openai/gpt-4.1-mini and openai/gpt-4.1).
When using each model, you must create applications with different themes. For example, try a simple todo app for the first model and a more complex e-commerce platform for the second. This is to evaluate each model's capabilities and limitations from various perspectives.
During generation, please carefully record your conversations with AutoBE, the results of each stage, and any problems encountered along with how you solved them. Taking screenshots or saving logs is also helpful. These materials will be important evidence when writing reviews later.
5.4. Submission
During the hackathon period (by 23:59:59 PDT on September 14, 2025), you must write and submit detailed review documents. Important: You must submit a separate review for each generated backend application. For example, if you generate 2 applications (one with gpt-4.1-mini and one with gpt-4.1), you must write 2 individual reviews. Reviews should be posted to AutoBE's GitHub Discussions at https://github.com/wrtnlabs/autobe/discussions/categories/hackathon-20250912.
Review documents have no specific format requirements but should include sufficiently detailed analysis for each project. Please explain not just "good" or "bad" but specifically what parts were good or bad, in what ways, and why. Do not combine multiple applications into one review - each application deserves its own thorough evaluation.
6. Provided AI Models
6.1. openai/gpt-4.1-mini
This model is a practical choice balancing performance and cost. While large-scale applications are somewhat challenging, it has sufficient capability to generate small to medium-sized backend applications. It's particularly suitable for systems with about 20 tables and around 150 API endpoints.
It shows excellent performance generating backends for common web services like community boards, blog platforms, and project management tools. It implements not only basic CRUD operations but also common features like user authentication, permission management, and file uploads well. Particularly noteworthy is its strength in the early stages of development: requirements analysis and API design. The model excels at understanding your natural language requirements and converting them into well-structured specifications and clean API designs. These outputs are consistently high-quality, making it an excellent choice for project initialization.
However, there are some limitations. It occasionally makes logical errors when implementing complex business logic or fails to completely resolve compilation errors when generating E2E test code. This isn't due to technical defects in AutoBE itself, but rather the inherent limitations of this lightweight model.
We deliberately provide this model first to demonstrate the importance of model capacity in AI-powered code generation, and, to be frank, using only the most powerful models from the start would make hosting this hackathon financially unfeasible. The cost difference between models is substantial, which you'll understand when you later access the more powerful openai/gpt-4.1.
Despite these limitations, gpt-4.1-mini still generates practical-level code that helps you understand AutoBE's capabilities. In fact, many developers find a cost-effective workflow by using this model for initial project generation, then refining the code with AI assistants like Claude Code or GitHub Copilot. This hybrid approach leverages the strengths of AutoBE's structured generation while maintaining budget efficiency: a pragmatic solution for real-world development.
6.2. openai/gpt-4.1
Available only after completing the openai/gpt-4.1-mini review.
Once you complete your review of the mini model, you'll immediately gain access to the full-power openai/gpt-4.1.
This is the most powerful AI model currently available, optimized for generating large-scale enterprise-grade backend applications. It can understand and implement complex business logic and handles large systems with over 500 API endpoints and over 1,000 test scenarios without problems.
This model's strength lies in context understanding. It grasps subtle connections between requirements and can infer implicit requirements. It is also proficient with advanced features, automatically implementing real-time notification systems, complex permission systems, transaction processing, and caching strategies.
Here's where the real magic happens: with this model, AutoBE achieves a true 100% build success rate. Every single backend application generated with openai/gpt-4.1 compiles perfectly, passes all tests, and is genuinely production-ready. The difference from the mini model is night and day: all those compilation errors in the test and realize stages? They simply don't exist here.
But this powerful performance comes at a steep cost. Generating a typical e-commerce platform consumes about 150 million tokens, equivalent to about $300-400. Due to these high costs, we cannot accept unlimited hackathon participants and must carefully manage access to this premium model.
That's why we require you to first experience and review openai/gpt-4.1-mini: not only does this help us manage costs, but it also gives you valuable perspective on how model capacity impacts code generation quality. Fortunately, once you unlock access, it's provided completely free to hackathon participants, so you can use it freely without any cost concerns.
6.3. qwen/qwen3-235b-a22b-202507
Optional - Just for Fun!
This model is NOT required for the hackathon. It's included purely for fun and for those curious about local LLM performance!
This is the lightest open-source based model, requiring only laptop-level resources. We've included it as a bonus for participants who are curious about how local LLMs perform in code generation tasks compared to commercial cloud models. Think of it as a playground to explore the current state of open-source AI models.
Due to input token limitations, it can only generate small-scale applications but performs sufficiently for simple projects. It's suitable for generating applications with 5-10 tables and around 20 API endpoints, such as todo apps, memo applications, and simple accounting books. It implements basic CRUD operations and simple business logic without difficulty.
However, this model has significant limitations. It struggles to understand complex requirements and often fails to resolve compilation errors, causing process interruptions. But that's exactly what makes it interesting! If you're curious about the performance gap between local open-source models and commercial cloud models, this is your chance to experience it firsthand. Who knows? You might be surprised by what it can (or can't) do.
7. Evaluation Criteria and Review Writing Guide
7.1. Requirements Analysis Stage Evaluation
When evaluating the requirements specification generated by AutoBE's Analyze Agent, please approach it from the following perspectives. First, check how accurately your natural language requirements were understood and documented. Look at whether relationships and priorities between features are clearly defined, not just listed.
Whether user personas and role definitions are appropriate is also an important evaluation point. Please review whether the various user types needed in actual services are all considered and whether each user's permissions and accessible features are logically designed. Also check whether non-functional requirements like performance, security, and scalability were considered.
Please also evaluate document quality. See whether it's written in a structure that is easy for developers to read and understand, whether there are ambiguous expressions or conflicting content, and whether it has sufficient detail to start actual development.
7.2. Database Design Evaluation
When evaluating database schemas and ERDs generated by Prisma Agent, please use production-readiness as your criterion. Check whether relationships between tables are logically valid and whether there are unnecessary duplications or circular references.
Normalization level is also an important evaluation factor. Please look at whether joins have become complex due to over-normalization or, conversely, whether there's potential for data integrity issues due to insufficient normalization. Also check whether primary and foreign key settings for each table are appropriate and whether indexing strategies consider query performance.
Don't miss details like naming convention consistency, appropriateness of data type selection, and default values and constraint settings. Especially from a scalability perspective, consider whether the structure allows easy schema modifications when future feature additions or changes are needed.
7.3. API Design Evaluation
When evaluating API design generated by Interface Agent, first check RESTful principle compliance. Please see whether HTTP methods are used meaningfully, whether URIs are resource-centric, and whether status codes are properly utilized.
API consistency is also important. Check whether similar function endpoints follow consistent patterns, whether request/response formats are unified, and whether error response structures are standardized. Also look at whether common features like pagination, filtering, and sorting are consistently implemented.
Please also evaluate documentation level. Check whether OpenAPI specs are complete, whether descriptions for each parameter and response field are sufficient, and whether examples are provided clearly. Also important evaluation points are whether authentication/authorization systems are reasonably designed and whether protection measures for sensitive data are in place.
7.4. Test Code Evaluation
When evaluating E2E test code generated by Test Agent, focus on whether it performs meaningful validation. Please check whether it verifies business logic works correctly, not just whether API calls succeed.
Test scenario completeness is also important. Look at whether it sufficiently covers exception situations and edge cases, not just normal use cases. Check whether it reflects actual user behavior patterns well and whether all important user journeys are tested.
Please also evaluate code quality. See whether test function names are clear and understandable, whether test data setup is appropriate, and whether assertions are sufficiently specific. Also important evaluation factors are whether independence between tests is guaranteed and whether causes can be easily identified when tests fail.
7.5. Implementation Code Evaluation
When evaluating final backend code generated by Realize Agent, please use production-level quality as your criterion. Check whether code is readable and understandable, whether appropriate abstraction and modularization are achieved, and whether it follows software design principles like SOLID.
From an architectural perspective, see whether responsibilities between layers are clearly separated, whether dependency injection is properly utilized, and whether the structure allows easy extension and modification. Also check whether error handling is systematic and whether logging is implemented at appropriate levels.
Don't miss performance and security aspects. Look at whether database queries are efficient, whether there are common performance issues like N+1 problems, and whether there are security vulnerabilities like SQL injection. Also check whether TypeScript's type system is properly utilized and whether there's any abuse of the any type.
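For reference, the N+1 pattern mentioned above looks roughly like the first function in this hedged, Prisma-style sketch (model and field names are invented for illustration); the second function shows the batched alternative a reviewer would hope to find instead.

```typescript
// Hypothetical Prisma-style sketch of an N+1 query pattern versus a batched
// alternative; model and field names are invented for illustration.
async function listArticlesNPlusOne(prisma: any) {
  const articles = await prisma.article.findMany(); // 1 query for all articles
  for (const article of articles) {
    // +N additional queries, one per article: the pattern to flag in review
    article.comments = await prisma.comment.findMany({
      where: { articleId: article.id },
    });
  }
  return articles;
}

async function listArticlesBatched(prisma: any) {
  // A single query that eagerly loads the relation instead
  return prisma.article.findMany({ include: { comments: true } });
}
```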
7.6. Overall Evaluation
After completing individual evaluations for each project, please write an overall evaluation. Share your opinions on AutoBE's overall strengths and weaknesses, what types of projects it's suitable or unsuitable for, and how it should be used in actual development fields.
We especially want practical evaluations on whether there's actual development time reduction, what level the generated code quality corresponds to (junior, mid-level, or senior developer), and whether you could maintain the code if you had to take it over.
Finally, please provide specific suggestions for how AutoBE should improve. Rather than simply saying "I wish it generated better code," explain specifically which parts should be improved, how, and with what priority.
8. Prizes and Benefits
8.1. Grand Prize (1 person)
The grand prize winner, the author of the best review, will receive $2,000. We'll select the review that evaluates AutoBE from a professional, balanced perspective and presents specific, actionable improvement suggestions, rather than simply the longest or most flattering review.
8.2. Excellence Award (1 person)
The person who writes the second-best review will receive $1,000. The excellence award will also be selected based on review professionalism and insights.
8.3. Participation Prize (All who meet evaluation criteria)
All who participate sincerely and provide meaningful feedback will receive a $50 participation prize. However, all of the following conditions must be met:
- Generate projects using both required AI models (openai/gpt-4.1-mini and openai/gpt-4.1)
- Write detailed reviews for each project
- Include all required evaluation elements
- Meet minimum content requirements
8.4. Exclusion Conditions
Participation prizes will not be paid in the following cases:
- Not providing even minimal review feedback
- Writing reviews using AI as proxy
- Not using both required models
- Writing perfunctory or insincere reviews
- Plagiarizing others' reviews
AI-assisted review writing is not allowed. The core purpose of this hackathon is to collect genuine feedback based on actual backend developers' experience. AutoBE's development requires candid opinions about the inconveniences, possible improvements, and practical uses you notice firsthand. Formulaic reviews generated by AI don't serve this purpose, so participants found to have done so will be excluded from evaluation.
8.5. Judging and Announcement
Judging will proceed for 2 weeks after the submission deadline, with results announced via individual email and the official website (https://autobe.dev). Judging will be conducted jointly by the AutoBE development team and external experts, comprehensively evaluating review professionalism, specificity, practicality, and balance.
Prizes will be paid within one week after the results announcement, and you must submit bank account information capable of receiving international transfers. Tax matters must be handled directly by recipients, and we'll provide the necessary documents.
9. Disclaimer
9.1. Beta Version Limitations
AutoBE is currently in beta, still in pre-release development stage. Therefore it's not perfect and may have various problems and limitations. These are characteristics of the current development state, not bugs, so please understand and participate.
Generated code may not always be optimized and can sometimes be inefficient or unnecessarily complex. Also, compilation or runtime errors may occur in certain situations, and the process may stop without resolving them.
9.2. Use of Generated Code
We don't recommend using hackathon-generated code in actual production environments. Code generated by AutoBE hasn't undergone sufficient validation and may contain security vulnerabilities or performance issues.
If you decide to actually use generated code, please use it only after professional code review and security audit. Wrtn Technologies is not responsible for any issues arising from using AutoBE-generated code.
9.3. Open Source and Public Review Notice
AutoBE is an open-source project, and all hackathon reviews will be publicly posted on GitHub Discussions. Therefore, when using the AI chatbot during the hackathon, please be extremely careful not to input any sensitive personal information or business confidential information. Everything you discuss with the AI and all generated code will be part of your public review.
Remember: Your conversations, generated applications, and reviews will be visible to anyone on the internet. Plan your hackathon projects accordingly and avoid using real business ideas or proprietary information.
10. Next Hackathon Plans
10.1. Current Limitations and Improvement Direction
The biggest reason for limiting this 1st hackathon to 70 people is cost. AutoBE has so far focused only on unit implementation and testing - whether each agent makes reasonable designs and writes code, and whether the AI-specific compiler works as expected.
Unfortunately, AI token usage optimization like RAG (Retrieval-Augmented Generation) hasn't been implemented yet. Currently, generating a medium-scale e-commerce platform with AutoBE consumes about 150 million tokens, equivalent to about $300. With this cost structure, hosting a large-scale hackathon is too burdensome.
10.2. 2nd Hackathon Preparation Plans
We're preparing the 2nd hackathon targeting Q4 2025. By then, we plan to complete the following improvements:
Token Usage Optimization
- Introduce RAG technology to efficiently reuse repetitive code patterns and common implementation cases
- Reduce token usage when transferring information between agents through context compression technology
- Prevent duplicate generation for similar requirements by introducing caching mechanisms
Cost Efficiency Improvement
- Target approximately 80% reduction in token usage compared to current
- Aim to successfully generate shopping mall-level large applications even with small models like GPT 4.1 mini
- Optimize to generate same quality backend applications for under $20
10.3. Long-term Vision
Our goal is to create a world where anyone can easily create backend applications through AutoBE. Hackathons are an important process for realizing this vision, and we aim to create a platform that grows together with the developer community.
We'll actively incorporate participants' feedback from each hackathon to improve AutoBE and develop it into a better tool. Your participation and interest create AutoBE's future.