
Successfully built a complete backend application with "qwen3-235b-a22b" in AutoBE
Achievement Summary
We successfully generated a complete backend application without compilation errors using the qwen3-235b-a22b model, consisting of 10 API functions and 37 DTO schemas. This is the first successful build with this particular model.
Development Progress
AutoBE (an open-source platform for AI-powered backend application development with specialized compilers) continues undergoing enhancement testing. We anticipate generating increasingly complex applications, potentially Reddit-style communities with approximately 200 API functions, within the coming month.
Model Performance Analysis
Testing revealed that qwen3-30b-a3b struggles with DTO type definitions despite producing professional requirement analyses and database designs. Given its smaller scale, extensive optimization efforts were deemed unnecessary.
Cost Considerations
Generating Amazon-level shopping platforms currently requires roughly 150 million tokens via gpt-4.1, costing approximately $450. Local LLM alternatives like qwen3-235b-a22b present economically viable pathways when combined with RAG optimization strategies.
Hackathon Integration
Due to qwen3-235b-a22b's promising results, the AutoBE hackathon (initially supporting only gpt-4.1 variants) has been expanded to include this model in the competition framework. We invite developers interested in AI-assisted backend development to participate.
Future Direction
We plan systematic testing across additional local LLMs and will publish findings regularly. Whenever a model shows exceptional backend-coding capability, we will schedule follow-up hackathons to collect diverse implementation case studies.