Team up, iterate fast, and ship a production-ready LLM application. Your project could be your next startup.
2-5 Students
Collaborate, divide work, iterate together
2 Weeks
Per sprint — four sprints in total, with a demo at each checkpoint
5 Minutes
Pitch to VCs, entrepreneurs, and faculty
36" × 48"
Present a poster and network with industry leaders
Sound, language, image, video processing and generation
Scraping, form filling, document processing
Domain-specific AI assistants and interns
Content rewriting, adaptive experiences
Code generation, testing, documentation
Document processing in regulated industries
Public sector automation and analysis
Conversational interfaces and voice agents
Head to the #finding-teams channel in Slack to pitch your ideas and find collaborators who share your vision.
Open Slack
A memorable name for your project
Names and Stanford email addresses
Link to repo and GitHub handles of all team members
What your project does in one sentence
Problem statement, your solution, why existing solutions fall short, and how you address the gap
Week-by-week or sprint-by-sprint plan (we know these will change!)
A GPT-powered indoor navigation assistant that provides real-time conversational guidance to visually impaired users.
VisionaryAI combines real-time computer vision with LLMs to help visually impaired individuals navigate indoor environments. The system performs live object detection while a GPT-based dialogue layer generates spoken navigation instructions.
Existing solutions rely on static maps or rigid rule-based systems. VisionaryAI provides a conversational, vision-based assistant capable of reasoning over live camera input.
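To make the example proposal's architecture concrete, here is a minimal, hypothetical Python sketch of the pipeline it describes: per-frame detections from whatever real-time object detector a team chooses are summarized into a prompt, and an LLM turns them into one spoken-style navigation instruction. The `narrate_frame` helper, the model choice, and the prompt wording are illustrative assumptions, not part of the actual project; the only external dependency is the `openai` client library with an API key in the environment.

```python
# Hypothetical sketch of the pipeline described above. Detections come from
# any object detector the team picks; the LLM only sees a text summary.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def narrate_frame(detections):
    """detections: list of dicts like
    {"label": "door", "bearing": "11 o'clock", "distance_m": 3.0}."""
    # Summarize the frame's detections into a compact scene description.
    scene = "; ".join(
        f'{d["label"]} at {d["bearing"]}, about {d["distance_m"]:.0f} m'
        for d in detections
    )
    # Ask the LLM for a single short spoken-style instruction.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You guide a visually impaired user indoors. "
                        "Reply with one short, calm spoken instruction."},
            {"role": "user",
             "content": f"Current scene: {scene}. What should I do next?"},
        ],
    )
    return response.choices[0].message.content

# Example: one frame's detections -> one instruction (text-to-speech is a separate step).
print(narrate_frame([
    {"label": "open door", "bearing": "12 o'clock", "distance_m": 4.0},
    {"label": "chair", "bearing": "2 o'clock", "distance_m": 1.5},
]))
```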
Use of AI coding tools (GitHub Copilot, Claude, ChatGPT, Cursor, etc.) is actively encouraged. These tools represent the modern reality of software development.
Watch last year's Demo Day to see the caliber of projects coming out of CS 224G.