What started as a 36-hour hackathon project last weekend could empower the open-source community to upend the smart glasses industry. Five team members built a $20 pair of smart glasses, dubbed Open Glass, that connects what you see and hear to an AI model, such as Meta’s Llama 3.
Scott Fitsimones headed into downtown San Francisco on Saturday morning to meet Nik Shevchenko, not knowing he was about to spend the next 36 hours at an AI hackathon building a new device with him. At the time, Fitsimones thought he was picking up an AI pendant made by Shevchenko, whom he describes as a ringleader of San Francisco’s growing AI wearable movement. By the end of the weekend, their team had won the hackathon and had roughly 1,500 people on a waitlist to pre-order their open-source smart glasses.
“I had no idea about this hackathon, and it was just very serendipitous,” said Fitsimones. “And then, you know, I think we started just jamming and building on the initial prototype.”
For his part, Shevchenko went into the hackathon knowing he wanted to build the hardware for some sort of smart glasses, according to his teammates (Shevchenko did not respond to Gizmodo’s request for an interview). He was joined by Stepan Korshakov, who solved the project’s most difficult software challenges. The two picked up Fitsimones, Shreeganesh Ramanan, and Jatin Gupta to complete a winning team.
In a large, airy room overlooking the Bay Area’s blue waters and green mountains, software engineers coded away on poofy couches with stacks of La Croix off to the side. Cerebral Valley hosts hackathons like this often, bringing together San Francisco’s thriving AI startup scene. Shevchenko was one of the few wielding a soldering iron instead of a laptop, while the rest of the team cracked on with the software. At one point late on Saturday, Shevchenko left the event to 3D print the Open Glass computer case.
After roughly 36 hours of hacking, Shevchenko and the team proudly presented a pair of cheap sunglasses with a black box protruding from the right side. The glasses featured a camera that took a picture every five seconds and a microphone that constantly transcribed audio. Together, these build a database of photos and text mirroring what your eyes and ears take in. Press a button on the side of the glasses, and you can ask Meta’s Llama 3 to answer questions about your own life.
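The idea described above — a rolling log of captures that an LLM is queried against — can be sketched in a few lines. This is a toy illustration, not the project’s actual code (which is on GitHub); every class and function name here is hypothetical:

```python
import time
from collections import deque

class GlassMemory:
    """Hypothetical sketch of the Open Glass concept: keep a rolling log
    of what the wearer sees (as image captions) and hears (as transcripts),
    then turn it into a prompt for an LLM such as Llama 3."""

    def __init__(self, max_entries=1000):
        # deque with maxlen silently drops the oldest entries once full
        self.entries = deque(maxlen=max_entries)

    def record(self, kind, content, timestamp=None):
        # kind: "photo" (a caption of the five-second snapshot) or
        # "audio" (a chunk of the continuous transcription)
        self.entries.append((timestamp or time.time(), kind, content))

    def build_prompt(self, question):
        # Concatenate recent context, oldest first, ahead of the question
        context = "\n".join(f"[{k}] {c}" for _, k, c in self.entries)
        return f"Context from the wearer's day:\n{context}\n\nQuestion: {question}"

mem = GlassMemory()
mem.record("photo", "keys on the kitchen counter")
mem.record("audio", "Nice to meet you, I'm Priya")
print(mem.build_prompt("Where did I leave my keys?"))
```

In a real device the button press would send the built prompt to a hosted or local model; here the sketch only shows how the context database accumulates and is assembled.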
“What was that person’s name?”, “Where did I leave my keys?”, and “How many calories are in these fruits?” were some of the questions the AI answered during the demo. The technology has useful applications for many people, but could especially excel at aiding someone with impaired vision or hearing.
Shevchenko’s team won first prize at the hackathon, despite a bug in the glasses’ speech-to-text abilities during the demo. They earned nods from Meta and Groq executives, as well as Hugging Face CEO Clem Delangue who judged the final projects. Within hours, Shevchenko’s entrepreneurial mind kicked into high gear and he created a waitlist to pre-order a version of the prototype.
“Woah woah woah woah, it’s 1,300,” said Korshakov on the phone with me Monday upon discovering how many pre-orders they’d received. “People around the world want to build things with this. Now they have some way to own and contribute to the success of this project themselves.”
While other smart glasses are on the market today – such as Meta’s Ray-Bans – they’re not open source or this cheap. Open Glass offers a relatively inexpensive kit that lets developers pick and choose which LLMs they want to use and decide what they want the glasses to do. For instance, not every pair of Open Glass has to take pictures or constantly record audio. This makes for an affordable, hackable option in a form factor that has previously been prohibitively expensive and limited in what you could do.
“You can plug it into OpenAI, you can plug it into Gemini,” said Ramanan in a phone interview. “It’s a lot about having the ability to mix and match all the best options and then create your own interesting applications and frameworks.”
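The mix-and-match idea Ramanan describes amounts to a pluggable backend: the glasses produce a prompt, and any model provider can answer it. A minimal sketch of that pattern follows — the registry, function names, and stub replies are all illustrative assumptions, not the project’s real interfaces:

```python
from typing import Callable, Dict

# Registry mapping a backend name to a function that takes a prompt
# string and returns the model's reply. A real build would register
# actual API clients (OpenAI, Gemini, a local Llama 3, etc.).
BACKENDS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a backend function to the registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        BACKENDS[name] = fn
        return fn
    return wrap

@register("llama3")
def llama3_stub(prompt: str) -> str:
    # Stand-in for a call to a hosted or local Llama 3 endpoint
    return f"[llama3] answering: {prompt}"

@register("gemini")
def gemini_stub(prompt: str) -> str:
    # Swap-in point for Google's Gemini API instead
    return f"[gemini] answering: {prompt}"

def ask(backend: str, prompt: str) -> str:
    # The glasses stay the same; only the chosen backend changes
    return BACKENDS[backend](prompt)

print(ask("llama3", "Where did I leave my keys?"))
```

Switching models is then a one-line change at the call site — `ask("gemini", …)` instead of `ask("llama3", …)` — which is the kind of flexibility a closed product can’t offer.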
Smart glasses haven’t quite caught on like other wearables have. However, the advancement of multimodal AI models makes this an exciting time for smart glasses. It’s easy to imagine how OpenAI’s new GPT-4 Omni, which can process video, audio, and text simultaneously, could be used in glasses like this. Google even showed off a prototype of new Google Glasses in a demo of its latest AI on Tuesday. Open Glass hopes that giving the open-source community access to this technology will enable greater innovation in the space.
One thing that has plagued smart glasses is privacy concerns. Meta’s Ray-Bans do not constantly record audio and video to turn your life into a database, and that’s probably a good thing. But there’s a growing community of AI gadget fanatics in Silicon Valley who are drawn to the idea of constantly recording their lives to create the ultimate personal assistant. Rings, pendants, and now glasses are sprouting up from San Francisco startups, all fascinated by this potential.
While open-sourcing the technology will allow developers to innovate on these ideas in a more homegrown way, there are still major concerns to be addressed with privacy and costs. These questions are important but aren’t exactly the focus for developers at the early stages of this technology. What’s more important to them is making it useful.
Some non-technical people may purchase Open Glass simply to use a cheap pair of smart glasses. The team is still hashing out the product, but it seems it will ship with large language models built in and a companion mobile app. The actual price of the device is also subject to change, though all the source code is available for free on GitHub.
The Open Glass story is a testament to San Francisco’s thriving AI startup culture. The open-source community could unlock key breakthroughs in stalled technologies like smart glasses. More practically, it could also offer non-technical people a pair of smart glasses for the price of a movie ticket.