TL;DR
AirCaps is bringing AI assistance to real-world conversations.
Our software displays live captions, translations, and proactive AI insights for your meetings directly in your field of view on lightweight AR glasses.
We're used by people with hearing loss, multilingual communicators, and meeting-heavy professionals (healthcare workers, executives, salespeople).
We’ve transcribed 15,000+ hours of in-person conversations (and counting). Today, AirCaps assists with 11% of our users’ in-person conversations.
We did $93K in revenue in October, up 6.5x from September, while spending <$3K on marketing. Our power users average 6+ hours of daily usage, and our day-30 retention is 91%.
We previously went viral (75M+ views on TikTok, 150K+ followers) and have been featured by The New Yorker, WIRED, and Forbes.
We’re building the capture and intelligence layer for the 200 billion daily conversations that happen face-to-face.
The Problem
The average person struggles to understand and retain 50% of what’s said in an in-person conversation, forgets 70% of it within 48 hours, and has no way to review any of it.
Each of us has 20-30 in-person conversations a day, so humanity’s ~8 billion people generate ~200B in-person conversations (averaging 10 minutes each, over 33 billion hours of spoken content) every single day.
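As a back-of-envelope check (the ~8 billion world population and the 25-conversation midpoint are assumed inputs; the 10-minute average is from above):

\[
8\times10^{9}\ \text{people}\times 25\ \tfrac{\text{conversations}}{\text{day}}\approx 2\times10^{11}\ \tfrac{\text{conversations}}{\text{day}},\qquad
2\times10^{11}\times 10\ \text{min}\approx 3.3\times10^{10}\ \tfrac{\text{hours}}{\text{day}}.
\]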
While virtual meetings have captions, recordings, and hundreds of AI assistants, in-person conversations remain completely unassisted by technology. Why? Because there has never been a discreet, socially acceptable way to bring AI into a face-to-face conversation.
Our Solution
We're finally bringing AI to real-world conversations with always-on visual assistance in a discreet, socially acceptable, and hands-free form factor: AR glasses.
Our earliest customers were people with hearing loss struggling with $10,000+ hearing aids that fail in noisy environments. AirCaps gives them real-world captions that work where audio amplification breaks down.
We expanded to live translation for people facing language barriers; we’re now used by limited-language-proficiency workers, multilingual families, and employees at multinational organizations.
We’ve recently been adopted by meeting-heavy professionals (healthcare workers, executives, salespeople) who need real-time AI assistance during high-stakes conversations but can’t use screens without breaking eye contact.
AR glasses are the only form factor that works for in-person conversations: discreet, hands-free, socially acceptable, and always in your field of view.
The Team
Madhav and Nirbhay have been obsessed with smart glasses for 11 years and met at an AR hackathon in the summer of 2024.
Madhav (CEO, Yale CS) built his first Google Glass apps at age 13. He researched audio AI at MIT Media Lab, where he was part of the team that built the world's first live AI-human musical co-performance. His Yale thesis focused on extracting clean transcripts from noisy multi-speaker audio. His inspiration for AirCaps is personal: his friend dropped out of high school because hearing loss made classroom conversations impossible to follow, despite spending thousands on hearing aids.
Nirbhay (CTO, Cornell CS) started building voice AI on smart glasses in high school, developing an emotional support tool for people with autism. He previously built voice and conversational AI platforms for therapy as the first hire at several early-stage startups (including YC companies).
Why Now
There’s never been a better time for us: AR glasses are going mainstream (Meta, Apple, and Google are investing billions), speech AI is getting faster and more accurate, and people already default to AI assistance for conversations, with 3 out of every 4 professionals using AI notetakers for virtual meetings.
Ask
If you know doctors who use medical transcription services, or organizations that rely on field sales (real estate, home services, retail), please connect us!