Fleeting started from a personal frustration: I was constantly having ideas, insights, and thoughts throughout the day that I'd forget within hours. Traditional note-taking apps felt too rigid, too slow, or required too much mental overhead to be useful in the moment.
The core insight was that memory isn't about perfect recall—it's about finding connections. When I have a vague memory of discussing something with a friend months ago, I don't need to remember the exact words. I need to find the context and connections that help me reconstruct the thought.
Technically, this is a fascinating problem that sits at the intersection of mobile development, cloud infrastructure, and AI. The Flutter app handles real-time audio recording with automatic transcription via OpenAI's Whisper API. The transcribed text is embedded with OpenAI's text-embedding models and stored as vectors in Postgres using Supabase's pgvector extension.
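The server side of that pipeline can be sketched in Python. This is a minimal illustration rather than the app's actual code: the model names (`whisper-1`, `text-embedding-3-small`) are real OpenAI identifiers, but the helper names and the 120-word chunk size are assumptions I've made for the sketch.

```python
# Sketch of the capture pipeline: record -> transcribe -> embed -> store.
# Helper names and chunk size are illustrative, not the app's real code.

def transcribe(audio_path: str) -> str:
    """Send a recorded clip to OpenAI's Whisper API and return plain text."""
    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set
    client = OpenAI()
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text


def embed(text: str) -> list[float]:
    """Embed transcribed text for storage in a pgvector column."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding


def chunk(text: str, max_words: int = 120) -> list[str]:
    """Split a long transcript into word-bounded chunks so each embedding
    captures one idea rather than a whole rambling recording."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]
```

Chunking before embedding matters for retrieval quality: one vector per idea keeps semantic search precise, where one vector per recording would blur everything together.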
The real magic happens in the retrieval system. When users ask questions like 'what was I thinking about regarding that new project idea?', the app embeds the query, performs semantic search across the vector database, and uses the retrieved context to generate relevant responses.
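The retrieval step boils down to a similarity ranking plus prompt assembly. In the app this ranking happens inside Postgres via pgvector; the in-memory stand-in below (pure Python, with function names I've invented for illustration) shows the shape of the logic:

```python
# Stand-in for pgvector's similarity search: rank stored note vectors by
# cosine similarity to the query vector, then build a grounded RAG prompt.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two vectors; 0.0 if either has zero length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def top_k(query_vec: list[float], notes: list[tuple[str, list[float]]],
          k: int = 3) -> list[str]:
    """notes is a list of (text, vector) pairs; return the k closest texts."""
    ranked = sorted(notes,
                    key=lambda n: cosine_similarity(query_vec, n[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]


def build_prompt(question: str, context: list[str]) -> str:
    """Assemble retrieved notes into a grounded prompt for the model."""
    joined = "\n- ".join(context)
    return f"Using these past notes:\n- {joined}\n\nAnswer: {question}"
```

The key design point is that the user's vague question never has to match the note's exact words: both are embedded into the same vector space, so "that new project idea" lands near the note where the idea was originally captured.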
What I'm learning: Building AI-first applications requires fundamentally different thinking about user interfaces. The app needs to be invisible when capturing information but intelligent when surfacing it.
Key learnings
- Vector databases and semantic search at scale
- Real-time audio processing in Flutter
- RAG architecture patterns
- Privacy-first AI application design