Glyss - AI User Researcher
The Problem
Our team has been conducting too few qualitative interviews due to limited staff capacity, time constraints, and zero budget for external tools. We often defaulted to quantitative research, which led to assumptions, product gaps, and costly rework after deployment. We needed a scalable, resource-light way to run more user interviews without overburdening the team.


The Idea & The Approach
If we couldn’t scale research with people, we could scale it with AI as the moderator. Instead of manually preparing guides, scheduling interviews, and conducting them one by one, what if we created an AI-powered interviewer that could:
Generate tailored interview scripts in minutes (see the sketch after this list),
Conduct conversations in a natural, voice-driven way, and
Synthesize results automatically across dozens of sessions.
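As a concrete illustration of the first capability, here is a minimal sketch of prompt-driven guide generation, assuming a call to Claude through Anthropic’s SDK; the function name and prompt wording are illustrative, not the production version.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Hypothetical generator: turns a research goal into a draft interview guide.
// The prompt below is illustrative, not the production prompt.
async function generateInterviewGuide(researchGoal: string): Promise<string> {
  const response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content:
          `You are an experienced UX researcher. Draft a semi-structured ` +
          `interview guide (warm-up, core questions, probes, wrap-up) for ` +
          `this research goal: ${researchGoal}`,
      },
    ],
  });
  // The SDK returns content blocks; take the text of the first one.
  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```

A researcher then only reviews and trims the draft instead of writing a guide from scratch.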
This idea reframed the challenge: the barrier wasn’t just time but manual bottlenecks. By removing those bottlenecks with AI, qualitative research could become as fast and scalable as quantitative surveys—without losing the human touch.
I explored AI-enabled solutions and saw potential in Lovable, a no-code app builder we had used for simpler tasks, paired with Supabase for the backend. Instead of building a complex tool internally (which was unrealistic given our engineering constraints), I designed and built a custom AI-driven interview platform directly in Lovable, later integrating Claude Code to power more advanced functionality.
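To give a flavor of how the pieces fit together, here is a hedged sketch of persisting a configurable interviewer persona (tone, style, role) from the Lovable-built frontend to Supabase; the table name, columns, and persona fields are assumptions for illustration, not the actual schema.

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholder Supabase credentials, not the real project's.
const supabase = createClient(
  "https://YOUR-PROJECT.supabase.co",
  "YOUR-ANON-KEY"
);

// Hypothetical persona shape: the tone, style, and role described above.
interface InterviewerPersona {
  name: string;
  tone: "warm" | "neutral" | "formal";
  style: "open-ended" | "structured";
  role: string; // e.g. "UX researcher exploring onboarding friction"
}

// Save a persona so the AI interviewer can load it per research project.
// Assumes a hypothetical `personas` table with a JSONB `config` column.
async function savePersona(projectId: string, persona: InterviewerPersona) {
  const { error } = await supabase
    .from("personas")
    .insert({ project_id: projectId, config: persona });
  if (error) throw error;
}
```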


The Solution
The platform streamlined the entire qualitative research process:
AI Interview Guide Generation: Eliminated 5–6 hours of manual prep per research cycle by auto-generating tailored interview guides. Researchers only needed to review, not create from scratch.
Configurable AI Interviewer Personas: Set interviewer characteristics (tone, style, role) to match the research context.
Voice & Chat Interviews: Participants could engage via text or voice. ~70% preferred voice, finding it more natural and conversational (see the voice sketch after this list).
Automated Synthesis & Analysis: AI summarized results across dozens of interviews, highlighting themes, gaps, and emerging insights that even experienced researchers might have overlooked (sketched after this list).
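For the voice option, a browser can cover both speaking and listening without extra infrastructure. Below is a minimal sketch using the standard Web Speech API; the `askModel` helper is a hypothetical stand-in for whatever backend call produces the next question, and this illustrates the approach rather than the platform’s exact implementation.

```typescript
// Minimal browser-side voice loop using the Web Speech API.
// `askModel` is a hypothetical stand-in for the backend call.
declare function askModel(userAnswer: string): Promise<string>;

// Speak a question aloud via the browser's built-in text-to-speech.
function speak(text: string): void {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Listen for one spoken answer and resolve with its transcript.
function listenOnce(): Promise<string> {
  return new Promise((resolve, reject) => {
    // Chrome still prefixes the constructor, hence the fallback.
    const SR =
      (window as any).SpeechRecognition ??
      (window as any).webkitSpeechRecognition;
    const recognition = new SR();
    recognition.lang = "en-US";
    recognition.onresult = (e: any) => resolve(e.results[0][0].transcript);
    recognition.onerror = (e: any) => reject(e.error);
    recognition.start();
  });
}

// One turn of the interview: ask, listen, generate the follow-up.
async function interviewTurn(question: string): Promise<string> {
  speak(question);
  const answer = await listenOnce();
  return askModel(answer);
}
```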
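For the synthesis step, the same LLM pattern extends across sessions: pool the transcripts and ask for themes in a single pass instead of summarizing interview by interview. A sketch under the same assumptions as the earlier guide-generation example (the `Transcript` shape and prompt wording are mine, not the platform’s):

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Hypothetical transcript shape; the real schema may differ.
interface Transcript {
  participantId: string;
  text: string;
}

// Ask the model for themes, gaps, and outlier insights across
// many sessions at once, rather than per interview.
async function synthesize(transcripts: Transcript[]): Promise<string> {
  const corpus = transcripts
    .map((t) => `--- Participant ${t.participantId} ---\n${t.text}`)
    .join("\n\n");
  const response = await client.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 2048,
    messages: [
      {
        role: "user",
        content:
          "Across the following interview transcripts, list recurring themes, " +
          "notable gaps, and insights that appear in only one or two sessions:\n\n" +
          corpus,
      },
    ],
  });
  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```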
I built the first iteration solo (architecture, structure, prompt engineering), tested it internally, and then collaborated with the team to polish the UI and add refinements.


The Impact
10x scale-up: Increased interviews from ~5 per month to 45–50 within two months.
Time savings: Eliminated hours of manual prep per cycle, freeing significant team capacity for deeper analysis.
Richer insights: Automated synthesis surfaced patterns free of moderator bias and uncovered previously hidden research gaps, directly shaping product direction.
Adoption: Both researchers and participants embraced the tool, with voice-based AI interviews delivering higher engagement and more natural feedback.

