Roark recently launched!

Launch YC: Roark - The Observability & Testing Platform for Voice AI

"Replay real production calls against your latest agent changes, catch failures, and track sentiment."
TL;DR: Roark is an observability and testing platform for Voice AI that shows you whether your agent meets its goals, tracks how customers feel, and lets you replay real calls against your latest changes.

If you’re building voice AI agents and want a faster, smarter way to test and improve them, the founders would love to connect! Email here or book a time here.

(Animated GIF: replay real calls without picking up the phone.)

Founded by James Zammit & Daniel Gauci

Team

The founders are engineers who have built and scaled complex systems at high-growth companies:

James Zammit (CEO) – Infra and AI engineer with 10+ years of experience. Previously at AngelList, where he worked on core infrastructure as the company scaled from $10B to $124B in assets under management and led the development of Relay, an AI-powered portfolio manager. Co-founded three startups, one of which partnered with Firebase and was showcased at Google I/O 2016.

Daniel Gauci (CTO) – Software engineer with 10+ years of experience. Previously at Akiflow (YC S20) as part of the mobile development team, helping the company reach $1.5M ARR and 10,000+ customers. Spent 7 years at Casumo, leading development of the mobile app used by millions of players and helping the company reach $50M+ ARR.

The Problem: Testing Voice AI Is Painfully Manual

Once a voice agent is live, teams have no easy way to test updates. Every time you tweak a prompt or adjust the logic, you have to call the bot manually and hope you catch issues before customers do.

  • Does the agent follow the right flow? You don’t know unless you re-run conversations by hand.
  • Did a change break something? You won’t find out until users complain.
  • How do customers actually experience the bot? Traditional testing tools only analyze text transcripts, missing tone, hesitation, or frustration.

Voice AI teams, especially in healthcare, legal, and customer support, need real-world validation for every change they ship. But existing testing tools rely on scripted test cases that don’t reflect real interactions, leading to blind spots and regressions.

The Solution

Roark lets you replay real production calls against your newest AI logic, so you can test changes before they go live. No more manually dialing your bot or relying on outdated scripted tests - get real-world validation instantly.

How It Works:

  1. Capture real-world calls: Automatically ingest production conversations from your existing voice AI setup (integrates seamlessly with VAPI, Retell, or custom APIs).
  2. Replay calls on your updated agent: The Roark system re-runs the same user inputs, sentiment, and tone against your latest agent, cloning the original caller’s voice for more realistic testing.
  3. Evaluate goal completion: Define key objectives (e.g., “Did the agent confirm insurance?”) and automatically flag failures or missteps (a conceptual sketch follows this list).
  4. Monitor sentiment & vocal cues: Detect frustration, long pauses, sighs, and hesitation - signals that text-based evaluations miss.
  5. Track performance with reports & dashboards: Visualize conversation flows, track drop-offs, and measure key metrics with Mixpanel-style analytics.
  6. Get real-time alerts: Set up custom monitoring for compliance violations, negative sentiment spikes, or repeated failures.
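
To make steps 2, 3, and 6 concrete, here is a minimal sketch of the workflow in Python. The post doesn't publish Roark's SDK, so every name below (Call, Turn, replay_call, evaluate_goals, sentiment_alert) is a hypothetical illustration of the workflow's shape, with a keyword check and a stub agent standing in for the real voice pipeline:

    # Conceptual sketch only: these names are hypothetical, not Roark's API.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Turn:
        speaker: str      # "caller" or "agent"
        text: str
        sentiment: float  # -1.0 (frustrated) .. +1.0 (happy), derived from audio

    @dataclass
    class Call:
        call_id: str
        turns: list[Turn] = field(default_factory=list)

    def replay_call(call: Call, agent: Callable[[str], str]) -> Call:
        """Re-run the original caller turns against a new agent version (step 2)."""
        replayed = Call(call_id=f"{call.call_id}-replay")
        for turn in call.turns:
            if turn.speaker != "caller":
                continue  # keep caller inputs, regenerate agent responses
            replayed.turns.append(turn)
            replayed.turns.append(Turn("agent", agent(turn.text), sentiment=0.0))
        return replayed

    def evaluate_goals(call: Call, goals: dict[str, Callable[[str], bool]]) -> dict[str, bool]:
        """Check each named objective against the agent's side of the transcript (step 3)."""
        transcript = " ".join(t.text.lower() for t in call.turns if t.speaker == "agent")
        return {name: check(transcript) for name, check in goals.items()}

    def sentiment_alert(call: Call, threshold: float = -0.5) -> bool:
        """Flag calls whose average caller sentiment falls below a threshold (step 6)."""
        scores = [t.sentiment for t in call.turns if t.speaker == "caller"]
        return bool(scores) and sum(scores) / len(scores) < threshold

    # Example: replay one production call against a stub agent and check one goal.
    production_call = Call("call-123", [
        Turn("caller", "Hi, I'd like to book a cleaning. I have Delta Dental.", -0.2),
    ])
    goals = {"confirmed_insurance": lambda t: "delta dental" in t or "insurance" in t}
    stub_agent = lambda prompt: "I can confirm your Delta Dental coverage and book that cleaning."

    replayed = replay_call(production_call, stub_agent)
    print(evaluate_goals(replayed, goals))   # {'confirmed_insurance': True}
    print(sentiment_alert(production_call))  # False

In a real replay, the caller turns would presumably be synthesized audio in the cloned voice rather than plain text, but the control flow is the same: replay the call, score the goals, then raise alerts on the results.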

Roark gives AI teams the same confidence in testing, iteration, and monitoring that software engineers have had for years with modern dev tools.

Check out the demo below!

https://youtu.be/eu8mo28LsTc?feature=shared

Why Roark Was Built

The founders first ran into this problem while building a voice agent for a dental clinic. Patients kept reporting issues: getting stuck in loops, failing to confirm insurance, or receiving irrelevant responses. But the only way to test fixes was to call the bot themselves or read through hundreds of transcripts, hoping to spot patterns. It was frustrating, slow, and unreliable.

After talking to other teams working on Voice AI, they realized this problem was universal - everyone was struggling to validate their AI’s performance efficiently. That’s when the team decided to build Roark.

Learn More

🌐 Visit roark.ai to learn more.

Try it out!

🤝 If your team is tired of manually testing voice AI updates and wants a faster, more reliable way to validate changes, email the founders here or book a demo here - the team would love for you to try out Roark.

👣 Follow Roark on LinkedIn & X.

Posted February 14, 2025 in Launch category