tl;dr
Ponder lets developers add an ultra-realistic Voice AI agent to the bottom right corner of their websites in one line of code.
It can interact with your UI, call functions, access APIs, and navigate, basically anything that can be wrapped in a JS function.
Users can talk to it as they would to a human, for use cases such as faster onboarding, customer support, bookings, data entry, or a hands-free UX.
You can also build your own UI for the voice agent using our React SDK.
The Problem
Voice AI models are having a moment. The ability to simply talk, as you would with a friend, opens up a higher-bandwidth way of communicating with software, and it fits perfectly with the rise of prompt-based and conversational workflows.
But to add a voice agent to your website, you have to build your own stack: the async hell of parallel websockets plus streaming model calls, voice activity detection, background noise suppression, speaker diarization, TTS, STT, language model integration, and streaming function calls, all while somehow keeping latency around 400-700 ms.
Similar services have made notable advances, but their stacks remain heavily optimized for telephony.
Web app-specific cases remain a challenge: handling UI state updates asynchronously, waiting for user action, controlling responses based on function calls, and interruption handling (the user takes an action midway while the AI is talking), all while maintaining a fluid, human-like conversational flow.
Solution: Meet Ponder
Using Ponder’s React SDK, you can add a voice agent to the bottom right corner of the screen. All you have to do is wrap your _app.js component in PonderProvider:
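Here is a minimal sketch of what that looks like in a Next.js app; the package path and the apiKey prop are assumptions for illustration, not the exact API (see our docs for the real import):

```tsx
// pages/_app.tsx
import type { AppProps } from "next/app";
// Import path is hypothetical; use the one from Ponder's docs.
import { PonderProvider } from "@ponder/react";

export default function MyApp({ Component, pageProps }: AppProps) {
  return (
    // Wrapping the whole app lets the Ponder widget render on every page.
    // The apiKey prop is an assumed config knob, shown for illustration.
    <PonderProvider apiKey={process.env.NEXT_PUBLIC_PONDER_API_KEY!}>
      <Component {...pageProps} />
    </PonderProvider>
  );
}
```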
With just this, the Ponder widget renders on your website, and users can talk to it.
After that, you dynamically control the context and the actions the agent can take on each page using the setActions and setInstructions hooks, anywhere in your app.
Actions can be JavaScript functions that are already part of your codebase: simply pass them to setActions along with a description for the agent (check out the docs for more details; a sketch follows below).
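A hedged sketch of that page-level wiring; the usePonder hook, the action object shape (name, description, handler), and the import path are illustrative assumptions, not the confirmed API:

```tsx
// pages/bookings.tsx (illustrative)
import { useEffect } from "react";
import { usePonder } from "@ponder/react"; // hypothetical import path

// An existing function from your codebase, unchanged.
async function bookAppointment(date: string, name: string) {
  await fetch("/api/bookings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ date, name }),
  });
}

export default function BookingsPage() {
  // Assumed hook exposing setActions / setInstructions.
  const { setActions, setInstructions } = usePonder();

  useEffect(() => {
    // Page-specific context for the agent.
    setInstructions(
      "You are on the bookings page. Help the user pick a date and book it."
    );
    // Expose the existing function along with a description for the agent.
    // The object shape here is an assumption for illustration.
    setActions([
      {
        name: "bookAppointment",
        description: "Book an appointment for the user on a given date.",
        handler: bookAppointment,
      },
    ]);
  }, [setActions, setInstructions]);

  return <main>{/* your bookings UI */}</main>;
}
```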
You can configure the agent on Ponder’s dashboard: choose a voice, an LLM, and the system prompt. If you want to attach external docs to the context, simply add curly-brace variables in the system prompt and pick a data source for each variable (currently Confluence and Google Docs are supported).
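For example, a system prompt with two curly-brace variables might look like this (the variable names are illustrative); in the dashboard, each variable is then mapped to a Confluence space or a Google Doc:

```text
You are a support agent for Acme.
Answer questions using our help center content: {support_docs}
Always follow the refund policy: {refund_policy}
```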
Ponder supports both Voice and Text modes, and messages populate in the Ponder widget as the conversation unfolds.
Who needs Ponder?
Our Ask 🙏:
If this sounds interesting, reach out to me at sarang@useponder.ai or try out Ponder today at useponder.ai 🚀