CAMLIN SPEECH

The Shared Speech Layer For Contact And Beyond

Camlin Speech handles recognition, transcription, prompting, validation, and structured capture across Camlin Contact and the wider platform. Multi-engine by design, it lets teams choose the right speech stack per channel, flow, or use case.
5 Engine Options: recognition stack choices
8 Recognition Modes: free-form to structured
16 Entity Types: validation-ready capture

[Diagram: Camlin Speech combines recognition and prompting in one layer — capture, validate, and respond — producing structured output with entity extraction and confidence-aware handling, shared by Contact, Architect, and API/Avatar surfaces.]
SHARED CAPABILITIES

Speech is the reusable layer beneath multiple products

Recognition, prompting, transcription, and structured capture live here so Contact and the wider platform do not have to solve the same speech problem in separate ways.

Multi-engine recognition

Choose the right recognition engine for the channel, journey, or environment instead of tying every use case to a single vendor path.

Prompting and response handling

Speech supports the prompt layer that Contact and other platform experiences use for voice turns, confirmations, and guided capture.

Structured capture and validation

Move from free-form recognition into validated business data with confidence-aware handling, entity capture, and guided re-prompt patterns.
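Confidence-aware handling with guided re-prompting can be sketched as a small decision function: accept above a threshold, re-prompt below it, and escalate after repeated misses. The function name, threshold, and attempt limit here are illustrative assumptions, not Camlin Speech behavior.

```python
# Hypothetical sketch of confidence-aware capture; threshold and
# attempt limit are placeholder values, not Camlin defaults.
def handle_result(text: str, confidence: float, attempts: int,
                  threshold: float = 0.75, max_attempts: int = 3):
    """Decide what to do with one recognition result."""
    if confidence >= threshold:
        return ("accept", text)    # validated capture proceeds
    if attempts + 1 >= max_attempts:
        return ("escalate", None)  # hand off rather than loop forever
    return ("reprompt", None)      # guided re-prompt for another try

print(handle_result("14 Oak Street", 0.92, attempts=0))
print(handle_result("unclear", 0.40, attempts=0))
print(handle_result("unclear", 0.40, attempts=2))
```

Keeping this decision in the shared layer is what lets every flow inherit the same re-prompt and escalation behavior instead of re-implementing it per journey.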

Architect-ready configuration

Speech settings can be applied where journeys are designed, then reused consistently when those journeys are compiled across runtime channels.

Channel support beyond voice

Speech supports Contact first, but it can also serve broader platform surfaces wherever recognition, prompting, or structured capture are needed.

One shared layer

Keeping speech shared avoids duplicated recognition logic and helps teams govern prompts, capture rules, and tuning decisions in one place.

PLATFORM USE

Speech serves Contact first and the broader platform next

This is a shared layer, not a competing front-end product. It is most visible through Contact, while still serving Architect-designed journeys and other voice-enabled surfaces.

Camlin Contact

Speech powers the recognition, prompting, and capture layer used inside the Contact operating model and the Voice runtime nested within it.

Camlin Architect

Architect defines how journeys should behave. Speech provides the reusable recognition and prompt capabilities those journeys rely on at runtime.

Avatar and other surfaces

Where voice or structured speech input matters outside Contact, the same shared layer can support visible assistants and broader platform experiences.

SPEECH BUILDING BLOCKS

The layer is specific enough to be useful and broad enough to be reusable

5 STT engines
Recognition options

Google, Deepgram, Azure, AWS Transcribe, and Whisper are all available through the shared speech layer.

• Per-flow choice
• Per-channel choice
• Unified handling model

8 recognition modes
Capture patterns

Free-form, structured, DTMF, grammar, and hybrid patterns help teams tune input collection to the moment rather than forcing one style everywhere.

• Guided capture
• Hybrid input paths
• Confidence-based re-prompting

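A hybrid input path means a caller can press keys or speak and the flow captures the same value either way. This sketch is an illustrative assumption about how such normalization could work; the event names and digit mapping are invented, not Camlin API.

```python
# Hypothetical sketch of a hybrid input path: DTMF key presses and
# spoken digits normalize to one captured digit string.
SPOKEN_DIGITS = {
    "zero": "0", "oh": "0", "one": "1", "two": "2", "three": "3",
    "four": "4", "five": "5", "six": "6", "seven": "7",
    "eight": "8", "nine": "9",
}

def normalize_digits(event_type: str, payload: str) -> str:
    """Collapse DTMF and speech events into the same digit string."""
    if event_type == "dtmf":
        # keep digits, drop * and # and any noise characters
        return "".join(ch for ch in payload if ch.isdigit())
    # speech: map spoken digit words, ignore anything else
    return "".join(SPOKEN_DIGITS.get(w, "") for w in payload.lower().split())

print(normalize_digits("dtmf", "1#2*3"))
print(normalize_digits("speech", "one two three"))
```

Both calls yield the same value, which is what lets downstream validation stay input-agnostic.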
16 entity types
Validation-ready output

Dates, phone numbers, addresses, identity fields, and other business data can be extracted and validated as structured values once speech has been captured.

• Business-ready extraction
• Validation-aware capture
• Reusable platform logic

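Validation-ready capture can be pictured as turning a recognized utterance into either a typed value or a structured failure the flow can act on. This is a minimal sketch under assumed rules (a loose E.164-style length check, one fixed date format); the function names and return shapes are hypothetical.

```python
# Hypothetical sketch of validation-ready entity capture: each
# function returns either a normalized value or a structured failure.
import re
from datetime import datetime

def capture_phone(raw: str) -> dict:
    """Strip non-digits and apply a loose E.164-style length check."""
    digits = re.sub(r"\D", "", raw)
    if 10 <= len(digits) <= 15:
        return {"valid": True, "value": digits}
    return {"valid": False, "reason": "length"}

def capture_date(raw: str) -> dict:
    """Parse one assumed spoken-date format, e.g. '3 March 2025'."""
    try:
        parsed = datetime.strptime(raw, "%d %B %Y").date()
        return {"valid": True, "value": parsed.isoformat()}
    except ValueError:
        return {"valid": False, "reason": "format"}

print(capture_phone("07700 900 123"))
print(capture_date("3 March 2025"))
print(capture_date("sometime next week"))
```

A structured failure, rather than a bare exception, is what lets the prompt layer choose a targeted re-prompt ("Which day was that?") instead of a generic retry.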
GET STARTED

Put the speech layer in the right place

Walk through how Speech supports Contact, where it connects to Architect, and how to keep voice capture and prompting consistent across the platform.