✨ inVibe AI
Our Approach to Artificial Intelligence
For over a decade, inVibe has leveraged AI to elevate healthcare market research. We were early adopters of large language models (LLMs), incorporating GPT-3 during its private beta back in 2021. Today, our AI ecosystem powers advanced speech analytics, thematic analysis, and insight generation at scale. Below, we outline how inVibe AI works, our model providers, our approach to data privacy and retention, how we evaluate AI outputs, and how we ensure responsible and ethical use of these cutting-edge tools.
What is inVibe AI?
inVibe AI is our suite of AI-driven features that analyze voice responses, transcripts, and market research data to speed up data analysis and uncover deeper insights. We blend human expertise with AI-powered models to deliver:
- Machine transcription of voice responses
- Acoustic analytics of voice responses
- Summaries and key findings of your data
- AI-assisted data exploration via AI chat
- AI-assisted presentations that communicate key insights in video form
From pinpointing key themes to surfacing nuanced sentiments, inVibe AI helps our clients act on findings from their inVibe study data.
Our Model Providers
We partner with trusted, industry-leading AI model providers.
1. OpenAI via their zero-retention API offering
2. Anthropic through Amazon Web Services (AWS) Bedrock
3. audEERING, a leader in voice analysis AI technology
OpenAI
- We use a variety of models and services provided by OpenAI. These include speech-to-text (machine transcription) of voice responses, embeddings of answer transcripts (semantic representations of answer transcript text), and many of the leading large language models available (o1, o3-mini, gpt-4o).
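To illustrate what an embedding-based "semantic representation" enables, the sketch below compares two hypothetical embedding vectors with cosine similarity. The vectors and dimensionality are invented for illustration; real transcript embeddings from a provider like OpenAI have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings of two answer transcripts.
transcript_a = [0.12, 0.85, 0.03, 0.40]
transcript_b = [0.10, 0.80, 0.05, 0.42]

similarity = cosine_similarity(transcript_a, transcript_b)
print(round(similarity, 3))  # close to 1.0, i.e. semantically similar answers
```

Similarity scores like this are what let thematically related answers be grouped or retrieved together, regardless of exact wording.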
Anthropic via AWS Bedrock
- The latest models in Anthropic's Claude family are accessed through Amazon Bedrock.
- By accessing Claude via AWS Bedrock, our input data remains within our AWS environment. Bedrock uses a private copy of the model, so data is not shared with Anthropic or other model providers and is not used to train or improve the base model.
audEERING
- audEERING is a leader in voice analytics technology. The company’s roots are in academic AI research, with over two decades of expertise in audio analysis.
- Creators of openSMILE, a popular open-source audio feature extraction toolkit used globally in voice analysis research.
Data Usage
Our AI tasks operate on inVibe market research study data. This includes:
1. Response Transcripts — The transcribed text of voice response answers.
2. Study Metadata — Information about the inVibe study such as target audience, research objectives, and fielding dates.
3. Answer Audio Recordings — The acoustic audio recordings of voice response answers.
Protecting personal information is paramount. Before any responses are published to our dashboard—and thus made available for AI processing—we redact all personal information. This means names, medical institutions, physician details, and any other unique identifiers are removed to ensure respondent privacy.
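As a minimal sketch of what rule-based redaction looks like, the snippet below replaces a few identifying patterns with neutral placeholders. The patterns and placeholders here are illustrative assumptions, not inVibe's actual pipeline, which is more thorough than a handful of regular expressions.

```python
import re

# Illustrative patterns only; a production pipeline would combine broader
# rules, NER models, and human review.
REDACTION_PATTERNS = [
    (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[PHYSICIAN]"),
    (re.compile(r"\b[A-Z][a-z]+ (?:Hospital|Clinic|Medical Center)\b"), "[INSTITUTION]"),
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace personally identifying spans with neutral placeholders."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

sample = "I saw Dr. Smith at Riverside Hospital; call 555-123-4567."
print(redact(sample))
# → "I saw [PHYSICIAN] at [INSTITUTION]; call [PHONE]."
```

Because redaction happens before responses reach the dashboard, every downstream AI task only ever sees the de-identified text.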
Data Retention
At inVibe, we adhere to a strict zero-retention policy for data sent to our LLM providers:
- OpenAI: Operates in a zero-retention mode under our enterprise agreement.
- AWS Bedrock: All input/output data is processed within our AWS environment. Neither Anthropic nor AWS retains or reuses the data for any purpose.
- By leveraging AWS Bedrock to access Anthropic’s Claude model, we ensure that no data leaves the AWS infrastructure. You benefit from AWS’s robust commitments to data privacy, encryption, and lifecycle management—further minimizing the risk of unintended data retention.
No provider has the right to store or use inVibe data for training their models.
Evaluating AI Tasks
We apply a structured, multi-step evaluation to ensure high-quality AI outputs. This is done via a combination of automated metrics and human review.
- Automated Metrics — When tasks can be evaluated automatically (e.g., against gold-standard labels or quantitative metrics), we rely on automated metrics to track AI task performance.
- Expert Review — Trained language experts independently review and compare AI-generated outputs side by side. The output is assessed for completeness, accuracy, integrity, organization, formatting, and citation.
See our blog post on validating LLM outputs for more information.
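The simplest form of an automated metric is agreement with gold-standard labels. The sketch below computes label accuracy over a set of hypothetical theme labels; the metric, labels, and function name are illustrative assumptions, not inVibe's actual evaluation suite.

```python
def label_accuracy(predicted, gold):
    """Fraction of responses where the AI-assigned theme matches the gold label."""
    if len(predicted) != len(gold):
        raise ValueError("predicted and gold label lists must align")
    matches = sum(p == g for p, g in zip(predicted, gold))
    return matches / len(gold)

# Hypothetical theme labels for five voice responses.
gold_labels = ["efficacy", "cost", "side effects", "cost", "access"]
ai_labels   = ["efficacy", "cost", "side effects", "access", "access"]

print(label_accuracy(ai_labels, gold_labels))  # → 0.8
```

Tracking a metric like this over time makes it possible to compare models and catch regressions before a new model or prompt is rolled out to users.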
Ethics & Responsible Use
inVibe is fully committed to using AI responsibly and ethically. This commitment is reflected in:
1. Data Privacy
- We redact mentions of personal information before responses are published to our dashboard or processed by any AI models.
- Our LLM providers operate under strict contractual terms prohibiting data retention or training using our data.
2. Transparency
- AI-based outputs are clearly labeled.
3. Human Oversight
- We run evaluations for each AI task to ensure it reaches an acceptable level of quality before we enable it.
- We measure the performance of new models and approaches to AI tasks before making them available to users.
4. User Autonomy
- Organizations have the option to opt out of generative AI features completely.
- Users can select the model that they prefer for each AI task that they request. Model selection and usage policies can be tailored to meet your unique security or compliance needs.