Qualified currently leverages generative AI (GPT-4) via a Large Language Model (LLM) provided by our subprocessor, OpenAI. Qualified has a copilot and an autopilot built into our conversations product. Additionally, we have generative capabilities to create customer-relevant content, such as offers.
We are incorporating OpenAI into our product offering to facilitate meaningful messaging that our customers can use when corresponding with their potential customers.
As of today, only the conversational product uses Qualified Generative AI; Signals uses Qualified Predictive AI.
Qualified does not currently train models; rather, we only tune the output. The output of the generative AI model is based on the marketing website and Pipeline Cloud data (data collected within the platform).
Information for each customer org is kept separate. When we make a call to OpenAI's completions API, we send the embeddings, question, and prompt, and receive an output. In essence, the data for a specific customer's org doesn't mix with data for other orgs to improve output. The data is stored in Qualified's normal databases, and the retention period ties into our existing data deletion policies.
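The per-org isolation described above can be sketched as follows. This is an illustrative example only, not Qualified's actual implementation: the names (`OrgStore`, `retrieve`, `build_prompt`) are hypothetical, and the embedding lookup stands in for the real vector search. The point it shows is that retrieval only ever reads the requesting org's store before the question and prompt are sent on.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class OrgStore:
    """Embeddings are scoped per customer org; one store never reads another's data."""
    def __init__(self, org_id):
        self.org_id = org_id
        self.chunks = []  # list of (embedding, text) pairs for this org only

    def add(self, embedding, text):
        self.chunks.append((embedding, text))

    def retrieve(self, query_embedding, k=2):
        """Return the k chunks from THIS org most similar to the query."""
        ranked = sorted(self.chunks,
                        key=lambda c: cosine(c[0], query_embedding),
                        reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(store, query_embedding, question):
    """Assemble the context + question payload sent to the completions API."""
    context = "\n".join(store.retrieve(query_embedding))
    # Only this org's context travels with the question and prompt.
    return f"Context:\n{context}\n\nQuestion: {question}"
```

In a real deployment the assembled prompt would be passed to OpenAI's completions API; because each `OrgStore` holds only one org's embeddings, no cross-org data can appear in the context.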
All data we collect in Qualified, including the marketing website, can be ingested by the AI model.
The data shared with the AI is based entirely on the question asked by the visitor on the website.
N/A - this type of data is not collected by Qualified.
For our conversational AI product, the risk of inaccurate predictions and recommendations is mitigated significantly by allowing the admin to provide guardrails to the system. For example, the bot will only recommend outcomes such as meetings or rep hand-off if they have been previously identified as relevant. Our predictive AI model, Signals, relies on normalization to ensure an individual activity does not have an outsized impact, reducing the potential impact of misattribution.
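The normalization idea above can be illustrated with a minimal sketch. This is not Signals' actual model; the cap-then-scale scheme and the function name `normalize_activities` are assumptions chosen to show how clipping keeps a single high-volume activity from dominating a score.

```python
def normalize_activities(counts, cap=5):
    """Clip each activity count at `cap`, then scale so shares sum to 1.

    Clipping bounds the influence of any one activity (e.g. hundreds of
    page views) relative to rarer, high-intent activities.
    """
    clipped = {k: min(v, cap) for k, v in counts.items()}
    total = sum(clipped.values())
    if total == 0:
        return {k: 0.0 for k in counts}
    return {k: v / total for k, v in clipped.items()}
```

With `cap=5`, 100 page views and 1 demo request become shares of 5/6 and 1/6 rather than 100/101 and 1/101, so the single noisy activity no longer swamps the signal.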
We currently have the following mitigation techniques in place:
We use a combination of in-house and publicly available models for our various AI applications. For our generative applications, we are currently using GPT-4, pre-trained by OpenAI.
Since our AI system is a combination of various models, a single accuracy metric is difficult to showcase. The appropriate metric may vary with factors such as dataset, context, and task diversity.
Our Conversational AI system is transparent: we are able to provide visibility into what happens from the time a question is submitted to when an answer is generated.
Additionally, our AI playground allows admins to see potential outputs and where they were derived from, and to provide feedback to guide the system.
We have in-house experts who provide visibility into the entire system, as well as rationale for decisions around any assumptions made to create the AI system.
GPT-4 by OpenAI, a Qualified sub-processor. Your data is NOT used to train OpenAI models, nor retained by OpenAI. Customer messages are encrypted in transit and at rest.
All the sources used to answer questions are provided by the customer. Currently, we use publicly available sources like corporate web properties, offline sources like value-proposition PDFs, and other knowledge hubs where internal information is kept.
We have multiple failure states to preserve an optimal user experience. In copilot features, we inform the rep that we are unable to provide a response. For our autopilot features, we either ask clarifying questions or direct the visitor toward support if the question cannot be answered.
In the event a handoff is warranted, we allow the prospect to ask to be connected to a human, in which case we use our existing routing logic to hand off to the next available rep. If a rep isn't online, we show the meeting booker to allow the prospect to book a meeting.
In all features, we allow the prospect to ask to be connected to a human, in which case we use our existing routing logic to hand off to the next available rep. If a rep isn't online, we show the meeting booker to allow the prospect to book a meeting.
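The handoff rule described above reduces to a simple fallback. This sketch is illustrative only; the function name `handoff` and its return shape are hypothetical, not Qualified's routing API.

```python
def handoff(reps_online):
    """Route to the next available rep; otherwise fall back to the meeting booker.

    `reps_online` is an ordered list of available reps per the existing
    routing logic; an empty list means no rep is online.
    """
    if reps_online:
        return ("rep", reps_online[0])
    return ("meeting_booker", None)
```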
Contextual understanding: the models we use are trained on large datasets, which helps them differentiate between similar terms.