
Investment in AI: Setting Parameters with LLMs

Artificial intelligence (AI) offers the potential to improve how healthcare listens to the customer voice – building strategy around not only predicting outcomes but prescribing them, alongside human listening. Investment in AI is high but often misunderstood because of limited knowledge of the nuanced metrics and data associated with AI, such as parameters.

Much of this technological innovation is growing alongside AI tools and solutions. Generative AI, as witnessed by the popularity of platforms like ChatGPT, Jasper.ai, Midjourney, or DALL-E, is now a household name across most industries. Generative AI uses machine learning (ML) algorithms to create new content such as text, code, or images, and its text-generating applications are typically built on large language models.

As the world (and the healthcare industry) understands this evolution, there is excitement around the potential impact – but businesses must remain aware of why and how data is leveraged when determining investment in AI. 

Driving Specification 

Parameters – the numerical values learned during training that determine a model's behavior and complexity – are the building blocks of large language models (LLMs). There are costs to not investing in these models, and costs that – without planning, strategy, and confidence – affect the customer and minimize the return.

LLMs are ML models that use deep learning techniques (like artificial neural networks) and natural language processing to understand existing content and generate new content, learning classifications from labels applied to large sets of data. The more data that is filtered into the model, the better it can generate a fitting response and the more reliably it can respond (or talk) to a prompt, such as a question or a set of rules-based classifiers. In short, LLMs teach the AI system how to talk and what to do.
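
To make this concrete, here is a minimal sketch of a pretrained language model classifying a snippet of customer conversation. It assumes the open-source Hugging Face transformers library and its default sentiment model; the transcript line is invented for illustration and is not Authenticx data or code.

```python
# Minimal sketch: a pretrained language model classifying a customer snippet.
# Assumes the open-source Hugging Face "transformers" library; the transcript
# text is invented for illustration.
from transformers import pipeline

# Load a general-purpose sentiment classifier built on a pretrained model.
classifier = pipeline("sentiment-analysis")

snippet = "I've called three times about my prescription and still have no answer."
print(classifier(snippet))
# Example output shape: [{'label': 'NEGATIVE', 'score': 0.99}]
```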

When viewing LLMs by parameter count, larger is usually seen as better. However, it is important to recognize the needs of the model – that is, the use cases it is being trained for – and how high the costs can be, in both speed and money, to run an unspecified, high-parameter model.

Putting Up Parameters 

Most of the time, an LLM's full depth of parameters won't be necessary if the use case calls for a highly specific function. Parameter adjustment based on prompt optimization can minimize the overfitting (when a model performs well on its training use cases but fails on new data) or underfitting (when it performs poorly on both) of an LLM to improve reliability and continued performance. In other words, when looking for an intentional set of data, the output of the model is only as good as its input.
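
As an illustration of the overfitting and underfitting distinction, the sketch below compares training and validation accuracy on synthetic data; the dataset and models are generic stand-ins, not an Authenticx pipeline.

```python
# Illustrative sketch: diagnosing under- and overfitting by comparing training
# and validation accuracy. Synthetic data; models are generic stand-ins.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for depth in (1, 5, None):  # too simple, balanced, unconstrained
    model = RandomForestClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    # Low accuracy on both sets suggests underfitting; a large gap between
    # training and validation accuracy suggests overfitting.
    print(f"max_depth={depth}: train={model.score(X_train, y_train):.2f}, "
          f"validation={model.score(X_val, y_val):.2f}")
```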

This can be demonstrated by the confusion matrix: a table that compares a model's predicted answers against the actual outcomes and is used to assess how the AI performs. Every prediction falls into one of four categories, illustrated in the sketch after this list:

  • True-positive 
  • False-positive 
  • False-negative 
  • True-negative 
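
The sketch below shows how those four counts are computed from a handful of predictions; the labels are invented for illustration (1 = issue present, 0 = issue absent).

```python
# Minimal sketch: building a confusion matrix from actual outcomes and model
# predictions. Labels are invented for illustration.
from sklearn.metrics import confusion_matrix

actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

# scikit-learn returns rows as actual classes and columns as predicted classes:
# [[true-negative, false-positive],
#  [false-negative, true-positive]]
tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print(f"TP={tp}  FP={fp}  FN={fn}  TN={tn}")
```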

Beyond the troubling reality of incorrect classifications, a poorly fitted model can cause data insights to be accepted that are actually analytical hallucinations, stagnate organizational customer experience (CX), or even inflate biases inherent in the model (an algorithmic echo of the Dunning-Kruger Effect). When heavy amounts of data are fed into a model with wide parameters, the probability rises that classifications are labeled as statistically true or false when they are, in reality, incorrectly predicted and misrepresented.

Fine-tuning the model – or lowering the parameter count intentionally – can place a safeguard against letting a false-positive slip through. Highly specialized data creates more opportunities to enhance the model's feedback analysis. The more efficiently the model runs, the more likely it is to surface unsupervised topics or issues that occur in the CX but aren't being actively monitored. Industry-specific LLMs, like the healthcare models Authenticx employs, listen at scale to unsolicited feedback with human evaluation to review and validate the machine outputs.
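
Fine-tuning happens inside the model, but a related safeguard against false positives can also be applied at prediction time: raising the decision threshold so only high-confidence classifications are accepted. The sketch below is illustrative only; the scores and thresholds are invented, and this is not the Authenticx implementation.

```python
# Illustrative sketch: a stricter decision threshold as a safeguard against
# false positives. Scores and thresholds are invented for illustration.
import numpy as np

scores = np.array([0.95, 0.62, 0.55, 0.88, 0.40, 0.71])  # model confidence per call

default_labels = scores >= 0.5   # permissive: borderline calls flagged as positive
strict_labels  = scores >= 0.8   # stricter: fewer false positives, some misses

print("default threshold:", default_labels.astype(int))
print("strict threshold: ", strict_labels.astype(int))
```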

Reliability and Cost 

Commercial or large-scale (non-industry-specific) LLMs are used widely – meaning, yes, they are trained on billions upon billions of data points. However, that data is collected from vast sources that can be unspecified, biased, and even untrue. This means the output is taking in unverified voices and information (not tied to specific use cases) to create your solution. According to Jon Krohn, Chief Data Scientist at Nebula, fine-tuning LLM mechanisms with more intentional training data can reduce model size without losing accuracy – saving time and money.

The cost of maintaining a model usually depends on the size of the servers and infrastructure needed to run it. To better effect change and impact the experience for customers and organizations, an LLM – even one with billions fewer parameters than a large-scale model – can be utilized as a trusted tool that increases reliability for specific industries while bringing down the actual dollar costs involved.
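
A back-of-the-envelope calculation shows why parameter count drives infrastructure cost. The sketch below assumes 16-bit (2-byte) weights and uses round, illustrative model sizes rather than any specific product.

```python
# Back-of-the-envelope sketch: memory needed just to hold model weights,
# assuming 2 bytes per parameter. Model sizes are illustrative round numbers.
def weight_memory_gb(parameters: float, bytes_per_param: int = 2) -> float:
    return parameters * bytes_per_param / 1e9

for name, params in [("500M-parameter specialized model", 500e6),
                     ("70B-parameter general-purpose model", 70e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights")
# ~1 GB versus ~140 GB of weights alone -- a very different class of server.
```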


Business Strategy Balanced with Automation 

A balance of automation is essential for effective and efficient AI performance. Unlocking reliable analysis needs to go beyond the sole implementation of an AI solution. This is where the voice of the customer (and how it guides organizational decision-making) gains its footing. Automated evaluation functions, like autoscoring, enable organizations to consider quantitative and qualitative insights to improve contact center performance, identify training and onboarding opportunities, and gain visibility into quality assurance (QA). The more contextual insights are derived from models built with the type of customer in mind – Authenticx uses specific healthcare case data per client needs – the better each subsequent output will be for a similar prompt, in turn allowing leaders and analysts to dive deeper into customer conversations.
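
As a simplified illustration of what an autoscoring check can look like, the sketch below scans a transcript for required phrases and flags gaps for QA review. The criteria, phrases, and transcript are hypothetical and are not the Authenticx rule set.

```python
# Hypothetical sketch of an autoscoring check: scan a call transcript for
# required phrases and flag gaps for QA review. Criteria are invented.
REQUIRED_PHRASES = {
    "recording disclosure":  "this call may be recorded",
    "identity verification": "verify your date of birth",
}

def autoscore(transcript: str) -> dict:
    text = transcript.lower()
    return {criterion: phrase in text for criterion, phrase in REQUIRED_PHRASES.items()}

transcript = "Thanks for calling. This call may be recorded for quality purposes."
print(autoscore(transcript))
# {'recording disclosure': True, 'identity verification': False}
```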

Automation running on prompted criteria can analyze data while alleviating the burden of resource constraints. This type of evaluative tool: 

  • Gives confidence to insights by utilizing an extensive and representative set of interaction data 
  • Assesses performance and conversation data to locate opportunities to improve 
  • Preserves time and capacity to empower team leaders to focus on growth and performance 
  • Improves understanding of brand perception and standards on compliance initiatives 
  • Enumerates agent interaction audits to comply with regulations and quality requirements, achieve metric goals, and generate CX insights 

The Return of a Valuable Investment 

Imagine an LLM as an employee – it is understandable to want to hire the best algorithm to effectively get the job done. Business objectives center on operational improvements and efficiencies, while customer outcomes rely on listening and responding to customer needs.

New innovations in AI, like industry-specific models, allow more data sources to be tapped with automation. However, the investment in the AI and in the training of the AI is critical; without it, employed models will fall short and objectives will not be met.

It’s [LLMs] all going to be trained or augmented with the conversations and data we’ve gotten historically, which is a huge event and it’s not general – it’s built out to serve healthcare-specific problems. We don’t see it as a silver bullet to solve all problems. 

Eric Prugh, Authenticx Chief Product Officer

The Authenticx Difference 

The differentiator between Authenticx AI models and commercial or large-scale ones is that Authenticx models can be more highly specified, using industry data in healthcare-related prompts. Training models with over 200 million healthcare conversations allows the AI to be harmonized with human evaluation. Having people with expertise in healthcare, social work, and education listening to these sensitive and distinct conversations helps verify and inform the AI models to support a nuanced and complex industry. Authenticx creates an ear-to-the-ground experience that ensures machine learning models and human evaluation are adept at understanding what is being said in conversations, delivering the right data set for contextual and actionable insights.

Authenticx employs LLMs and other AI models built specifically for healthcare.

  • The Conversation Summary model is an LLM comprised of over 500 million parameters built from healthcare conversations.  
  • The HIPAA model is a more specified LLM subset, having fewer parameters to craft more reliable insights for clients monitoring HIPAA-related interactions and topics.  
  • The Eddy Effect™ is the only commercially available customer friction model directly tied to ROI, helping identify patient frustration, training opportunities, and even therapy discontinuation. 

Other models are used to audit adverse events, Medicare and Medicaid interactions, sentiment, and agent quality. With more in the works, each will become more specified to better serve the healthcare industry. Our models will continue to be crafted with healthcare as a focus because the gaps in healthcare are ripe with nuanced challenges, regulations, and highly specified contexts.

To improve the effectiveness of LLMs, reliable metrics are needed to highlight areas where the model is underperforming. It's important to establish governance on how models are monitored and tuned. Without it, organizations would be acting on unverified insights that wouldn't reliably predict outcomes for their representative customer population.
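
One common way to surface underperformance is to track precision and recall per topic, so a single weak category stands out. The sketch below is illustrative only, not the Authenticx monitoring stack, and the topic labels are invented.

```python
# Illustrative sketch: per-topic precision and recall to flag where a model
# underperforms. Topic labels are invented for illustration.
from sklearn.metrics import classification_report

actual    = ["billing", "refill", "billing", "adverse_event", "refill", "billing"]
predicted = ["billing", "billing", "billing", "adverse_event", "refill", "refill"]

# A per-category report makes it obvious which topics need retuning.
print(classification_report(actual, predicted, zero_division=0))
```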

Building Blocks with Intent and Context 

Advancing deliberate quality in deep learning techniques like LLMs and in AI practices must take intentional precedence over selling the public on technology it misunderstands. Although bigger is usually assumed to mean better, with AI it can mean outputs sprawled across a confusion matrix and a diminishing return on the investment in AI. This demands a mature and sophisticated approach to developing AI – one that pushes the boundaries of generative LLMs to solve real problems for a given industry.

Organizations and industries that navigate LLMs and AI using unspecified data sets will find it challenging to unlock their top priorities. Commercial AI with billions of parameters can only uncover so much breadth and depth in customer insights. If an organization is in a specific industry, live in it. Failing to find a return that impacts change is ignoring the most important value of all: the voice of the customer.



About Authenticx

Authenticx was founded to analyze and activate customer interaction data at scale. Why? We wanted to reveal transformational opportunities in healthcare. We are on a mission to help humans understand humans. With a combined 100+ years of leadership experience in pharma, payer, and healthcare organizations, we know first-hand the challenges and opportunities that our clients face because we’ve been in your shoes.

Want to learn more? Contact us!

Or connect with us on social! LinkedIn | Facebook | Twitter | Instagram | YouTube
