Neil Murphy | 24 April 2024

AI For Business – Let’s get on with it…

Computer generated image illustrating natural language chatbot.

AI is here to stay. You’re already using it whether you like it or not. So how do you use the power of AI to work for you? Read on as we help you dip your toes in the AI ocean. 

One of the few compensations of increasingly grey hair is being able to see the latest “big thing” in tech in the context of all the previous “big things”. It helps you filter out some of the noise you find at this point in the Hype Cycle.

The noise around AI is currently deafening. It’s everywhere in the media, both mainstream and social. Commentators sit on opposing sides of the fence—AI is a force for good, helping scientists solve the latest medical/security/technological issues in next-to-no time; AI is a force for evil, threatening our safety, taking over the world, and plotting the destruction of humanity. 

It’s true that amazing discoveries are already being made with AI and that some individuals and organisations are using it for less-than-honest means. 

It’s also true that AI isn’t going anywhere and is already widely used in many industries. Anyone using a computer or smartphone will already have encountered AI without knowing it. 

The commentators mentioned earlier tend to focus on the global impact that AI could potentially have, but AI operates at many scales. And for all the discussion around AI becoming as intelligent as humans, right now that is purely theoretical: Artificial General Intelligence (AGI) simply doesn’t exist. What we do have, however, is discriminative AI and generative AI. More about those later.

At some point, yes, it may have a global-scale impact, but it can also be used at a much smaller scale, closer to the heart of your business strategy. Those applications are easy to implement and are available today. 

You could, if you choose, wait to see how other businesses—your competitors even—adopt AI, but that could be a costly mistake. The speed of development is rapid, and it could be very easy to be left behind in the blink of an eye. 

To help you navigate this new and expanding landscape, we’ve put together a summary of things that will help you understand what’s available, what it can do for you, and how we think you should start your journey. 

What are the key AI terms you should know?

At DeeperThanBlue, our focus is on Generative AI (or GenAI), but what does that mean, and how does it fit with the rest of the AI landscape?  

To understand how GenAI solutions operate, and to compare offerings between vendors, there are a few terms you should be aware of. 

Discriminative AI

Discriminative AI is the older technology, which focuses on learning the boundary between different classes or categories within the data. Rather than generating new data, discriminative models aim to classify or predict the label for an existing set of data.

For example, the classification of cancer in MRI and CT scans: algorithms are trained on large sets of labelled data so they can identify cancerous and non-cancerous tissue in new medical images.
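As a toy illustration of the discriminative idea, here is a sketch of a nearest-centroid classifier in Python. The feature values and labels are invented for the example; real medical-imaging models are deep neural networks trained on far richer data.

```python
# Toy discriminative model: a nearest-centroid classifier that learns
# a boundary between two labelled classes, then predicts the label of
# new, unseen data points.

def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    # The "model" is simply the mean feature vector per class.
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Predict the label whose centroid is closest to the features."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: distance(centroids[label]))

# Labelled training data: two crude image features per sample.
training = [
    ([0.9, 0.8], "cancerous"), ([0.8, 0.9], "cancerous"),
    ([0.1, 0.2], "non-cancerous"), ([0.2, 0.1], "non-cancerous"),
]
model = train_centroids(training)
print(classify(model, [0.85, 0.75]))   # → cancerous
```

The point of the sketch is the shape of the problem: labelled examples in, a predicted label out, with no new content generated.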

Generative AI

The key word is ‘Generative.’ This type of AI can generate new content—such as text, images, video, and code—by using patterns it has learned from training on vast, and often public, datasets with machine learning (ML) techniques.
Foundation Models

Foundation models are trained on extensive amounts of unstructured, unlabelled data, from various sources, using deep learning techniques. You may sometimes hear them called Base Models.

These are versatile models which can be used for various tasks or adapted to specific needs through fine-tuning. Foundation models are not limited to processing language; they can also handle tasks related to images, audio, and other types of data.

Examples include models like GPT (for text), DALL-E (for images), and CLIP (for both images and text). These foundation models require significant resources to create and are currently the preserve of massive technology companies.

Large Language Models (LLMs)

This can get a little confusing, in that LLMs are a class of foundation model, but one specifically designed for tasks involving understanding, generating and working with human language. These models are trained on extensive collections of text data, enabling them to perform tasks such as language translation, question answering, text generation, and more.
GPT (Generative Pre-trained Transformer)

A Generative Pre-trained Transformer is a prime example of an LLM. ChatGPT is the application through which you might interact with that LLM.
Fine-tuning

Fine-tuning is the process of adapting a pre-trained foundation model to perform better in a specific task. It requires a short period of training on a labelled data set, which is much smaller than the data set the model was initially trained on.

The additional training allows the model to learn and adapt to the nuances, terminology, and specific patterns found in the smaller data set. This would allow a specific “voice” to be applied to responses.

For example, consider a model to generate product descriptions for an e-commerce site. Starting with an LLM like GPT-3, we would pass in training data as a series of input–output pairs that allow the LLM to learn and respond appropriately. The fine-tuned model could then be used to generate new descriptions in line with the preferred use of language and tone.
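To make “input–output pairs” concrete, here is a sketch of what fine-tuning training data can look like. The products and descriptions are invented, and the JSONL prompt/completion shape is one common convention; the exact format depends on the vendor’s fine-tuning service.

```python
import json

# Illustrative fine-tuning data: each pair shows the model an input
# (the raw product facts) and the desired output (a description in
# the brand's preferred tone of voice).
pairs = [
    {"prompt": "Product: stainless steel water bottle, 750ml",
     "completion": "Stay refreshed on every adventure with our sleek, "
                   "leak-proof 750ml stainless steel bottle."},
    {"prompt": "Product: merino wool beanie, one size",
     "completion": "Wrap up in everyday luxury with our soft, "
                   "breathable merino wool beanie."},
]

# Fine-tuning services commonly accept one JSON object per line (JSONL).
with open("fine_tune_data.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

A few hundred such pairs is often enough to nudge the model’s tone; the heavy lifting was already done in the original pre-training.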

Adapting an LLM with fine-tuning might not be the best option; it can prove complex and expensive. A widely used alternative is RAG.

Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (RAG) is a process that allows additional reference information to be passed into an LLM as part of the user prompt.

For example, an internal document repository, or an external website could be added to a request to ensure that the answer returned to the user is based on the latest data. RAG also provides a mechanism to include specific external sources as reference material.
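To make the pattern concrete, here is a minimal Python sketch of the RAG flow. The documents and question are invented examples, and simple keyword overlap stands in for the vector-similarity search a production system would use.

```python
# Minimal RAG sketch: retrieve the most relevant reference document,
# then prepend it to the user's question before calling the LLM.

documents = [
    "Returns are accepted within 30 days with proof of purchase.",
    "Delivery to UK mainland addresses takes 2-3 working days.",
]

def retrieve(question, docs):
    """Pick the document sharing the most words with the question.
    Real systems use vector embeddings; this keeps the sketch runnable."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, docs):
    context = retrieve(question, docs)
    return (f"Answer using only this reference material:\n{context}\n\n"
            f"Question: {question}")

prompt = build_prompt("How long does delivery take?", documents)
# The augmented prompt is then sent to the LLM of your choice.
print(prompt)
```

Because the reference material travels with each request, the model can answer from up-to-date sources without any retraining.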

Hallucination

Hallucination is the term for an incorrect response from an LLM, delivered with such confidence that it appears to be true. Hallucinations can be caused by poor-quality training data, which may be out of date, biased or incomplete.

AI in the here and now

In time, AI will impact all aspects of your business, but that won’t happen overnight. Many of you will want to build confidence in the technology before deploying it more widely.   

We have identified two areas of business which would be suitable test spaces for implementing AI solutions — Expert Support and Customer Experience. Both deal with individuals engaging with your organisation and hoping to have a positive experience while completing a task.  

Information at the experts’ fingertips

You employ experts for their knowledge and experience, but even the most experienced expert needs a helping hand from time to time. GenAI is there to support decision-making. A well-engineered AI chatbot provides a very efficient mechanism for placing the necessary information into the hands of expert staff who need to respond quickly. AI could deliver a summary of prior customer interactions, identify the correct technical information required, or condense large documents into actionable steps.

Providing a seamless customer experience

The interaction between your customers and your organisation should feel natural. We’ve all felt the frustration of trying to engage with a company when all we want to do is speak to a human being. The benefit of AI here is that it can naturally triage enquiries and accelerate customers towards a satisfactory resolution, whether that is a well-defined commercial interaction, a support question or an information query.

The commercial benefit of a chatbot handling these customer interactions initially comes from avoiding the cost of additional support staff. However, an AI-driven chatbot also provides the opportunity to deliver a better level of service, rather than customers simply hanging on for a human and becoming increasingly frustrated.

A well-built AI-driven chatbot will answer calls quickly but will also gather information about the call, simplify and summarise the details ahead of putting the customer straight into the hands of the customer service agent. 

Obviously, the end-user experience needs to be positive, and the interaction with the organisation smooth and efficient. What’s required is a humanlike interaction that handles language and dialect and can adapt to less-than-precise queries. The chatbot should simplify and summarise information, removing the need for the user to leave the conversation.

Where the system hits a limitation, or the customer runs out of patience, then a handoff to a live agent or a voice call should be an easy option. Ideally, the user should regard the chatbot as a complementary addition, rather than an alternative.  

Of the two, we’d suggest starting with Expert Support, the key reason being that the users are your staff, rather than your customers.  


As with any IT system, we need to be able to trust the information it provides. The “Generative” nature of these AI tools is such that information may not always be presented in a consistent form. Type the same query into an app like ChatGPT and the wording of the response will be different each time. 

While responses may differ, we do need to be confident that the information we are served is accurate and that the questions we ask are being answered appropriately. 

Answers are created based on the data the Foundation Models and LLMs are trained on. We need to be sure that no bias has been introduced based on that data, that private or copyrighted data is not included in the source information, and that hate speech, profanity or discriminatory sentiment has not been included in training those models.  

It would be very embarrassing for a chatbot to start swearing, but it would be very expensive if the regulators decide that we breached GDPR, or if an AI-driven process is found to be discriminatory.

Transparency, training, and testing are key to instilling confidence. Some models have been trained on moderated and curated data, with hate speech, profanity and duplication removed, along with copyrighted data and any personal information. This may be considered overkill for many use cases, but it’s important to understand how the model was trained and to test it regularly to ensure it behaves appropriately.

Security & Governance

For many organisations, security is perhaps the area of AI that raises the most concerns. Many questions come to mind, and it is perfectly natural to be cautious. 

  • Can we be sure that interactions with chatbots and other AI systems are secure? 
  • If we type a search into an LLM like ChatGPT which relates to a business process, what happens to that information once we hit return? 
  • Will the information we send to the LLM be freely available to the public—or worse, the competition? 
  • Conversely, could our future interactions with the LLM be made better if the information was used for further training of the model? 
  • Could our competition also benefit from this additional training? 
  • If we use private data for prompt engineering, fine-tuning or RAG, can we be sure that data is secured?  

These questions will help to drive the design of any AI solution, and the selection of platform, model and deployment options, but you need to ask!

Other considerations relate to governance. Do we want all our information to be available to all our employees? There is the prospect of adopting Role-Based Access Control, but what’s the process for requesting and approving access? Will that process scale?  

Finally, what’s the audit process? Are we confident that everything and everyone is behaving how they should?  

These are all valid and reasonable questions, and for this reason, we recommend a staged approach to AI adoption, starting with manageable and easily monitorable processes, where deployments can be reliably tested and proven. Once confidence has grown, more ambitious projects can begin. 


“I’m interested in implementing AI into my business processes, but how much is it going to cost me?” 

A fair question, but the answer is, unfortunately, not straightforward.  

AI solutions often consist of several components, and the cost model can vary significantly between SaaS LLM providers and chatbot vendors. Pricing needs to be clear, transparent and predictable, and we need to make sure that the value delivered stays in line with costs.

Costs can come from many sources. The chatbot you use may have a licensing cost, and the LLM will charge based on the “tokens” used in requests, which may vary depending on where and how that LLM is hosted. If we fine-tune an LLM, the hosting costs will increase further, and using RAG may increase the transaction costs. And if you decide to host your own model, the initial set-up costs may be significant.
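A back-of-envelope calculation helps make token-based pricing tangible. All the prices and token counts below are made-up placeholders; real vendors publish their own per-token (usually per-1,000-token) rates.

```python
# Rough monthly cost model for a token-priced LLM service.
# Vendors typically price input (prompt) and output (response)
# tokens separately, per 1,000 tokens.

def monthly_llm_cost(queries_per_day, input_tokens, output_tokens,
                     price_in_per_1k, price_out_per_1k, days=30):
    per_query = (input_tokens / 1000 * price_in_per_1k
                 + output_tokens / 1000 * price_out_per_1k)
    return queries_per_day * days * per_query

# e.g. 500 chatbot queries a day. Note the large input-token count:
# RAG inflates it, because retrieved reference text is included in
# every prompt. (All figures below are illustrative, not real rates.)
cost = monthly_llm_cost(queries_per_day=500, input_tokens=1500,
                        output_tokens=300, price_in_per_1k=0.0005,
                        price_out_per_1k=0.0015)
print(f"£{cost:.2f} per month")   # → £18.00 per month
```

Plugging a vendor’s actual rates and your expected traffic into a model like this is a sensible first step before committing to a platform.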

Getting on with it. The DTB approach.

We are building our AI-driven Expert Support solutions using the IBM watsonx toolset. Other platforms are available, but the IBM offering addresses our immediate concerns around trust, risk and security. The cost model is predictable, and our delivery model is consistent.

We can deliver a proof of concept in 10 days, and a live production service in around a month with options to deliver via app or web browser.  

The initial user community in this case comprises internal stakeholders, but the model can quickly be adapted to external customers once the service is embedded, and the initial project is successful.  

If this introduction to AI has piqued your interest, we’d be happy to discuss your specific requirements. Give us a call or drop us a line, and together we’ll get AI working to your advantage. 
