By Philip Smith

I felt prompted to write this...




It's that time again. My analogy is that life is a bit like swimming up a river, from its mouth to its source(s). Sometimes we take a tributary that leads nowhere; we either need to swim back and take a different one, or forge a path through unknown territory to find the next.

I'm at one of those junctures (again), with a clear option ahead. I started out leaving school with no GCSEs worth talking about and not much of an idea of what I wanted to do. Since then, I have qualified in Wood Machining, Networking, Computer Systems and Maintenance, and Cloud Technologies. Time and again, I have blown away my own doubt and proved to myself that with work, determination and a willingness to overcome difficult challenges, I can succeed.

Ergo this post. I've been toying around with LLMs, ChatGPT and prompting for a while now. I'm by no means an expert, but I'm certainly someone who can see how they can be applied to the everyday challenges that organisations face. This concept couples nicely with a desire I've always had to improve the lives of the general population. I'm not talking about solving world hunger, although I'd give it a go if I could! I'm talking about improving the general everyday experiences people have with organisations that provide a service (or at least claim to).

Everywhere I have worked in the last five years has had the same goal: to offer frictionless and, dare I say, 'enjoyable' experiences. Invoking a thumbs up instead of a thumbs down. Although I'm going to massively oversimplify the challenge and solution here for brevity, I hope it gives the reader the same feeling I have: that we're on the brink of a technological advancement with AI that could achieve just that. Reliable, Stable, Accurate and Enjoyable experiences, whenever and wherever people need them.

I do not believe AI will, or should, replace the work people do; instead, with the right strategy, it can be used to enrich the right experiences, in the right ways. As my Dad used to say, "Use the right tool for the right job, Son". This applied to his priceless Japanese steel chisels, with which he could achieve amazing things; rather less so after I'd used them as a screwdriver.

The same is true of our existing 'automation', which is claimed to be 'natural' in understanding customers but falls short in many cases. It wasn't necessarily the right tool. Here are some examples:


  • It doesn't detect whether a person NEEDS to talk to another person for varying reasons, such as age, technical capability, life situations and other nuanced interactions.

  • It doesn't KNOW the customer's situation and has so far fallen short of identifying it through the models in use.

  • It is often slow to respond. I don't mean in terms of time to respond to the interaction, I mean time to evolve to the general changing needs of customers.

  • It has only 'shifted the problem elsewhere', which is so often the driving factor behind automation, usually resulting in unhappy customers not getting the solutions they need.

  • In many cases, across the different techniques used to service customers, it has poor accuracy and limited, sometimes lengthy, options. People often procrastinate before actually making contact, to avoid the emotional drain it causes them.

  • Finally, whilst the fascia has been designed for the interaction, whether it's voice, video, chat, bot or pigeon, the back ends that service the front ends have been neglected and cannot keep up with the demand.


Where am I going with this?

I've decided to try and help tackle the issue, independently, through a set of products currently in development and a consultancy offering. Keep an eye on my profile for information. For now though, I wanted to share the beginnings of what I started trying a few months back on OpenAI's Completions API. This isn't going to be an extensive 'how to prompt for Large Language Models (LLMs)' type post. Just a quick thought exercise to challenge the current 'Customer Experience Transformation' thinking that I have encountered across so many organisations.

Whilst I won't say that large-scale, big-bang, all-up-front design transformation can't succeed, I will say that I'm not wedded to the approach I've seen being taken across many, if not most, organisations.

This is a huge topic; here I only intend to cover a tiny portion: where to start. Experimentation is the key, and this entirely oversimplified example could help cut through the enormity of the challenge some organisations face. It is not intended to solve all of the challenges, such as data silos, contact cohorts or pre-existing product offerings which can't be changed due to some level of regulation. Instead, this is a simple "hello world" that could start as a seed and, from there, iterate into something that can start to solve the bigger challenges.

Here is an example of a prompt:

I am a chipper and helpful customer service bot, when I receive a sentence, I will attempt to return a compliment like in the example and confirm what the customer is asking me to help them with.

Example:
Sentence: I'd like to book a service for my car.

Answer: Hi, I hope you're having a great day, absolutely, what is your car registration?

Sentence: I have a broken tooth please can you arrange an appointment for me?

Answer:

Without much context, let's see what answer the OpenAI Completions API gives:

Absolutely, I'm glad you're taking care of your dental health! Can you please provide me with a few dates that work for you so I can book an appointment?
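For anyone curious how to try this themselves, a call to the (legacy) Completions API looks roughly like the sketch below. This is a minimal illustration, not the exact code I used: the model name, temperature and token limit are assumptions, and the API-calling function is injected so the prompt assembly can be exercised without a network call.

```python
# Minimal sketch of sending the few-shot prompt above to OpenAI's
# Completions API. Model name and sampling parameters are assumptions.

PROMPT_TEMPLATE = """\
I am a chipper and helpful customer service bot, when I receive a sentence, \
I will attempt to return a compliment like in the example and confirm what \
the customer is asking me to help them with.

Example:
Sentence: I'd like to book a service for my car.
Answer: Hi, I hope you're having a great day, absolutely, what is your car registration?

Sentence: {sentence}
Answer:"""


def build_prompt(sentence: str) -> str:
    """Drop the customer's sentence into the few-shot template."""
    return PROMPT_TEMPLATE.format(sentence=sentence)


def complete(sentence: str, create_fn) -> str:
    """Send the prompt; in real use create_fn would be
    openai.Completion.create (pre-1.0 SDK style), injected here
    so the function can be tested without a network call."""
    response = create_fn(
        model="text-davinci-003",  # assumption: any completions-capable model
        prompt=build_prompt(sentence),
        max_tokens=100,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()
```

In real use you'd set `openai.api_key` and call `complete(sentence, openai.Completion.create)`; injecting the function also makes it trivial to stub the model out while you iterate on the prompt itself.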

Granted, this isn't a very mature example, but with little to no context, OpenAI has responded with something useful. In a system that expects certain contexts, data structures and outcomes, however, this example won't go very far. Watch what happens when you start to apply a more declarative approach to the prompt, the schema and the categories it can respond with:

When I receive the sentence below, I will attempt to understand the correct category and product from the Category and Product list below. I use the Context list to ensure my responses are in line with the topic of the sentence. Then I will return JSON formatted responses like in the Example below, with the category and the probability. When I get a more complex sentence, I will return the second most probable category, as per Example2. If I cannot understand the Category or Product, or both, I will ask for more information, as shown in Example3.

I do not attempt to modify the original sentence. I only send a response based on whether I can categorise it or not.

Context: Banking, Financial Services, Mortgages, Insurance
Categories: Lost Card, Stolen Card, Mortgage Advice, Mortgage Balance, Account Balance, Change of Address, Make a Payment, Credit Card, Bereavement, Savings
Products: MyBank Instant Saver, MyBank First Mortgage, MyBank Black Credit Card, MyBank Platinum Credit Card, MyBank Current Account

Example: 
Sentence: I think my card fell down the back of the sofa
{"Categories": [{ "0": "Lost Card", "Probability": 0.93 }]}

Example2:
Sentence: I have a MyBank Black Credit Card but I can't find it anywhere!
{"Categories": [{ "0": "Lost Card", "Probability": 0.93 }, { "1": "Credit Card", "Probability": 0.89 }]}

Example3:
Sentence: so can you help me then?
{"Categories": [], "InsufficientInformation": "I'm sorry I wasn't able to understand your request, please could you try again?"}

Here we have a far more declarative prompt. In no way am I suggesting that this will work with 100% accuracy, and I have not tested it to the degree needed by a real customer-facing service. It does, however, in my opinion, spark intrigue about how it can be improved and possibly integrated into an organisation's first line of 'triage', needed to help identify why the customer is contacting them.

Let's see what this produces:

Sentence 1: Here, we can see the customer isn't providing the information needed. In this case we can ask the customer conversationally to provide more information. This can happen when a customer assumes they're speaking to a person. You can also see the response is now successfully formatted in JSON.


Sentence: Oh yes, Hi, can you help me please?
{"Categories": [], "Products": [], "InsufficientInformation": "I'm sorry I wasn't able to understand your request, please could you try again?"}

Sentence 2: Here the customer is telling the service about a bereavement; in special cases like this, it is super important to treat customers appropriately. In this case (and in my opinion), this customer would be sent straight to a real person, trained in dealing with sensitive situations. Notice how high the probability is.


Sentence: My husband had a Black Credit Card and he passed away last week.
{"Categories": [{"0": "Bereavement", "Probability": 0.98}], "Products":[{ "0": "MyBank Black Credit Card" }]}

Sentence 3: Here, the customer has given information about an experience they're having with buying a product. In this case, the system would not yet know which product, and could therefore be designed to pass this information to a 'secondary triage' that asks which MyBank product they're attempting to use. As with most 'automation', the system receiving the context needs the full intent before it can complete any action; in this case, the product is needed so the customer can be connected to the right service.


Sentence: I'm trying to buy a car but the payment has just been declined, can you help?
{"Categories": [{"0": "Make a Payment", "Probability": 0.93},{"1": "Account Balance", "Probability": 0.89}], "Products": [ ]}
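That triage flow can be sketched in a few lines. This is a hypothetical routing function built around the JSON schema from my prompt; the sensitive-category set, confidence floor and route labels are all illustrative assumptions, not a tested design:

```python
import json

SENSITIVE_CATEGORIES = {"Bereavement"}  # always route straight to a person
CONFIDENCE_FLOOR = 0.8                  # assumption: tune per service


def route(model_output: str) -> str:
    """Pick a next step from the model's JSON triage response."""
    data = json.loads(model_output)
    categories = data.get("Categories", [])
    products = data.get("Products", [])
    if not categories:
        return "ask-again"  # insufficient information from the customer
    top = categories[0]
    # The schema stores the category name under an index key ("0", "1", ...)
    name = next(v for k, v in top.items() if k != "Probability")
    if name in SENSITIVE_CATEGORIES:
        return "human-agent"  # sensitive cases skip automation entirely
    if top["Probability"] < CONFIDENCE_FLOOR:
        return "ask-again"  # not confident enough to act on
    if not products:
        return "secondary-triage"  # intent known, product still needed
    return "self-service"
```

Feeding it the declined-payment response above would return "secondary-triage": the intent is clear, but no product was identified, so the next step is to ask which one.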

Sentence 4: Ever one for a joke and no stranger to the streets, you can see here that it's done a fine job of translating a more colloquial tongue :). Granted, if this was via voice, your transcription service needs to be on point too.


Sentence: Yeh so sum eediot just muged me innit, man dem took ma fuckin papers and evrythin!
{"Categories": [{ "0": "Stolen Card", "Probability": 0.95 }]}

Disclaimer: Do not use this in your production systems, it is by no means meant to solve a complex set of options and challenges and probably wouldn't fall into 'certified usage' as part of any regulated industry!

More to the point however, you can see that even a basic prompt such as this, without much thought or construction (it was one of the first I tried), can unlock a massive amount of efficiency in triaging what a customer needs.

Integrating and iterating this further could lead to shorter wait and call times, effectively routing people to where they need to be in order to service them on the first contact. There are a plethora of benefits that could be realised through starting small, working with a single cohort or customer type to start, then broadening the capability after:


  • Quickly getting a customer to their intended outcome, reducing cost and increasing efficiency.

  • Reducing the load caused by contacts being routed to the wrong people, without the skillset needed to service the request.

  • Identifying special cases that need handling in a more delicate way.

  • Understanding the intent, even with the broadness of language usage and accent (provided your transcription service is good, if using voice).

  • Massively reducing the engineering effort required to create good journeys, services and experiences.

  • Mostly reliable data schemas that can be integrated into the systems you're used to via JSON - organisations should ensure they thoroughly test these, and operate exceptional error handling for those that fall through the cracks.

  • Simplification, both for the teams that build the services and their consumers. Let's face it, nobody likes to contact an organisation and have to jump through hoops - or learn shortcuts to bypass them.

  • Happier employees, happier customers - the benefit of which is hard to quantify.
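On the error-handling point in particular: because an LLM's JSON is only mostly reliable, anything headed for your downstream systems should be validated first. Here is a minimal sketch of what I mean, assuming the schema from the prompt above (the escalate-on-failure policy is my illustrative choice, not a prescription):

```python
import json


def parse_triage(raw: str):
    """Return the parsed triage response, or None when the model's
    output broke the schema and should be escalated to a person."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed JSON: one of the 'cracks' to handle
    categories = data.get("Categories")
    if not isinstance(categories, list):
        return None  # schema drift: "Categories" missing or wrong type
    for entry in categories:
        if not isinstance(entry, dict) or not isinstance(
            entry.get("Probability"), (int, float)
        ):
            return None  # each category needs a numeric probability
    return data
```

An empty "Categories" list still parses, matching Example3 above; only structurally broken responses fall through to the human fallback.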


As I stated, in no way is this meant to be an extensive tutorial on how to prompt. Neither is it an all-round solution or approach. It is intended only to plant a seed: that it is simple to begin experimenting (this literally cost less than $0.01), then to learn and iterate through options for how to begin solving more challenging issues.


For sure, integration is the harder part here, but OpenAI and other LLM providers have made it easier to do the first part, which so often causes no end of challenges and headaches for the larger, more complex organisations.

As I start wading through the next 'tributary' in my journey, if you find yourself in a similar situation, exploring the new buzz topics of Generative AI, LLMs and GPT-like technologies, I'd relish the chance to discuss them at length. If you enjoyed reading this post, don't hesitate to contact me via LinkedIn.
