It was in his 1950 paper, “Computing Machinery and Intelligence”, that Alan Turing first laid down the fundamentals of what has since become known as the ‘Turing test’. In simplified terms, a machine passes the Turing test when a human in conversation with it cannot reliably tell whether they are talking to a machine or to another person.
Created at MIT by Joseph Weizenbaum and released in 1966, ELIZA was the first conversational dialogue program claimed to satisfy the Turing test. Weizenbaum himself, however, never declared ELIZA to be intelligent; he considered it essentially deceptive, offering only the illusion of intelligence.
Almost 70 years later, we are seeing a significant rise in voice-activated technologies and digital assistants, with releases from Amazon, Google, Microsoft, Apple and Facebook.
Intelligence is the key factor
The rise in popularity of such technology among consumers, combined with significant advances in voice recognition, machine learning (ML) and neural networks (deep learning), has led companies to embrace these technologies for their potential business benefits. Examples include virtual assistants, digital FAQs and self-service portals that help customers resolve their queries. Without prior programming or contextual understanding, computers in dialogue still struggle with colloquialisms and invented language, which is why machine learning and neural networks are seen as the main route to improving these technologies.
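To make the colloquialism point concrete, here is a toy sketch contrasting rigid phrase matching with a more tolerant match. The intents and phrases are invented for illustration, and difflib stands in loosely for the statistical matching a real assistant would learn; this is not a production approach.

```python
# Toy sketch: why rigid keyword matching breaks on colloquialisms.
# The intents and phrases below are invented for illustration only;
# real assistants use learned language models, not difflib.
import difflib

KNOWN_PHRASES = {
    "reset my password": "intent_password_reset",
    "track my order": "intent_order_tracking",
    "cancel my subscription": "intent_cancellation",
}

def exact_match(utterance):
    # Fails the moment wording drifts from the scripted phrase.
    return KNOWN_PHRASES.get(utterance.lower())

def fuzzy_match(utterance):
    # A crude stand-in for statistical matching: tolerate some variation.
    candidates = difflib.get_close_matches(
        utterance.lower(), KNOWN_PHRASES.keys(), n=1, cutoff=0.6
    )
    return KNOWN_PHRASES[candidates[0]] if candidates else None

print(exact_match("where's my stuff"))        # None - rigid match fails
print(fuzzy_match("track my orders please"))  # intent_order_tracking
```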
AI, ML and neural networks have far-reaching future potential. At the same time, outputs that are merely probable, however much that probability can be refined towards certainty, are not what a company is aiming for when it implements a chatbot or digital assistant as part of its customer experience strategy. This points to where the real applications lie, and those applications depend on crunching large data sets.
The key driver behind all of this, however, is ‘intelligence’, which in human terms cannot always be reproduced by machine-driven, linear decision trees that rarely reflect the way real conversation develops.
So the need for human-to-human interaction isn’t going away just yet, and human representatives still form a significant and important escalation layer within the overall customer experience. Even so, many businesses have implemented chatbots as a first layer of support in an attempt to let customers digitally self-serve.
What do customers prefer?
Many companies treat chatbots and IVR as alternative technologies; another view holds that they are complementary, and I tend to agree. The input is essentially the same, language; only the delivery varies, text in one case and speech in the other, and there is clearly room for both.
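One way to picture the complementary view is a shared language-understanding layer with two front ends. This is a conceptual sketch only: the transcribe() step and the intent names are hypothetical placeholders, not any specific product’s API.

```python
# Conceptual sketch of chat and IVR as complementary front ends to the
# same language-understanding layer.

def transcribe(audio_bytes):
    # Placeholder for a speech-to-text service; assumed, not specified here.
    raise NotImplementedError("plug in a real speech-to-text engine")

def understand(utterance):
    # Shared language layer: both channels converge on the same logic.
    if "balance" in utterance.lower():
        return "intent_check_balance"
    return "intent_escalate_to_human"

def handle_chat_message(text):
    # Text channel: the utterance arrives ready to interpret.
    return understand(text)

def handle_ivr_call(audio_bytes):
    # Voice channel: same logic, with speech converted to text first.
    return understand(transcribe(audio_bytes))
```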
From an experience perspective, customers are not all the same; each has preferences, and good customer service is ultimately about embracing that variety. Where a company chooses not to cater for a preference, it should fully understand why, and the customer should understand what to expect from the outset. For example, let users know from the beginning that the chatbot is a machine, not a human. Failing to do this is one reason implementations of new technology fail; others include:
- It was not required in the first place and doesn’t solve an existing problem
- Any programming is only as thorough as the programmer
- Being non-linear is currently very complex to achieve cost effectively (but becoming less so)
- Sledgehammer to crack a nut – sometimes less is just more
- Re-inventing the wheel is never necessary!
AI, ML, neural networks and digital technologies are not beneficial merely by existing, but by how they are applied, which, as previously alluded to, makes implementation the key. Machine learning is about inputs and outputs with a layer of knowledge extraction in between, which is arguably where the learning is applied. It has often been said that the outputs are only as good as the inputs; without inputs there is no learning!
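A minimal sketch of that inputs–learning–outputs loop is below, using scikit-learn purely for illustration; the utterances and intent labels are invented. The point it demonstrates is the one above: the quality of the outputs is bounded by the quality and coverage of the inputs.

```python
# Minimal sketch of the inputs -> learning -> outputs loop.
# The training data is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Inputs: labelled example utterances.
utterances = ["reset my password", "forgot password", "track my order",
              "where is my order", "cancel my order", "close my account"]
intents    = ["password", "password", "tracking",
              "tracking", "cancel", "cancel"]

# The pipeline is the knowledge-extraction layer between inputs and outputs.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(utterances, intents)

# Outputs are only as good as the inputs: phrasing the model never saw
# may be classified poorly or not at all.
print(model.predict(["I forgot my password"]))
print(model.predict(["my parcel has not arrived"]))
```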
Postal Service and Machine Learning
For a real-world solution already provided by ML, here is an example that ties the abstract to the concrete in computing terms: how the postal service sorts mail is achieved through machine learning (although, as with any function, the computer is ultimately just making comparisons). [This example uses the Western alphabet and number system, A–Z and 0–9, but works the same way regardless.]
Having just typed them out, it is already clear that the computer has a concrete understanding of these symbols (and can output them in a variety of fonts). But what if it reads an address on one envelope written in block capitals by a child, and another scrawled in a hurry in joined-up handwriting, and so on?
The computer compares each symbol against its programmed understanding of the true symbol, and through an ongoing process of recognition and refinement it continually reduces its own margin of error. This is achieved through statistical probabilities: the machine not only compares an input ‘A’ against its understanding of the forms an ‘A’ can take, it also checks that the input is not any of the other letters or numbers, refining its understanding partly by a process of elimination.
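The “compare against every symbol, not just one” idea can be sketched as scoring an input against all classes and normalising the scores into probabilities. The feature vectors below are invented toy data, not real handwriting features, and the prototype matching is a stand-in for whatever model a real sorting system uses.

```python
# Conceptual sketch: score an input character against every known symbol
# (process of elimination), then normalise the scores into probabilities.
import math

# Toy "ideal" feature vectors for a few symbols (invented for illustration).
PROTOTYPES = {
    "A": [0.9, 0.1, 0.8],
    "B": [0.2, 0.9, 0.7],
    "8": [0.3, 0.8, 0.8],
}

def classify(features):
    # Higher score = closer to that symbol's prototype.
    scores = {
        label: -sum((f - p) ** 2 for f, p in zip(features, proto))
        for label, proto in PROTOTYPES.items()
    }
    # Softmax turns the scores into probabilities that sum to 1.
    total = sum(math.exp(s) for s in scores.values())
    return {label: math.exp(s) / total for label, s in scores.items()}

print(classify([0.85, 0.15, 0.75]))   # highest probability should be "A"
```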
Essentially our brains do the same thing when we read, particularly with bad handwriting. Instead of articulating everything that is eliminated, we focus on the output, and we can also apply context that the computer does not need (indeed, it thinks nothing of the handwriting at all). For UK post, this information is cross-referenced against the postcode system, which means the machine need not read the whole address; pre-defined logic can recognise the letter-and-number sequence of the postcode, again reducing the margin for error.
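That “pre-defined logic” can be as simple as checking the expected letter/number pattern. The pattern below is a deliberately simplified approximation of UK postcode structure; real postcode validation is more involved than this.

```python
# Simplified check for the UK postcode letter/number sequence.
# Illustrative only; real postcode validation has more rules than this.
import re

POSTCODE_PATTERN = re.compile(r"^[A-Z]{1,2}[0-9][A-Z0-9]?\s*[0-9][A-Z]{2}$")

def looks_like_postcode(text):
    return bool(POSTCODE_PATTERN.match(text.strip().upper()))

print(looks_like_postcode("SW1A 1AA"))   # True
print(looks_like_postcode("S3ATT 1AA"))  # False - breaks the expected sequence
```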
This is all very interesting and academic, but what does it have to do with customer experience?
As a customer, when I have a service issue my preference is simply that the issue be resolved, whether by a machine or by a human. Machine learning or AI is only as good as the solutions it can provide to problems it understands, which, as stated, requires large data sets (whether gathered over time or batch-loaded).
The prime differentiator of companies that offer superior customer experience is the way they employ and deploy knowledge across their entire business operations, whether in customer-facing self-service portals, via chatbots or in person over the phone. A centralised, unified distribution of knowledge across all channels reduces the margin for information error and offers transparency and consistency, which from a customer’s perspective reinforces CSAT and NPS.
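As a rough illustration of “one source of truth, many channels”, the sketch below serves the same knowledge article to a chatbot and to a human agent’s desktop. The article text and channel names are invented; this is a conceptual outline, not a description of any particular platform.

```python
# Conceptual sketch: a single knowledge source serving every channel,
# so the portal, the chatbot and the phone agent give the same answer.
KNOWLEDGE_BASE = {
    "returns_policy": "Items can be returned within 30 days with proof of purchase.",
}

def answer(article_id, channel):
    body = KNOWLEDGE_BASE[article_id]           # one source of truth
    if channel == "chatbot":
        return body                             # sent as-is in chat
    if channel == "agent_desktop":
        return "Suggested response: " + body    # surfaced to a human agent
    return body                                 # portal and other channels

print(answer("returns_policy", "chatbot"))
print(answer("returns_policy", "agent_desktop"))
```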
References: https://en.wikipedia.org/wiki/ELIZA
The author, Oliver Newton, is a Digital & Customer Experience expert at KMS Lighthouse.