If you are a tech fan or an IT professional, you know that artificial intelligence is one of the trendiest topics in tech right now: heated discussions, conflicting opinions, apocalyptic Hollywood-style scenarios, success stories of AI in the enterprise. In other words, everyone is talking about AI in every possible flavour. But have you ever asked yourself what exactly AI is? And what AI is not (yet)?
Well, in this article, we want to focus our attention not only on AI but also on cognitive computing and chatbots, IBM Watson chatbots specifically.
What is Artificial Intelligence?
John McCarthy first coined the term artificial intelligence in 1956 when he invited a group of researchers from a variety of disciplines including language simulation, neuron nets, complexity theory and more to a summer workshop called the Dartmouth Summer Research Project on Artificial Intelligence to discuss what would ultimately become the field of AI. At that time, the researchers came together to clarify and develop the concepts around “thinking machines”.
Today, dictionary definitions describe AI as a sub-field of computer science concerned with how machines can imitate human intelligence. The Oxford English Dictionary, for example, defines it as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
Whatever the definition, instead of talking about artificial intelligence, many people describe the current wave of AI innovation with terms like cognitive computing, smart applications, intelligent applications, predictive applications and so on. Terminology aside, artificial intelligence has become essential to, among other fields, information management, healthcare, life sciences, data analysis, digital transformation, security, smart building technologies and robotics.
What is the difference between AI and cognitive computing?
Cognitive computing might be closer to the idea of artificial intelligence. What’s the difference? As VDC Research IoT analyst Steve Hoffenberg explains, an AI and a cognitive computing system would approach a data-intensive task quite differently.
Imagine that both an AI and a cognitive system had to analyse a huge database of medical records and journal articles to determine treatment for a patient. “In an artificial intelligence system,” says Hoffenberg, “the system would have told the doctor which course of action to take based on its analysis. In cognitive computing, the system provides information to help the doctor decide.”
Cognitive systems are designed to solve problems the way humans solve problems: by thinking, reasoning and remembering. As Saffron Technology explains, this approach gives cognitive computing systems an advantage, allowing them to “learn and adapt as new data arrives” and to “explore and uncover things you would never know to ask about.” In other words, cognitive computing is the ability of computers to simulate and complement humans’ cognitive abilities in decision making. But cognitive computing is not responsible for making decisions for humans. Artificial intelligence, on the other hand, is not intended to mimic human thought processes, but to solve a problem through the best possible algorithm. AI is responsible for making decisions, thus minimizing the role of humans.
In this context, where do Chatbots fit in?
Chatbots are computer programs that mimic conversation with people using cognitive computing. They learn from human input; they are not able to make decisions or think by themselves.
A few years ago, when chatbots started to become popular, there was a lot of talk about what a chatbot actually was. Using natural language processing and machine learning techniques, some of the more advanced conversational applications tried to differentiate themselves from the competition by calling themselves “virtual assistants,” implying that they were more powerful than existing chatbots and able to cover a wider range of topics.
However, the market appeared not to care much about the different names, as long as chatbots or other applications solved the right problems. This is why many of these terms for bots became synonymous with one another. What they all have in common is that they can hold a conversation with you.
What is important to have in mind when talking about chatbots is that, based on the purpose they were created for, there are many types of chatbots that give different end-user experiences:
1. Support chatbots
Support chatbots are built to master a single domain, like knowledge about a company. Support chatbots need to have personality, multi-turn capability, and context awareness. They should be able to walk a user through any major business process and answer a wide range of FAQ-type questions. Speech is an optional feature rather than a necessity, since users are typically sitting at a desktop, ready to work out their solution.
2. Skills chatbots
Skills chatbots are typically more single-turn-type bots that do not require a lot of contextual awareness. They have set commands that are intended to make life easier: “Turn on my living room lights,” for example. Speech functionality is recommended for this type of chatbot. They should be able to follow commands quickly so that your users can multitask while engaging with the bot.
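The single-turn, command-driven behaviour described above can be sketched in a few lines. This is a minimal illustration, not a real product integration: the commands and the light-switching actions are made-up placeholders.

```python
# Minimal sketch of a single-turn "skills" bot: a fixed set of commands,
# no conversational context, no follow-up questions.
# The commands and actions below are hypothetical examples.

COMMANDS = {
    "turn on my living room lights": lambda: print("Living room lights: ON"),
    "turn off my living room lights": lambda: print("Living room lights: OFF"),
}

def handle(utterance: str) -> bool:
    """Execute a known command; return False if the command is unknown."""
    action = COMMANDS.get(utterance.strip().lower())
    if action is None:
        # No context awareness: anything outside the set commands fails.
        return False
    action()
    return True
```

Because each command is handled in a single turn, the bot can respond quickly and the user can keep multitasking, which is exactly why speech works well for this type of bot.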
3. Assistant chatbots
Assistant chatbots are more or less a middle ground between support and skills bots. They work best when they know a little bit about a variety of topics. Many people envision these bots will someday become navigators of all other bots. Want to pay a bill? Ask your assistant bot to talk to the support bot for your bank. Assistant chatbots need to be conversational and respond to anything while being as entertaining as possible. Siri is a good, current example. When building an assistant chatbot, it is important to make it as obvious as possible how the bot is trained. The range of questions a user might ask is large, so making sure you have adequate coverage is going to be the most difficult factor.
Our expert talks about chatbots and IBM Watson
Tim Van der Hoek, one of our developers, specializes in chatbots. His opinion about chatbots?
“Chatbots are dumber than what people think”
During a workshop on IBM Watson in Amsterdam this year, the chatbot expert explained that chatbots should not be considered artificial intelligence, since AI bots are able to learn by themselves, without human input. That is what makes chatbots different.
Specifically, IBM Watson chatbots need help and input from humans. A chatbot can only answer questions that have been entered in advance in the admin panel by an administrator of the chatbot.
When you build a bot, the admin has to create a “dialog”: a conversation flow within the chatbot that simulates a conversation with an end user. The admin has to predict what users will ask the bot and fill in the information linked to certain ‘triggers’. Any question that does not match the defined triggers will go unanswered. That is what makes IBM Watson chatbots different from AI, even though most people think chatbots are AI.
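The trigger mechanism described above can be sketched as a simple lookup. This is an illustrative toy, not IBM Watson's actual API: the triggers, responses, and matching logic are invented for the example, but they show why a question outside the predefined triggers goes unanswered.

```python
# Illustrative sketch of the admin-defined "dialog": each node pairs
# trigger words with a canned response. All content here is hypothetical.

DIALOG = [
    {"triggers": {"opening", "hours", "open"},
     "response": "We are open Monday to Friday, 9:00-17:00."},
    {"triggers": {"price", "cost", "pricing"},
     "response": "Our pricing plans start at EUR 10 per month."},
]

FALLBACK = "Sorry, I don't have an answer for that."

def answer(question: str) -> str:
    """Return the response of the first dialog node whose triggers match."""
    words = set(question.lower().replace("?", "").split())
    for node in DIALOG:
        if words & node["triggers"]:  # any trigger word appears in the question
            return node["response"]
    # No trigger matched: the bot cannot invent an answer on its own.
    return FALLBACK
```

The fallback line is the crux: unlike a system that learns by itself, this bot can only hand back what an administrator filled in beforehand.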
Nowadays, many businesses with support functions choose to create chatbots on their website, so they do not have to hire people to answer FAQs or other simple, common questions. Moreover, the enterprise use of AI to improve employee productivity is already a topic of discussion. According to Tim, chatbots too will evolve into AI with access to big data in the near future. But what will the consequences be when human input is no longer needed?