AI at your fingertips with Microsoft Azure Cognitive Services & Azure Machine Learning

Contributed by Stephane Eyskens | Azure MVP

Stephane Eyskens 

is a freelance Azure & Office 365 architect with an interest in Data Science & NLP in general, hence his attentive look at Azure Cognitive Services. He is a frequent speaker at local & international conferences. You can follow him on Twitter @stephaneeyskens and/or follow his blog at https://stephaneeyskens.wordpress.com/

Today, Bots, and more particularly Chatbots, are on everyone's lips! Why this buzz? The answer is very simple: AI has become mainstream thanks to vendors such as Microsoft, IBM and others. Chatbots make use of computational linguistics behind the scenes, which is not a new concept: Alan Turing was already working on it in the nineteen-fifties! So, what has changed in the meantime? Why have we suddenly reached a new paradigm? Resources & data are the answer: today, the amount of available information and our hardware capabilities have increased dramatically.

Data is everywhere and is key to any AI process! The same goes for natural languages, which evolve over time. The more data you gather (for instance, collecting new words as they appear), the greater the accuracy, since most of the algorithms used nowadays are based on probabilities (Hidden Markov models, Maximum Entropy, n-grams, etc.). As you might know, a rule of thumb in probability is to have a large sample of observations to make sure your models are reliable.
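To make the "more data, more reliable estimates" point concrete, here is a toy bigram sketch in Python (a plain illustration of probability-by-counting, not what ACS does internally): the probability of a word given its predecessor is just a ratio of counts, so the larger the sample, the more trustworthy the ratio.

    from collections import Counter

    # Toy corpus; in practice you would need millions of tokens.
    corpus = "i need an azure expert i need an office expert i need help".split()

    bigrams = Counter(zip(corpus, corpus[1:]))   # counts of word pairs
    unigrams = Counter(corpus)                   # counts of single words

    # P(w2 | w1) = count(w1, w2) / count(w1)
    def prob(w2, w1):
        return bigrams[(w1, w2)] / unigrams[w1]

    print(prob("an", "need"))    # 2/3: "need" is followed by "an" twice out of three
    print(prob("help", "need"))  # 1/3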

That said, when we read Hidden Markov model, MaxEnt, etc., it sounds a little scary at first, as the least we can say is that it's not so easy to find people who are proficient in these things. Guess what: it's not necessary anymore, because Azure Cognitive Services (which I'll refer to as ACS from now on) will take this burden off your shoulders!

ACS encompasses Microsoft LUIS (Language Understanding Intelligent Service), which shines at enabling end users (not even IT people) to create models that can be consumed by a Chatbot or any other kind of application. The purpose of LUIS is to detect intents and to capture entities; in NLP terms, the latter step is called NER (Named-Entity Recognition). The intents describe what a human being, expressing himself in natural language, wants to achieve, while the entities are keywords that are relevant to your application domain.

As an example, in the sentence "I'm looking for an Azure specialist", Azure would be an entity of type help_topic and the intent could be defined as expert_lookup. Once LUIS has extracted those for you, your backend API can act accordingly, for instance by looking in Yammer for people who master Azure, querying LinkedIn, etc.
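To give an idea of what consuming a LUIS model looks like, here is a minimal Python sketch against the v2 prediction endpoint (the region, app ID and subscription key are placeholders, and the response shape may evolve with the service):

    import requests

    # Placeholders: use your own region, app ID and subscription key.
    LUIS_URL = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"
    params = {
        "subscription-key": "<your-key>",
        "q": "I'm looking for an Azure specialist",
    }

    result = requests.get(LUIS_URL, params=params).json()

    # topScoringIntent holds the detected intent, entities the extracted keywords.
    print(result["topScoringIntent"]["intent"])        # e.g. expert_lookup
    for entity in result["entities"]:
        print(entity["type"], "->", entity["entity"])  # e.g. help_topic -> azure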

I have recently been participating in an interesting Chatbot project, and here are the lessons learnt I want to share:

The narrower the scope, the better!

If you envision developing a Chatbot that will be able to answer just about any question or problem, you'd better review your expectations, because Chatbots are not magic and AI isn't that smart after all! Sorry to sound like a killjoy, but that's simply the truth, and it's very easy to understand why: languages are full of ambiguity, and a given word might mean different things in different contexts. So, the broader your scope, the harder it becomes to deal with polysemy! A little example to illustrate this: what does the word crane mean? In a Chatbot intended for ornithologists, it's easy: it's a bird. In the construction sector, it's a machine. So, if you build a generic Chatbot, you will have to deal with such situations, which makes things much more complicated. Similarly, handling neosemy (a new meaning of an existing word, such as tablet) and homonyms becomes challenging as the scope gets broader.

On top of these basic examples, Microsoft LUIS comes with a limit on the number of intents one can define per model, as well as a limit on the number of entities. So, at the time of writing, it is in any case not meant for building agnostic models that can deal with any utterance (phrase).

Day 1 is only the beginning of the Chatbot’s education

Don't think the Chatbot will learn on its own! When designing a Chatbot, one must train Microsoft LUIS to understand the topics we deal with, but remember what I stated earlier: the more data, the better! So, don't expect your bot to be fully ready by day 1. Why is that? Simply because when it comes to natural language, people express the same thing in different ways, and you will not be able to anticipate all the language representations of the intents you try to cover. Of course, LUIS already does a good job of matching things automatically for you, but don't expect it to be 100% right all the time. So, a serious follow-up of the LUIS model(s) used by your Chatbot must be undertaken, to make sure you can adjust LUIS' auto-detected intents in case of misunderstanding and leverage LUIS' active learning feature.

Natural language is damn complex

On top of the basic examples I used earlier, things like irony, humor, etc. are damn complicated and can challenge any NLP system. Bottom line: manage end users' expectations about Chatbot capabilities and don't blindly believe what vendors tell you! Yes, things improve; yes, you can do more and more with less effort; but we're still far from having the so-called intelligent systems.

Other ACS services can help you improve the overall experience

  • Bing Spell Check may be used in combination with LUIS to automatically correct typos, and as you probably know, these are very common! (A minimal sketch follows this list.)
  • Bing Auto Suggest API enables auto-completion of user queries, thus maximizing the user's comfort.
  • Bing Web Search API can help you redirect users to online results, should your backend system have trouble finding an answer to a given question.
  • QnA Maker API makes it very easy to deal with existing or new FAQs that you'd expose through your Chatbot!
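As announced in the first bullet, here is a minimal Python sketch that runs a user utterance through Bing Spell Check before handing it over to LUIS (the endpoint version and key are assumptions, so check the current documentation):

    import requests

    # Placeholder subscription key; mode=proof asks for aggressive corrections.
    SPELL_URL = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck"
    headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}
    text = "how to instal onedrive on my smartphon"

    flagged = requests.get(
        SPELL_URL, headers=headers, params={"text": text, "mode": "proof"}
    ).json()["flaggedTokens"]

    # Apply the top suggestion for each flagged token (naive in-place replacement).
    for token in flagged:
        text = text.replace(token["token"], token["suggestions"][0]["suggestion"])

    print(text)  # corrected utterance, ready to be sent to LUIS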

More advanced services also come in handy when dealing with images, videos, etc.

What about Machine Learning?

Machine learning systems (such as Azure ML) are also very handy when it comes to text analytics. Their underlying probabilistic algorithms are also based on observations, and again, data is key. So before taking that route, you'd better make sure you have reliable data at your disposal. Of course, a lot of cleaning can be done at the ML level, but still, if you put garbage in, you're likely to get garbage out in return, meaning unrealistic probabilities! In the context of a Chatbot, ML can complement LUIS whenever you want to make predictions or detect anomalies based on historical data. Azure Machine Learning workflows are damn simple to use, and they offer the possibility of comparing the performance of various models at a glance. Moreover, should you want to customize your workflows further, you can insert R activities at any time, and once ready, convert your ML workflow into a ready-to-consume API.
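Once such a workflow is published, consuming it boils down to a plain REST call. The sketch below shows roughly what a request to a classic Azure ML Studio web service looks like in Python; the service URL, API key and column names are placeholders, so rely on the request sample your own service generates:

    import requests

    # Placeholders: the portal shows the exact URL and API key of your service.
    ML_URL = ("https://ussouthcentral.services.azureml.net/workspaces/<workspace-id>"
              "/services/<service-id>/execute?api-version=2.0&details=true")
    headers = {"Authorization": "Bearer <your-api-key>"}

    # Classic Azure ML Studio services expect named inputs with columns and rows.
    payload = {
        "Inputs": {
            "input1": {
                "ColumnNames": ["text"],
                "Values": [["I cannot install OneDrive on my smartphone"]],
            }
        },
        "GlobalParameters": {},
    }

    scores = requests.post(ML_URL, headers=headers, json=payload).json()
    print(scores["Results"]["output1"])  # scored labels/probabilities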

Real case

Now that I have recapitulated the pains & gains of designing and running Chatbots with Microsoft LUIS, let's focus on the higher-level technical aspects. I will highlight a solution I have been using for a Chatbot that had to deal with incident management and knowledge base articles. The idea was to prevent users from creating incidents whenever an answer could be found in the KB, as well as to help users find colleagues who could assist them whenever needed.

In a nutshell, these were the components leveraged by this Chatbot:

  • Microsoft Bot Framework to develop the bot itself and leverage its multi-channel capabilities.
  • Azure Active Directory to make sure the Chatbot could only be accessed by authorized people.
  • Microsoft LUIS as a semantic analysis layer, to understand user queries and to extract entities.
  • A custom NLP API to convert the original user message into a SharePoint query (as the KB articles were stored in SPO). After some internal testing, we realized that SPO was a better option than QnA Maker (for the time being), as it appeared to be more accurate. However, while QnA Maker is very easy to set up, translating user statements dynamically into a SharePoint search query adds a serious layer of extra complexity, since I wanted to make sure not to send noisy words to the search engine.
  • Some other APIs to integrate with the incident management system.
  • Yammer for the "who's who" and "who knows what" part.

Microsoft Bot Framework

This framework is really interesting, especially for its multi-channel aspect. At the time of writing, you have out-of-the-box connectivity to roughly 20 channels (Facebook, Skype, Skype for Business, Teams, Web, etc.), and you can use the Direct Line channel to build your own channel or to let non-humans interact with your Chatbot. For instance, you might want to integrate your Chatbot with other applications, which can leverage the Direct Line channel to interact with it. The framework is mostly in charge of handling conversations and comes with rendering components such as buttons, carousels, cards, etc.
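To show what letting non-humans talk to the bot means in practice, here is a minimal Python sketch against the Direct Line v3 REST API (the secret and the client id are placeholders; in production you would exchange the secret for a short-lived token):

    import requests

    BASE = "https://directline.botframework.com/v3/directline"
    headers = {"Authorization": "Bearer <your-direct-line-secret>"}

    # Open a new conversation with the bot.
    conv = requests.post(f"{BASE}/conversations", headers=headers).json()
    conv_id = conv["conversationId"]

    # Send a message activity on behalf of a (non-human) client application.
    activity = {"type": "message", "from": {"id": "monitoring-app"},
                "text": "I'm looking for an Azure specialist"}
    requests.post(f"{BASE}/conversations/{conv_id}/activities",
                  headers=headers, json=activity)

    # Poll the conversation for the bot's replies.
    replies = requests.get(f"{BASE}/conversations/{conv_id}/activities",
                           headers=headers).json()
    for a in replies["activities"]:
        print(a["from"]["id"], ":", a.get("text"))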

QnA Maker vs SharePoint

QnA Maker is great because you can implement a knowledge base system very quickly. However, in the tests we performed against version 1, we got too many unsatisfactory answers. QnA Maker is a black box: you don't know exactly what's going on, and the biggest gap, IMHO, is the fact that you can't tag questions. Because of this, we moved to SPO in order to map LUIS-extracted entities onto tags and get more accurate results.

We found QnA Maker too noisy, but that was still V1; Microsoft is already at V2, so things will most likely improve over time, and it's worthwhile to check whether it has improved in the meantime.
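For reference, querying a QnA Maker knowledge base is a single REST call. Here is a minimal Python sketch against the v2 generateAnswer endpoint (knowledge base ID and key are placeholders, and the endpoint has changed across versions):

    import requests

    # Placeholders: knowledge base ID and subscription key from the QnA Maker portal.
    QNA_URL = ("https://westus.api.cognitive.microsoft.com/qnamaker/v2.0"
               "/knowledgebases/<kb-id>/generateAnswer")
    headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}

    answers = requests.post(
        QNA_URL, headers=headers,
        json={"question": "How to install OneDrive on a smartphone?", "top": 3},
    ).json()["answers"]

    # Each answer comes back with a confidence score but, as noted above,
    # there is no way to constrain the matching with tags.
    for a in answers:
        print(a["score"], a["answer"])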

So, the idea was to transform the original user question into a SharePoint query:

How to install OneDrive on a smartphone?

which results in something like this:
tags:onedrive AND (QnAQuestionOWSMTXT:(install NEAR onedrive NEAR smartphone)) OR (QnAAnswerOWSMTXT:(install NEAR onedrive NEAR smartphone)) XRANK(cb=100) QnAQuestionOWSMTXT:(install NEAR onedrive)

Basically, we force SharePoint to search only questions tagged with OneDrive, and to search both questions & answers that contain the *most important words*, while prioritizing the question field over the answer one (that is what the XRANK clause does).

As you can see, a POS-tagging NLP operation is performed to isolate the most important words (install, OneDrive, smartphone) and to get rid of the noise (how, to, on, a). This cleaning is necessary because SharePoint would otherwise take the noisy words into account and be too silent, returning too few results. On top of that POS-tagging, I perform other operations, such as lemmatization, and make use of an alias dictionary. For instance, in our context, words like SharePoint and Collaboration Site refer to the same thing and are stored in the dictionary. Hence, again, the importance of having a limited scope to limit the maintenance burden. A minimal sketch of this query-building step follows.
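The NLP API used in the project is custom, but the query-building step can be approximated with a minimal Python sketch based on NLTK (the kept POS tags, the alias dictionary and the managed property names are assumptions derived from the example above, not the actual implementation):

    import nltk

    # One-time downloads for the tokenizer and the POS tagger.
    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    ALIASES = {"spo": "sharepoint"}  # hypothetical single-word alias dictionary

    def build_kql(question, tag):
        # Keep nouns and verbs, drop the noise (how, to, on, a, ...).
        tokens = nltk.pos_tag(nltk.word_tokenize(question.lower()))
        words = [w for w, pos in tokens if pos.startswith(("NN", "VB"))]
        words = [ALIASES.get(w, w) for w in words]
        near = " NEAR ".join(words)
        # Search tagged questions & answers, boosting matches in the question field.
        return (f"tags:{tag} AND (QnAQuestionOWSMTXT:({near})) "
                f"OR (QnAAnswerOWSMTXT:({near})) "
                f"XRANK(cb=100) QnAQuestionOWSMTXT:({near})")

    print(build_kql("How to install OneDrive on a smartphone?", "onedrive"))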

Bottom Line

Chatbots are not magic, AI is not magic, but today's systems are more and more efficient at understanding human beings, thanks to the amount of information we have and to our incredible computing resources. I focused on Chatbots, but most of the concepts depicted in this short article apply to other kinds of AI systems.