
Wednesday, 30 January 2019

Updating WinForms with WPF/XAML… Adding MVVM

The intent of this blog entry is to provide some guidance for WinForms developers wanting to migrate their apps to a more modern application architecture without throwing the existing app out and starting over completely.  I think there is also some help here for XAML developers wondering how to make use of their spectacular UX design skills in updating an existing WinForms application (again without throwing out everything).

I hope to have an entry for each step in the process and you can feel free to jump in and out as you wish.  Once you have the basics, you might not care about adding MVVM but it’s here for you should you need it.

Updating WinForms with WPF/XAML
  1. Getting Started
  2. Adding MVVM
  3. Adding Commands
  4. Messaging between Technologies
  5. XAML Islands – Where we want to be!

3rd Party Libraries

Ok, so now that we have our Hello World working, it’s time to start building out some of the smarts behind it.  We’ll take the simplest approach and use a great 3rd party library, the MVVM Light Toolkit.  This will quickly get us up and running with full separation of our UX and code.

We are starting from the conclusion of the last blog entry, the Basics.  You should already have a simple WinForms application with a WPF user control displaying the ubiquitous “Hello World” text.

So, to begin we are going to use NuGet to load the MVVM Light library.  You can get the MVVM Light libs either through the NuGet package manager or the package manager console.

To get it through the package manager follow these instructions:

Package Manager: Search for “MvvmLightLibs” from the “Browse” tab and install that library.  We install only the libs so we don’t pull in a bunch of bits we don’t need in a WinForms app.

OR

Console: PM> Install-Package MvvmLightLibs

The View Model

There are plenty of definitions of MVVM out there so I won’t bother you with those details.  Suffice it to say that we want a View that hosts our WPF/XAML control and a view model where we’ll handle all the code.  The last piece, the model, we’ll squeeze in later; it just contains the data definitions that we will, hopefully, connect to our UX.

The view model will basically contain all the properties and methods that will be bound to our view.  For example, if we have two pieces of information we need entered, first and last name, we would have a first name and a last name property.  The catch is that we need to make sure those properties properly communicate with the View.  Fortunately MVVM Light gives us all those methods pre-canned and ready to rock.  Under the covers there is an interface called INotifyPropertyChanged that is monitored by bound controls in XAML.  The included library implements that interface for us in a very simple way.
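
For anyone curious what that buys you, here is a rough sketch of the same sort of property written by hand against INotifyPropertyChanged with no library help; the class name is purely illustrative.  MVVM Light’s ViewModelBase and its Set helper wrap exactly this plumbing so you don’t have to repeat it for every property.

    // A sketch of the hand-rolled version of a bound property (no MVVM Light).
    // The class and property names are illustrative only.
    using System.ComponentModel;

    public class PlainViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private string helloVar;
        public string HelloVar
        {
            get { return helloVar; }
            set
            {
                if (helloVar == value) return;
                helloVar = value;
                // Tell any bound XAML controls that the value changed.
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(HelloVar)));
            }
        }
    }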

Tutorial

  1. Right click on the ViewModels folder (created previously) and select Add/Class
  2. For consistency let’s call the class “UserControl1VM”
  3. Make the class “public”
  4. Add “ViewModelBase” as the inherited class (it’s from MVVM Light and will give us all the bits we need to make use of INotifyPropertyChanged)
  5. I would also add a public constructor for this class called “public UserControl1VM()” so we have a place to initialize data or properties if we need to.
  6. public class UserControl1VM : ViewModelBase
    {
         public UserControl1VM()
         {
         }
    }
  7. Add one property to our VM. We’ll call it HelloVar.  We’ll do it as a full property with a backing field, which gives us more control over notifications and the data.  For the “Set” we use the Set helper from ViewModelBase, which assigns the value and raises the change notification for us:
  8. private string helloVar;
    public string HelloVar
    {
        get => helloVar;
        set => Set(ref helloVar, value);
    }

  9. Add a sort of pre-set for HelloVar to the constructor by adding the statement:

    HelloVar = "Hello World Bound";
  10. That’s pretty much it for the simple data of our ViewModel.  All we need to do is set the view model as the DataContext of the View and then bind to the property.
  11. Switch to the UserControl1.xaml.cs code behind for our user control.
  12. In the constructor after the InitializeComponent() method add
    DataContext = new UserControl1VM();
  13. If not prompted, add a using statement for “WindowsFormsApp1.ViewModels”.  Your namespace may vary depending on what you named things.
  14. All you need to do now is bind our TextBlock to the HelloVar property in our view model.  I would also suggest, for fun, that you change the TextBlock to a TextBox (it lets you edit the value).

     <TextBox Text="{Binding HelloVar, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"/>
If you now start the application you should find it starts with the text you placed in the view model constructor.  You can put a breakpoint on the setter for the HelloVar property and try changing the value in the application.  You’ll see that as soon as you change the value in the TextBox, the setter breakpoint is hit.


You can now add as many properties as you like to your view model and “go to town” on the binding work.  You can use the view model to load data, save it out, and do all the manipulations you need.  In the next blog I’ll show you how to add commands so you can bind to business logic, like saving data.
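
To give a feel for where that goes, here is a hypothetical view model with the first and last name properties mentioned earlier and a spot to load initial data; the names and the LoadDefaults method are mine for illustration, not part of the tutorial.

    // Hypothetical example: a view model with two bound properties and a place
    // to load initial data. All names here are illustrative.
    using GalaSoft.MvvmLight;

    public class PersonVM : ViewModelBase
    {
        public PersonVM()
        {
            LoadDefaults();
        }

        private string firstName;
        public string FirstName
        {
            get => firstName;
            set => Set(ref firstName, value);
        }

        private string lastName;
        public string LastName
        {
            get => lastName;
            set => Set(ref lastName, value);
        }

        private void LoadDefaults()
        {
            // In a real app this might come from a database or a service call.
            FirstName = "Jane";
            LastName = "Doe";
        }
    }
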
Next –> Updating WinForms with WPF… Adding Commands.

Wednesday, 23 January 2019

Updating WinForms with WPF… Getting Started

The intent of this blog entry is to provide some guidance for WinForms developers wanting to migrate their apps to a more modern application architecture without throwing the existing app out and starting over completely.  I think there is also some help here for XAML developers wondering how to make use of their spectacular UX design skills in updating an existing WinForms application (again without throwing out everything).

I hope to have an entry for each step in the process and you can feel free to jump in and out as you wish.  Once you have the basics, you might not care about adding MVVM but it’s here for you should you need it.

Updating WinForms with WPF/XAML
  1. Getting Started
  2. Adding MVVM
  3. Adding Commands
  4. Messaging between Technologies
  5. XAML Islands – Where we want to be!

XAML Islands

Microsoft recently announced XAML Islands at Microsoft Build 2018.  This nifty technology lets you essentially embed UWP XAML inside WinForms and WPF applications.  It is really wonderful technology that lets you bring the Fluent Design language to WinForms and WPF, and it lets you bring design patterns like MVVM to WinForms.  This is truly awesome stuff, but there is one caveat: your newly updated app with XAML Islands will only install and run on systems running Windows 10 with .NET Core 3.  That is totally understandable, but it means you simply cannot bring Fluent Design to a WinForms application that is primarily being used in a corporate Windows 7/8 environment, or even a Windows 10 environment that hasn’t been brought forward by your IT department yet, due to those operating system limitations.

But there is a short term solution that will get you on the right track in WinForms.  It will let you start creating XAML pages that have full separation of UI and code using design patterns like MVVM with little or no trouble.  Once you have these pieces in place it becomes trivial to move forward to full Fluent Design when your Windows platform catches up with your code.

Adding WPF/XAML

In this blog I’m going to discuss how you can add WPF controls to your existing WinForms application and what benefits you can gain from doing so.  Not only will your UI gain a more modern look and feel, but your data and your views will be more cleanly separated yet work together with much less code.

I have a customer with a legacy WinForms (VB/C# mix) application that needed updating.  The code had been in development for close to a decade and was becoming increasingly difficult to maintain.  When we discovered that one VB.net WinForm needed to be re-engineered, we wanted to implement it using a more modern WPF (XAML)/MVVM pattern.  There were several reasons behind that preference:

First, they want to eventually move the application to Windows 10/UWP, but it’s such a complicated application that moving it all at once would be prohibitively complicated and expensive.  However, the one form that has to be redone can be replaced now.

Second, most, if not all, of their desktop systems are now running Windows 10, so we are not restricted in what we do by older operating systems.  I do want to stress the “most, not all”, which prohibits us from using XAML Islands; they are in preview for insiders as of this writing and not yet practical in a production environment.

Lastly, while the preference was to jump to UWP, XAML Islands only support a small number of controls today (with more on their way) and are still in “Developer Preview”.  We might make use of them if they are relevant, but the four controls currently in preview, the WebView, InkCanvas, InkToolbar and MediaPlayerElement, were not something we needed for this particular application (although the Ink controls might be soon, once the hardware investment in ink-enabled screens happens).

So WPF it was.  Adding WPF to a Windows Form is surprisingly simple.  The big thing is to carefully consider what exactly you are trying to accomplish.  So let’s list out the goals.
  1. Use XAML to define the UX so it has a more modern feel and can be moved forward to UWP XAML without too much pain.
  2. Separate the XAML from the business logic that allows the form to do what it needs to do, providing a simple way to move that business logic forward to UWP without having to completely rewrite everything.
I really wanted the MVVM design pattern, a pattern particularly well suited to the XAML style of UX.  This would allow me to easily carry forward any code I put in the model and view model to any UWP application we wanted in the future.  Here is how we did it, as a mock-up.  You could easily start with an existing application, but I find the first time I try something different it helps to get rid of the clutter by using a mock-up.

Tutorial

  1. Open Visual Studio 2017 (I’m using version 15.8.3).
  2. Click on New Project.
  3. Select Visual C#/Windows Desktop/Windows Forms App.  (Up to this point you could and quite likely will use your own existing WinForms application)
  4. Inside your Solution Explorer add the following folders:
    1. Views
    2. ViewModels
    3. Models
  5. Inside Views folder, add a New Item
  6. In the tree, select Visual C# Items/WPF/User Control (WPF)
    1. Essentially we are going to create a user control in WPF that does everything we want it to and embed that in a WinForm using the included ElementHost control.
    2. So, add a simple TextBlock with the good ol’ “Hello World” to your control.
  7. Build the Application.  This will cause the control to appear in the “Toolbox” so you can add it to your WinForm.
  8. Open the WinForm
  9. Open the “Toolbox” and find your WPF control, probably called “UserControl1” unless you renamed it, and drag and drop it onto your form.
  10. Dock the control and you now have a WPF UX inside your WinForms App.

Of course, this just adds a really simple WPF control and you can go ahead and modify the XAML any way you like.  Make sure you make it responsive (no fixed height or width if you can avoid it) so that it resizes with your WinForm.  Basically, following best practices for creating a fully responsive WPF UX is the way to go.
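
If you would rather wire the hosting up in code than drag from the Toolbox, a rough sketch of the ElementHost approach is below; the Views.UserControl1 name assumes the folder layout from the tutorial and your namespaces may differ.

    // Sketch: hosting the WPF user control inside the WinForm with ElementHost
    // in code instead of the Toolbox. Needs references to WindowsFormsIntegration,
    // PresentationCore and PresentationFramework.
    using System.Windows.Forms;
    using System.Windows.Forms.Integration;

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();

            var host = new ElementHost
            {
                Dock = DockStyle.Fill,               // lets the WPF content resize with the form
                Child = new Views.UserControl1()     // the WPF user control from the Views folder
            };
            Controls.Add(host);
        }
    }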

For those of you who are WPF developers you are off to the races.  In our next blog I’ll show you how to add the MVVM pattern.

Next –> Updating WinForms with WPF… Adding MVVM.



Friday, 29 December 2017

Improving the Bot for Sport Accessibility–Language Understanding

An Intent Beginning

The first thing we needed to do for the viaSport Accessibility Sport Hub (ASH) was to determine which basic questions we wanted to answer.  In bot terminology, or more specifically in the Microsoft Language Understanding Intelligent Service (LUIS), what “intents” were we trying to decipher and respond to?  The challenge is to be specific but keep it simple.  Also, just identifying the “Intent” isn’t good enough.  Once we had that, we had to think through, on the application development side, how we would respond to a specific intent.  Microsoft Cognitive Services are great at the deciphering part, but ultimately YOU, the developer, have to sort out what to do with the information provided.

The first intent we decided on was “How To Coach”.  Basically the original mandate was to provide assistance to coaches who needed material that would guide them in coaching athletes with disabilities.  That information exists but is scattered and not necessarily readily available.  To make this work in our Bot application we had to do three things:

  1. Use LUIS to define the “How to Coach” intent.  This means adding the intent, providing a number of examples of possible questions that equate to “How to Coach”, training and publishing the service.
  2. Create some sort of database to hold the references that viaSport had reviewed and listed, and make sure those references are tied to the new intent.  We wanted to have more intents later so allowance had to be made for this.
  3. Modify our Bot to accept the “How to Coach” intent and handle it.  This means recognizing the intent, querying the database that contains references for ones that match the intent and displaying it.  This part, good old coding, is the part that was expected to take the longest and was the highest risk area.

I’m hoping to cover each of these problems over the next few blog entries beginning with LUIS.  There is a lot to consider when you begin defining LUIS intents that may not be obvious when starting.

Is An Intent Enough?

Now that we’ve decided on an initial intent it’s time to consider what we will do with it.  In an information retrieval bot you ideally want to provide very specific results.  If your inquiry returns 200 possible answers, what good is it really?  So before we go to https://luis.ai to define our intent, let’s consider what we really want to provide.

We really need to know more than just “How to Coach”.  LUIS provides a way of defining more details within the intent.  So all we need to do is come up with key bits of information that will help us narrow down the search for relevant resources for our coaches.  First, it’s obvious that the sport selected will have a big impact on the coaching information needed.  So, along with the “How to Coach” intent, we will want LUIS to extract from the coach’s natural language query which sport they wish to know about.  Sport will narrow down the information available, but we decided on one more factor… disability.  Within each sport there are different disabilities that require different coaching techniques.  By adding that factor, the coach asking the question can get pretty specific about what they want.

Now we have an intent and two parameters, or what LUIS calls entities, that we can extract from the natural language inquiry.  All we need to do is create our LUIS application.

Working with LUIS

Start off by going to https://luis.ai.  You will need to logon with a Microsoft Account like @outlook.com or @hotmail.com.  If this is your first time with LUIS, there is a “Login/Sign up” link right on the home page where you can sign up (for free) for the service.

There have been many improvements since we started the viaSport project.  It used to be that you had to really spend a lot of time training it to recognize relevant entities but now, with just a few hints, it will generate many of the terms you need and then be able to deduce many more during operation.

The following steps were used to create the LUIS app for the viaSport Accessibility Sport Hub (ASH). 

  • It may seem really simple for the very good reason that it is very simple.  Begin by tapping the “Create new app” button on the “My Apps” page.  Give the app a name and description although the description is optional.  I pretty much only work with English bots at this point so select English for the language.  As of this writing there are 12 languages already available.
  • Tap the “Create new intent” button to add our “How to Coach” intent.  Give the name of the intent when prompted.  This is the name we’ll see in our Bot app when we are seeing which intent LUIS interpreted from the original input.  You might want to make a note of it.
  • Now all you need to do is give it some examples (what LUIS calls utterances) of the query that should be interpreted as “How to Coach”. Let’s do the first one with a sport and disability.  We’ll then have an example where we can identify our chosen entities (Sport and Disability).
    • How do I coach Hockey to a person who is an amputee.
    • Tap on “hockey” so you can mark it as an entity.  In the popup type “sport” then click “Create new entity”.  We want this one to be a “list” entity as we are going to limit this to specific sports.  Select List from the Entity Type drop down.  After selecting list you can enter synonyms for hockey.  I wish that this had been available when we built ASH.  We had to build our own synonym system instead.
    • Do the same thing with “amputee” creating a List entity called “disability”.
  • Now add 4 more utterances that contain a sport and disability.  You can use words other than coach, like teach or any variant you wish.
    • If you indicate a sport or disability different from our original ones, you’ll need to tap on the sport or disability and create a new synonym.
  • Lastly we want to add some more disabilities and sports to our lists of entities we’ll recognize.  In one of the utterances, tap the word “sport” then “Manage Entity”.
    • Notice right away that at the top are listed a whole bunch of other sports.  Now, ideally we want to add sports that we will be providing information on but you can go ahead and add a bunch of sports for now.  In our case we needed to add para-specific sports like para-skiing, wheelchair basketball, etc…
    • We’ll want to do the same thing for our disability entity.

One of the keys to a great app is preparation and planning around what you will recognize.  If you have a sport called Hockey but there are lots of other ways to describe hockey that will, for our purposes, be the same, then add lots of synonyms; in our case, “sledge hockey” and “para-hockey”.

Once you have done all of the above, all you need to do is train and publish your LUIS app and you’ll be ready to call it.  Just click the “Train” button then you can go ahead and test it.  Once you are happy, tap the “Publish” tab and publish the app.

You will need the Endpoint listed at the bottom of the publish page.  Make a separate note of the Key String (if it’s not there, you’ll need to create it) and the App Id (it looks like a GUID and can be found in the Settings tab).
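
To give a sense of how those values get used, here is a rough sketch of calling the published endpoint over plain HTTP; the URL shape below matches the v2.0 endpoints of that era, but copy the real endpoint from your own Publish page, and the region, App Id and key are placeholders.

    // Sketch: querying a published LUIS app over REST. Copy the exact endpoint
    // from your Publish page; the region, app id and key below are placeholders.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LuisQuery
    {
        static void Main() => RunAsync().GetAwaiter().GetResult();

        static async Task RunAsync()
        {
            var appId = "YOUR-APP-ID";        // the GUID from the Settings tab
            var key = "YOUR-KEY-STRING";      // the key from the Publish page
            var query = Uri.EscapeDataString("How do I coach hockey to a person who is an amputee");

            using (var client = new HttpClient())
            {
                var url = $"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
                          $"?subscription-key={key}&q={query}";
                var json = await client.GetStringAsync(url);
                Console.WriteLine(json);      // JSON containing the top intent and any entities
            }
        }
    }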

What’s Next?

We are now ready to move to our bot app in Visual Studio to make it all talk to LUIS and start understanding natural language.
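
As a small preview, and only as a hedged sketch based on the Bot Builder v3 SDK we were using at the time, the dialog on the bot side might end up looking something like this; the intent and entity names must match what you defined in LUIS, the model id and key are placeholders, and the real lookup against the database comes later.

    // Sketch only: a Bot Builder v3 LuisDialog reacting to the "How to Coach"
    // intent and pulling out the sport and disability entities. The model id and
    // key are placeholders; intent and entity names must match your LUIS app.
    using System;
    using System.Threading.Tasks;
    using Microsoft.Bot.Builder.Dialogs;
    using Microsoft.Bot.Builder.Luis;
    using Microsoft.Bot.Builder.Luis.Models;

    [Serializable]
    [LuisModel("YOUR-LUIS-APP-ID", "YOUR-KEY-STRING")]
    public class AshDialog : LuisDialog<object>
    {
        [LuisIntent("")]
        public async Task None(IDialogContext context, LuisResult result)
        {
            await context.PostAsync("Sorry, I didn't understand that. Try asking how to coach a sport.");
            context.Wait(MessageReceived);
        }

        [LuisIntent("How to Coach")]
        public async Task HowToCoach(IDialogContext context, LuisResult result)
        {
            EntityRecommendation sport, disability;
            result.TryFindEntity("sport", out sport);
            result.TryFindEntity("disability", out disability);

            // This is where the curated resource database gets queried in the real app.
            await context.PostAsync("Looking up coaching resources for " +
                $"{sport?.Entity ?? "any sport"} and {disability?.Entity ?? "any disability"}...");
            context.Wait(MessageReceived);
        }
    }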

Friday, 10 November 2017

Building a Bot for Sport Accessibility

Earlier this year I had the opportunity to help a local non-profit sports organization (NPO) get information on sport disability and accessibility out to athletes, teachers, parents, coaches and sport leaders.  I was fortunate enough to work with some great experts from the NPO as well as Microsoft.  After some initial discussions we came to the conclusion that we would go with a natural language interface otherwise known as a bot.  We decided on using Microsoft’s Bot Framework and at least one Microsoft Cognitive Service, the Language Understanding Intelligent Service (LUIS).  This blog will be about some of the decision making that went into choosing these technologies instead of others and what benefits were gained by our choices.

Chat or Click

When we first sat down with viaSport British Columbia, the NPO, to discuss their needs, they made it very clear they wanted something different.  There have been plenty of attempts in the past to provide information to their constituents, and there were examples of other organizations, such as the Canadian Paralympic Committee, that have provided some of the information.

One of the methods used in the past was to have a series of cascading combo boxes where you selected some property, which would then set the selection list for the next drop-down box.  You would work your way through these items until you came to the end.  One of the problems with this is it assumes a certain level of knowledge of what question you want answered.  You also have no flexibility in decisions made during the process.  To continue you must select something even when “your” option isn’t available.  viaSport (our NPO) wanted something a bit more friendly and more in line with the way its members commonly communicate with each other.  Sending messages and typing on their phone or computer using natural language seemed to be a natural (pun unintended) fit.  We decided on chat instead of click and implemented a chat bot using the Microsoft Bot Framework and natural language with LUIS and Microsoft Cognitive Services.  Later on, we added the use of some other cognitive services.

Getting Started

The initial goal was to simply allow their constituents to visit their web site and ask, in plain language, for information specified by sport, disability and the person asking for the information.  For example, a coach might ask “How do I coach swimming to a paraplegic” and the goal would be to provide reference material helping that coach to provide guidance to the athlete specified.  To accomplish that goal we needed only two elements and a hosting service: the Microsoft Bot Framework to nicely handle the conversation part of things (the back and forth), with a C# code-behind doing the lookup of the information, and Microsoft’s LUIS Cognitive Service to provide the natural language understanding of the questions being asked.

Ultimately the thing that would make the project a success, as with most projects, is the quality of the data or information we could provide to the clients.  The whole point of the exercise was to provide curated information to the people who needed it, without the massive amounts of information you would get from a Google or Bing search.  So, combined with our bot and natural language processing, we used a Microsoft Azure SQL database to store the curated list of information and a Microsoft Azure App Service to host the bot.  Setting up all the Cognitive Services is also done through Azure, so signing up initially for the free trial let us move forward with a prototype almost immediately.
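
Just to illustrate the shape of that lookup, here is a hypothetical sketch of querying the curated list from the Azure SQL database; the table and column names are invented, since the real schema and the admin tool are covered later.

    // Hypothetical sketch of looking up curated resources in the Azure SQL
    // database. Table and column names are invented for illustration.
    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class ResourceStore
    {
        public static List<string> FindResources(string connectionString, string sport, string disability)
        {
            var links = new List<string>();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT Title, Url FROM Resources WHERE Sport = @sport AND Disability = @disability", conn))
            {
                cmd.Parameters.AddWithValue("@sport", sport);
                cmd.Parameters.AddWithValue("@disability", disability);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        links.Add($"{reader["Title"]}: {reader["Url"]}");
                    }
                }
            }
            return links;
        }
    }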

Starting Coding is Fun!

I chose C# as the language of choice for a couple of reasons.  I could have done the bot project in NodeJS but chose not to.  First, I’m very familiar with C#; I’ve written hundreds of applications using it and it seemed the logical place to start.  Also, I realized very early that I had to create not only the bot but also an administrative tool for viaSport to manage their curated links and content.  That tool would be best written in C# as a UWP app.

So, to begin we used a template provided by Visual Studio 2015.  It created a very simple project that didn’t include anything to do with LUIS and required that all the connectivity to LUIS be done by me.  The good news is that, since we created our bot, there is now an easier way.  Inside Azure, you can simply create a new “AI + Cognitive Services” service and one of the choices is “Bot Service (Preview)”.  This creates the whole framework for you to begin with, including all the hooks you need into LUIS.  It’s like magic!

Once you create the Bot Service you are given a choice of 5 (currently) different templates for the bot using two different languages, C# or NodeJS.  The templates cover everything from a simple “echo bot” that just echoes back the text input, to a Q&A type bot, to a bot using Azure Functions, to the one we want: a bot template that will automatically bake in Language Understanding (LUIS).  Having that template would have been very helpful in the early days of creating this bot.  Once you have chosen your template, the template wizard will walk you through all the options you need to create that bot, including assigning the App Id (needed for publishing) and password (take note of all the ids and passwords, you won’t be able to get them later).  Once you have created the app, it will provision the LUIS service for you and you will be ready to start defining intents (basically an intent is what the natural language will be interpreted as, for example “How to Coach”).

All the starter code, with all the keys already in place, will be created for you.  You can easily hit the ground running.  The Bot Service even provides an online editor that looks a lot like Visual Studio, or you can download the source and use it with Visual Studio 2017 (even the free Community edition) or, lastly, have it uploaded to a source control provider like Visual Studio Team Services or GitHub.  You could literally deploy the bot code as is and start using it (once you put some intents in).  The online editor is surprisingly useful and lets you get going right away.  You can even run and test your bot or debug it using the Bot Framework Emulator (they give you a link to download and install the emulator).

The End of the Beginning

At this point we had our prototype.  The next steps, which I’ll outline in a later blog, involved building the back end database containing the content and an administrative tool for managing that content.  My goal was to hand off the project to viaSport without them needing to call me for every little thing.  I didn’t want to be a dependency for them moving forward.  I made myself available to help out, but most of what they wanted to do they could easily do on their own, including managing the content, tracking usage and other telemetry, and making improvements to the bot’s understanding, all without having any developers on staff or needing to contract one.

As I work through the building of this application over the next few blog entries, we’ll look into Windows UWP App development, other Cognitive Services, inclusivity considerations and telemetry. 

Tuesday, 17 October 2017

After Ignite–Bots and Brains Meetup–Part 1

What is a Community Meetup at Ignite

This year, in September, I attended Microsoft Ignite in Orlando, Florida for the first time in many years (since the days of TechEd actually) and for the first time ever as a speaker.  I was deeply honoured to be selected as a speaker and even more so that I was selected for what is a relatively new format, the Community Meetup.

I took my role seriously and spent a good amount of time preparing for this unusual presentation format.  Typically at these sorts of events the presenter gets up at the front of the room with a projector, some PowerPoint slides and perhaps a copy of Visual Studio to show some code.  At a community meetup presentation the participation shifts a bit to the attendees.  To get the most out of the session, attendees have to find the subject interesting, have some opinions and be willing to share those opinions.

As the community meetup host, my role was to present the ideas and concepts to the attendees, backed by my expertise and experience in the field and, hopefully, my comfort in facilitating discussion.  Our session (for it was both my and the attendees’ session) was BRK2388, “Bots and Brains: The Microsoft Bot Framework and Cognitive Services, better together”.  The idea was to discuss the difficulties with making a chat bot application that was responsive to the needs of the people using it.  This included three main topics: frustration, inclusive design and deciding if a chat bot was even the best delivery mechanism for the application.

Our room held only 85 people and I’m happy to say we filled the room.  However, I do regret that perhaps hundreds were turned away at the door, as our room and format dictated a much smaller attendance than were registered for the session.  I will continue to work with Microsoft to improve the format, as I feel it presented a number of valuable benefits to attendees that they might not get in other, more traditional, passive sessions.  I believe that attending a well run community meetup should offer a great chance to network with other people interested in the same topic.  It offers the chance to come up with some possible solutions or considerations for problems that many of us face on a regular basis and, perhaps just as importantly, gives a chance to be actively involved in the Ignite conference as opposed to the typical passive nature of attending presentations.

This is the room I spoke in.  Seats 85 people.  350 showed up.

Getting Started

So… what did we do?  We had a room of 85 enthusiastic attendees that came ready to participate.  We had ten tables of 6 people each with additional chairs around the perimeter.  We began by getting everybody to introduce themselves at their particular table.  My plan was to get each table to work together as a team.

Next we introduced everybody to a shared OneNote notebook that would provide all the attendees a place to take notes that we would make available after the conference.  Each table would hopefully record notes of discussions and ideas right in the shared OneNote notebook (It turns out that they did… enthusiastically, more on that content in Part 2)

Then it came down to me: I introduced the topic we wished to discuss and, more specifically, the questions we would be addressing as a community.  I began by introducing my background in the Microsoft Bot Framework and Cognitive Services, my expertise in application development and some examples of Bot/AI projects I have worked on.  Then we got to the meat of the meeting.  In a chat bot, how do you deal with bot frustration?  How do you make a chat bot that includes everybody?  And lastly, how do you decide if a chat bot is the right solution in the first place?

When we got started with the first topic I was surprised to see everybody at each table actively participating in the table discussion, and even more surprised that those sitting along the walls formed ad-hoc discussion groups without a table and began working on ideas.  I did not notice anybody who was left out of the discussion.  That made me feel pretty good.

What Could have been Better

I wish, in some ways, that the session had been recorded for others to enjoy some of the feedback, but it might not have presented well as a recording, as there were 5-8 minute stretches of discussion where nothing obvious would appear to be happening in a recorded session.  I also wish that we could have accommodated the many people who were not permitted entry to the session due to seating availability.

The difficult balance was between a community feel where everybody could participate and having so many people in the room that it becomes difficult to let everyone’s voice be heard.  We could easily have had a much bigger room available to a lot more people, but at what cost?  This is one of the challenges we will need to address for next year.  The fact that the meetup sessions were so popular was gratifying, concerning and encouraging, all at the same time.

I wonder if the elevated stage was more of a barrier to the presenter, me.  I spent a lot of time walking among the tables, listening to the discussions, offering some suggestions and advice and eliciting feedback from various tables, and having to climb up to a podium separated me from the groups perhaps too much.

What Worked

The table discussion format actually exceeded my expectations.  Before the session I really didn’t know if what I had planned would work, would be fun and would provide valuable information to the attendees.  I thought it would (else why would I do it?) but it was satisfying to see that it worked out even better than expected.

We ended up with a whole bunch of information collected in the OneNote notebook on all three topics.  Some tables came up with some interesting names for their “team” (Revolving Table, Wall, Table Unique, etc…) and people seemed to have fun.  At the end of each 8 minute period of discussion we had two or three tables present their ideas using the hand-held microphones we had available.  There were some great ideas (Part 2).

The size of the room was great for discussion purposes and Microsoft really spent quite a bit of time thinking about how this session format would work to reflect the community meetup feel.  We were, I think, the only room that had its own permanent snack station for attendees.  We had a Microsoft Surface Hub provided, which I used to host the OneNote notebook so we could see progress being made, and we had great technical support in the room at all times.

Part 2 –  The Results

After Ignite–Bots and Brains Meetup–Part 2, the Results

So, we have discussed what exactly a Community Meetup session at Microsoft Ignite is in Part 1.  Please feel free to have a read through to see what we did and how we did it.  This entry discusses the results of that session at Ignite.  During the session, multiple groups of people came up with some great talking points on the three main topics covered concerning the Microsoft Bot Framework and Cognitive Services.  We discussed what I thought were three of the main points in creating bots using the Microsoft Bot Framework, and the topics were, for the most part, focused around how Microsoft Cognitive Services could help.  We discussed user frustration when using bots, what were and were not problems that could be solved with a bot application, and lastly how to make bots more inclusive of all people.

We had 10 tables of 6 people and, at least for the first topic, we had recorded responses from twelve distinct teams, including one of my favorites, “the Wall” team, which consisted of people sitting in chairs along the wall who banded together to make a discussion group.  Some people along the wall chose to join existing tables for discussion.  It was great!

Most of the information below is not necessarily my opinion but that of people attending the Microsoft Ignite Community Meetup BRK2388, Bots and Brains.

Bot Frustration

The premise of this section was threefold: how do I prevent frustration, how do I detect frustration, and how do I resolve frustration?  All three points should be important considerations when creating a chat bot interface to an application.  Before arriving in Orlando I had some preconceived notions of how to detect frustration.  It was interesting how many different options the group came up with that I had not thought of.

With regard to detecting frustration there were plenty of suggestions.  Microsoft Cognitive Services provides a service called the Text Analytics API that scores the sentiment of a piece of text on a scale from very negative to very positive.  Another team suggested using the Language Understanding Intelligent Service, or LUIS, to detect curse words indicating frustration.  Outside of sentiment and curse words, suggestions were made to look for repeated questions and entities (parameters), indicating frustration at not getting a correct answer.  Overall, if we consider the three main ideas above, I believe we can be fairly competent at detecting most forms of frustration.
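
As a hedged sketch of the sentiment idea, assuming the Text Analytics v2.0 endpoint of that era (the region and key below are placeholders), a frustration check might look roughly like this; a low score in the returned JSON is a hint the user is getting frustrated.

    // Sketch: scoring a message's sentiment with the Text Analytics API.
    // The endpoint shape matches the v2.0 API of the time; region and key are
    // placeholders. The returned score runs from 0 (negative) to 1 (positive).
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class SentimentCheck
    {
        static void Main() => RunAsync().GetAwaiter().GetResult();

        static async Task RunAsync()
        {
            var key = "YOUR-TEXT-ANALYTICS-KEY";
            var body = "{\"documents\":[{\"id\":\"1\",\"language\":\"en\"," +
                       "\"text\":\"This bot never answers my question!\"}]}";

            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
                var response = await client.PostAsync(
                    "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment",
                    new StringContent(body, Encoding.UTF8, "application/json"));
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }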

Before frustration ever happens, there are things we can do to prevent it from starting.  One group suggested using training data based on past feedback to ask leading questions that help get more accurate answers.  Some of the suggestions were beautiful in their simplicity.  For example, provide the option to see a tutorial, as we did for a project I worked on called the Accessibility Sport Hub, or ASH.  The tutorial nicely sets expectations and offers suggestions on getting the most out of the bot.  It also outlines what the bot can, and just as importantly cannot, do.  One of the responses applies to this category as well as to resolving frustration after it has occurred: hand the conversation off to a real person.  In the case of prevention, as soon as the bot can detect that it may not be able to resolve the inquiry, have a method of handing off the inquiry to a real person.  The bot can be the method of collecting information, and a real person is used to resolve the problem that might not be addressable by the bot.

Lastly, what do we do when things go badly and we have a frustrated or even angry person using our bot?  The most often repeated solution here is recognizing when human intervention is required and handing the conversation off to a qualified individual for resolution.  Another suggestion, from one of the “Wall” groups, was to provide additional options or help when frustration is encountered.  Perhaps they don’t know what the bot is capable of, and providing them the option of learning what is available might help.  Improving the feedback to the user was also suggested, and this can also fall under the heading of prevention.

Humour is something else that can be introduced.  Giving the bot a bit of a personality can help build a friendly relationship and perhaps reduce stress levels.  One caveat to consider is that humour is not always universal, and one should be cautious not to introduce more problems than you solve.  One of the most popular suggestions was the “panic button” method of providing a way to ask for help while in the middle of the conversation.  If you provide an escape, or some way to get back to the start and/or get help, it might alleviate some of the frustration being created.
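
One way to wire up that kind of escape hatch, sketched here with invented trigger words and reply text, is to check for a help phrase before handing the message off to the language model.

    // Sketch of a "panic button": intercept a help phrase before normal intent
    // handling. Trigger words and reply text are invented examples.
    using System.Threading.Tasks;
    using Microsoft.Bot.Builder.Dialogs;
    using Microsoft.Bot.Connector;

    public static class PanicButton
    {
        public static async Task<bool> TryHandleHelpAsync(IDialogContext context, IMessageActivity message)
        {
            var text = (message.Text ?? string.Empty).Trim().ToLowerInvariant();
            if (text == "help" || text == "start over")
            {
                await context.PostAsync("No problem, let's start again. You can ask me things like " +
                                        "\"How do I coach hockey to a person who is an amputee?\"");
                return true;   // caller skips the language model and waits for the next message
            }
            return false;      // not a help request; carry on with normal handling
        }
    }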

Bot Solutions

What makes a good bot solution?  It’s a question that isn’t asked often enough in development.  “What is the best way to do this” rather than “I know how I can do this”.  So we asked the community meetup what they thought.  It was interesting what considerations they came up with when trying to decide what sorts of applications would be best as chat bots.

One of the more interesting considerations is the target age group of the app.  As strange as it may seem, different generations of people approach how they communicate in different ways.  The other interesting one was globalization, and how bots might be better at dealing with multiple languages than many other solutions.

Once we got past demographics, there were suggestions that a directed bot conversation might be most useful where a specific answer is needed faster than searching Bing or Google can provide it.  The ability to handle difficulties interactively would also be helpful.  A bot would be useful for connecting people to internal data repositories; information on employment, HR and general inquiries would be ideal for a bot.

Under the “not so useful” category, several tables/groups suggested that an “advice” type application might not work so well as a pure chat bot.  For example, providing legal or medical advice might have moral and legal barriers where the repercussions of providing improper or incorrect information are significant.  For example, we probably don’t want bots handling 911 calls at this time, although a hybrid solution where a bot collects basic information and then hands off the session to a human might be reasonable in some cases.

Lots of groups felt that bots are not very helpful for very complex needs.  For bots to work, somebody has to write code for the responses, and whether or not the path people using the bot need to follow has many branches can affect the effectiveness of the bot.

Inclusive Bots

This category turned out to be a bit more difficult than the others.  In many ways people feel it’s difficult to relate to the issues at hand.  However, once you consider that building an inclusive bot is not about building a bot for specific people but, instead, for all people, you can start to see ways to help in that area.  Including everybody does not mean just disabilities but also culture, language, gender and a whole host of other variables that make people unique.

One of the items that came up is voice.  Microsoft nicely announced at Build 2017 the ability for a bot to be made available as a Cortana skill, basically enabling speech-to-text and text-to-speech handling so your bot doesn’t require keyboard or mouse input.  That’s a huge leap forward in usability and inclusivity.  Other items discussed were using Cognitive Services like the Computer Vision API to discern the age of a user.

Also, using images in providing information might help work around language barriers (although the Bot Framework does support many languages, it might not cover all of them).  Addressing gender differences was also mentioned, and here vision can help too by detecting gender using the Computer Vision API.

To Wrap it Up

Privacy was not mentioned, and I think we always need to be aware of people’s privacy.  When we use various Cognitive Services we should always make sure we have permission where necessary and provide people with the choice.

Overall, it was quite remarkable how many different opinions and ideas came out of our room of just 85 people.  Everybody stayed right to the end to make sure they got their voice heard.  The OneNote notebook is available to those that attended my session, and you can view it too if you wish.  Also, join the conversation at the Microsoft Tech Community!

I hope that everybody got something out of the session.  Maybe they met a like-minded person, found some information from me or others in the room that might help in decision making about bots and AI, or just generally enjoyed the break from the traditional session format.  I thank all the attendees and hope that I have the opportunity to do it again next year.

Wednesday, 19 October 2016

Talking about Bots – Setting it up

Have you thought about using the Microsoft Bot Framework?  I have!  If you’ve ever wondered, I’m going to try and give you a hand up by pointing you in the right direction.  Maybe give you a few tips along the way and help clarify a few things along the path.

Getting Started

Your first step is to install the Bot Framework template into your Visual Studio (I’m assuming you have Visual Studio installed already).  Actually, I guess the first step is to decide what language you are going to build in.  Because I am a Windows Platform sort of guy, I’m going to use C#; if you are too, it will make this a lot easier to follow along.  You can find the “Getting Started” page with the links at https://docs.botframework.com/en-us/csharp/builder/sdkreference/gettingstarted.html but you can directly download the template from here.  When you look at the getting started document, it does give you most of what you need to get going, but ultimately you are going to have to connect a few dots and turn on a few lightbulbs to get where you want to go.  Once you have downloaded the template and created your first project, come back here for the next steps (not in the above doc).

Creating Your Bot App

Your next step requires you to create a bot in the Bot Framework.  You are going to have to modify a couple of things in your just-created Bot Framework app in Visual Studio based on what you do here.  First, go to http://dev.botframework.com.  You will need to have a Microsoft Id to successfully complete this next task.  Click on “Register a bot” to get started.  Don’t worry if you don’t know everything, you can fake it a little bit until you do some other stuff later.  You need to know your bot name and bot handle; you can make up stuff here.  You will need to create a Microsoft App ID and password, but they nicely give you a button you can press to do that.  Make a note of the App ID and password because you’ll need those in the Web.config file of your bot application.  For the publisher profile, you can fill in most of the URLs with placeholders, but you will need to provide those before going live in any way.  In your Web.config, put in the id and password (the sketch below shows where they go).  There are security issues with putting ids and passwords in your Web.config, but for just playing around locally on your system it’s not a problem, and security is not part of this post.
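
Since the screenshot does not reproduce here, the relevant entries look roughly like this; the key names come from the standard Bot Framework template and the values are placeholders for the ones you generated above.

    <!-- Sketch of the appSettings the Bot Framework template reads from Web.config.
         Values are placeholders for the App ID and password generated during registration. -->
    <configuration>
      <appSettings>
        <add key="MicrosoftAppId" value="YOUR-APP-ID" />
        <add key="MicrosoftAppPassword" value="YOUR-APP-PASSWORD" />
      </appSettings>
    </configuration>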

Testing Your Bot Locally

You need to install the Microsoft Bot Framework Channel Emulator.  It won’t look as pretty, but it will let you see what is going on.  Once you have installed this little emulator, go ahead and start the app in Visual Studio.  This should launch a browser (you can choose which one); make note of the port number in the URL that is loaded.  Now open the emulator and change the “Bot Url” to match that port number.  Don’t change anything else.  Lastly, paste in the App Id and App Password and you should be good to go.
In Visual Studio, open the “MessagesController.cs” file and put a breakpoint on line 22 (inside the “Post” method).  Now go back to the emulator, type “Hello” into the message space and hit enter.  You can now step through how the bot responds to the question (it just parrots back what you said with a character count).  Now you can start seriously thinking about what smarts you want to add to your bot using whatever AI you can think of.
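
For reference, the Post method in that file looks roughly like the snippet below in the template of that era; treat it as an approximation rather than an exact copy of your generated code.

    // Approximation of the template's MessagesController.Post: it parrots the
    // incoming text back with a character count. Your generated file may differ slightly.
    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;
    using Microsoft.Bot.Connector;

    public class MessagesController : ApiController
    {
        public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
        {
            if (activity.Type == ActivityTypes.Message)
            {
                var connector = new ConnectorClient(new Uri(activity.ServiceUrl));
                int length = (activity.Text ?? string.Empty).Length;

                // Echo the message back with the character count.
                Activity reply = activity.CreateReply($"You sent {activity.Text} which was {length} characters");
                await connector.Conversations.ReplyToActivityAsync(reply);
            }
            return Request.CreateResponse(HttpStatusCode.OK);
        }
    }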

Ready for More?

Now that you have a (somewhat) stupid bot, what do you do?  Well, you’ll need to deploy it to an Azure App Service or some other web site if you want the public to get access to it.  You’ll also need to add some brains to it.  Like what happens when somebody says hello.  You know, add a little personality to the game.  More on that in future blogs.