Module 6: QnA Maker and Module 7: Conversational AI and the Azure Bot Service
Introduction to QnA Maker
A common pattern for “intelligent” applications is to enable users to ask questions using natural language, and receive appropriate answers. In effect, this kind of solution brings conversational intelligence to a traditional frequently asked questions (FAQ) publication.
The QnA Maker service enables you to define a knowledge base of question and answer pairs that can be queried using natural language input. The knowledge base can be published to a REST endpoint and consumed by client applications, commonly bots.
There are two versions of the QnA Maker service. To use the managed QnA Maker service, create a Text Analytics resource in your Azure subscription. To use the non-managed QnA Maker service, create a QnA Maker resource. You can use either resource in the QnA Maker Portal to edit and publish a knowledge base.
QnA Maker vs Language Understanding
A QnA Maker knowledge base is a form of language model, which raises the question of when to use the QnA Maker service, and when to use the Language Understanding service.
The two services are similar in that they both enable you to define a language model that can be queried using natural language expressions, however there are some differences in the use cases that they are designed to address, as shown in the following table:
  | QnA Maker | Language Understanding |
Usage Pattern | User submits a question, expecting an answer | User submits an utterance, expecting an appropriate response or action |
Query processing | Service uses natural language understanding to match the question to an answer in the knowledge base | Service uses natural language understanding to interpret the utterance, match it to an intent, and identify entities |
Response | Response is a static answer to a known question | Response indicates the most likely intent and referenced entities |
Client logic | Client application typically presents the answer to the user | Client application is responsible for performing appropriate action based on the detected intent |
The two services are in fact complementary. You can build comprehensive natural language solutions that combine both Language Understanding models and QnA Maker knowledge bases, and use the Dispatch tool to provide a routing layer that determines which service should be used to process a given user input.
Creating a Knowledge Base
To create a QnA Maker solution, you can use the REST API or SDK to write code that defines, trains, and publishes the knowledge base. However, it is more common to use the QnA Maker portal to define and manage a knowledge base.
To create a knowledge base:
Create an Azure resource in your Azure subscription.
To use the managed QnA Maker service: create a Text Analytics resource.
To use the non-managed QnA Maker service: create a QnA Maker resource.
In the QnA Maker Portal, connect the resource to a new knowledge base.
Name the knowledge base.
Optionally, populate the knowledge base with existing question and answer pairs:
You can import questions and answers from existing web pages or documents.
You can add pre-defined “chit-chat” pairs that include common conversational questions and responses in a specified style.
Create the knowledge base and edit question and answer pairs in the portal.
Multi-Turn Conversation
Although you can often create an effective knowledge base that consists of individual question and answer pairs, sometimes you might need to ask follow-up questions to elicit more information from a user before presenting a definitive answer. This kind of interaction is referred to as a multi-turn conversation.
You can enable multi-turn responses when importing questions and answers from an existing web page or document based on its structure, or you can explicitly define follow-up prompts and responses for existing question and answer pairs in the QnA Maker portal.
For example, suppose an initial question for a travel booking knowledge base is "How can I cancel a reservation?". A reservation might refer to a hotel or a flight, so a follow-up prompt is required to clarify this detail. The answer might consist of text such as “Cancellation policies depend on the type of reservation.” and include follow up prompts with links to answers about canceling flights and canceling hotels.
When you define a follow-up prompt for multi-turn conversation, you can link to an existing answer in the knowledge base or define a new answer specifically for the follow-up. You can also restrict the linked answer so that it is only ever displayed in the context of the multi-turn conversation initiated by the original question.
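To make the structure concrete, here is a sketch of how a question and answer pair with follow-up prompts might be represented in client code. The field names are illustrative only and are not the exact QnA Maker API schema.

```python
# A sketch of a QnA pair with follow-up prompts for a multi-turn conversation.
# Field names are illustrative, not the exact QnA Maker API schema.
cancel_reservation = {
    "question": "How can I cancel a reservation?",
    "answer": "Cancellation policies depend on the type of reservation.",
    "prompts": [
        # context_only marks answers shown only within this multi-turn flow.
        {"display_text": "Cancel a flight", "links_to_qna_id": 101, "context_only": True},
        {"display_text": "Cancel a hotel", "links_to_qna_id": 102, "context_only": True},
    ],
}

# A client could render the prompts as buttons beneath the answer:
for prompt in cancel_reservation["prompts"]:
    print(prompt["display_text"])
```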
Testing and Publishing a Knowledge Base
After you have defined a knowledge base, you can train its natural language model and test it before publishing it for use in an application or bot.
Testing a knowledge base
You can test your knowledge base interactively in the QnA Maker portal, submitting questions and reviewing the answers that are returned. You can inspect the results to view their confidence scores as well as other potential answers.
You can also download the batch testing tool, submit a set of questions with known answers, and compare the returned results with the expected ones.
Publishing a knowledge base
When you are happy with the performance of your knowledge base, you can publish it to a REST endpoint with a generateAnswer function that client applications can use to submit questions and receive answers.
Client Interfaces
To consume the published knowledge base, you can use the REST interface or one of the programming language-specific SDKs, which provide classes with methods to call the generateAnswer REST function.
The request body for the function contains a question, like this:
{
  "question": "I want to book a hotel."
}

The response includes the closest question match that was found in the knowledge base, along with the associated answer, the confidence score, and other metadata about the question and answer pair.
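For illustration, a client might build the request body and pick the best answer out of the response like this. This is a minimal sketch: the function names are invented for the example, the sample response values are placeholders rather than real service output, and the actual HTTP call (with endpoint URL and authorization key) is omitted.

```python
import json

def build_request(question: str) -> str:
    """Build the JSON body for a generateAnswer call."""
    return json.dumps({"question": question})

def best_answer(response: dict) -> str:
    """Pick the answer with the highest confidence score from a response."""
    answers = response["answers"]
    top = max(answers, key=lambda a: a["score"])
    return top["answer"]

# Illustrative response shape (placeholder values, not real service output).
sample_response = {
    "answers": [
        {"questions": ["How do I book a hotel?"],
         "answer": "Call 555-123-4567 to book.",
         "score": 76.55, "id": 2}
    ]
}

print(build_request("I want to book a hotel."))
print(best_answer(sample_response))
```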
Active Learning
Enabling Active Learning for your knowledge base can help you make continuous improvements so that it gets better at answering user questions correctly over time. To enable active learning, view the Service Settings page in the QnA Maker portal and select the toggle.
Active learning helps improve the knowledge base in two ways:
Implicit feedback: As incoming requests are processed, QnA Maker identifies user-provided questions that have multiple, similarly scored matches in the knowledge base. These are automatically clustered as alternate phrase suggestions for the possible answers that you can view in the portal and choose to accept or reject.
Explicit feedback: When developing a client application, you can control the number of possible question matches returned for the user's input by specifying the top parameter, as shown here:
{
  "question": "I want to book a hotel.",
  "isTest": false,
  "top": 3
}

The response from the service includes a question object for each possible match, up to the top value specified in the request:
{
  "answers": [
    {
      "questions": ["How do I book a hotel?"],
      "answer": "Call 555-123-4567 to book.",
      "score": 76.55,
      "id": 2,
      ...
    },
    {
      "questions": ["Can I book multiple hotel rooms?"],
      "answer": "Yes, you can reserve up to 3 rooms.",
      "score": 76.15,
      "id": 6,
      ...
    },
    {
      "questions": ["Is there a booking fee?"],
      "answer": "No, we do not charge a booking fee.",
      "score": 75.99,
      "id": 11,
      ...
    }
  ],
  "activeLearningEnabled": true
}

You can implement logic in your client app to compare the score property values for the questions, and potentially present the questions to the user so they can positively identify the question closest to what they intended to ask.
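That score comparison might look something like the following sketch in client code. The margin value and the response shape used here are assumptions for the example, not requirements of the service.

```python
def close_matches(answers, margin=1.0):
    """Return answers whose scores are within `margin` of the top score,
    ordered best-first. If more than one survives, the client might ask
    the user which question they actually meant."""
    ranked = sorted(answers, key=lambda a: a["score"], reverse=True)
    top_score = ranked[0]["score"]
    return [a for a in ranked if top_score - a["score"] <= margin]

# Illustrative answers, mirroring the response shape shown above.
answers = [
    {"questions": ["How do I book a hotel?"], "score": 76.55, "id": 2},
    {"questions": ["Can I book multiple hotel rooms?"], "score": 76.15, "id": 6},
    {"questions": ["Is there a booking fee?"], "score": 75.99, "id": 11},
]

for a in close_matches(answers):
    print(a["questions"][0])
```

With the default margin of 1.0, all three candidates are close enough to present to the user; a tighter margin would keep only the top match.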
With the correct question identified, your app can use the Train API to send feedback containing suggested alternative phrasing based on the user's original input:
{
  "feedbackRecords": [
    {
      "userId": "1234",
      "userQuestion": "I want to book a hotel.",
      "qnaId": 2
    }
  ]
}

The qnaId in the feedback corresponds to the id of the question the user identified as the correct match. The userId parameter is an identifier for the user and can be any value you choose, such as an email address or numeric identifier.
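A client might assemble that feedback body like this. The helper function name is invented for the sketch; only the payload shape follows the Train API request shown above.

```python
import json

def build_feedback(user_id: str, user_question: str, qna_id: int) -> str:
    """Build the Train API request body for a single feedback record."""
    return json.dumps({
        "feedbackRecords": [
            {"userId": user_id, "userQuestion": user_question, "qnaId": qna_id}
        ]
    })

body = build_feedback("1234", "I want to book a hotel.", 2)
print(body)
```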
The feedback will be presented in the active learning suggestions in the QnA Maker portal for you to accept or reject.
Creating a QnA Bot
While you can use a QnA Maker knowledge base in any sort of application, knowledge bases are commonly used to support bots.
A bot is a conversational application that enables users to interact using natural language through one or more channels, such as email, web chat, voice messaging, or social media platforms such as Microsoft Teams.
QnA Maker is often the starting point for bot development - particularly for conversational dialogs that involve answering user questions. For this reason, the QnA Maker portal provides the ability to easily create a bot that runs in the Azure Bot Service based on your knowledge base.
To create a bot from your knowledge base, use the QnA Maker portal to publish the bot and then use the Create Bot button to create a bot in your Azure subscription. You can then edit and customize your bot in the Azure portal.
Conversational AI and Bots
A bot is an application with a conversational interface.
While there are many ways you can implement a bot, some common features of bots include:
Users interact with a bot by initiating activities in turns.
Activities are events, such as a user joining a conversation or sending a message.
Messages can be text, speech, or visual interface elements (such as cards or buttons).
A flow of activities can form a dialog, in which state is maintained to manage a multi-turn conversation.
Activities are exchanged across channels, such as web chat, email, Microsoft Teams, and others.
Azure Bot Service and the Microsoft Bot Framework SDK
Bot solutions on Microsoft Azure are supported by the following technologies:
Azure Bot Service. A cloud service that enables bot delivery through one or more channels, and integration with other services.
Bot Framework Service. A component of Azure Bot Service that provides a REST API for handling bot activities.
Bot Framework SDK. A set of tools and libraries for end-to-end bot development that abstracts the REST interface, enabling bot development in a range of programming languages.
Developing a Bot with the Bot Framework SDK
The Bot Framework SDK provides an extensive set of tools and libraries that software engineers can use to develop bots. The SDK is available for multiple programming languages, including Microsoft C# (.NET Core), Python, and JavaScript (Node.js).
Bot templates
The easiest way to get started with the Bot Framework SDK is to base your new bot on one of the templates it provides:
Empty Bot - a basic bot skeleton.
Echo Bot - a simple “hello world” sample in which the bot responds to messages by echoing the message text back to the user.
Core Bot - a more comprehensive bot that includes common bot functionality, such as integration with the Language Understanding service.
Bot application classes and logic
The template bots are based on the Bot class defined in the Bot Framework SDK, which is used to implement the logic in your bot that receives and interprets user input, and responds appropriately. Additionally, bots make use of an Adapter class that handles communication with the user's channel.
Conversations in a bot are composed of activities, which represent events such as a user joining a conversation or a message being received. These activities occur within the context of a turn, a two-way exchange between the user and bot. The Bot Framework Service notifies your bot's adapter when an activity occurs in a channel by calling its Process Activity method, and the adapter creates a context for the turn and calls the bot's Turn Handler method to invoke the appropriate logic for the activity.
The logic for processing the activity can be implemented in multiple ways. The Bot Framework SDK provides classes that can help you build bots that manage conversations using:
Activity handlers: Event methods that you can override to handle different kinds of activities.
Dialogs: More complex patterns for handling stateful, multi-turn conversations.
Testing with the Bot Framework Emulator
Bots developed with the Bot Framework SDK are designed to run as cloud services in Azure, but while developing your bot, you'll need a way to test it before you deploy it into production.
The Bot Framework Emulator is an application that enables you to run your bot as a local or remote web application and connect to it from an interactive web chat interface that you can use to test your bot. Details of activity events are captured and shown in the testing interface, so you can monitor your bot's behavior as you submit messages and review the responses.
Activity Handlers
For simple bots with short, stateless interactions, you can use Activity Handlers to implement an event-driven conversation model in which the events are triggered by activities such as users joining the conversation or a message being received. When an activity occurs in a channel, the Bot Framework Service calls the bot adapter's Process Activity function, passing the activity details. The adapter creates a turn context for the activity and passes it to the bot's turn handler, which calls the individual, event-specific activity handler.
The ActivityHandler base class includes event methods for many common kinds of activity, including:
Message received
Members joined the conversation
Members left the conversation
Message reaction received
Bot installed
Others…
You can override any activity handlers for which you want to implement custom logic.
Turn context
An activity occurs within the context of a turn, which represents a single two-way exchange between the user and the bot. Activity handler methods include a parameter for the turn context, which you can use to access relevant information. For example, the activity handler for a message received activity includes the text of the message.
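To make the dispatch pattern concrete, here is a deliberately simplified, library-free Python sketch of how a turn handler routes an activity to an event-specific handler. This is not the real Bot Framework SDK: the actual ActivityHandler class uses its own async method names (such as on_message_activity) and a richer TurnContext object, but it works analogously.

```python
class SimpleActivityHandler:
    """A toy illustration of the activity-handler pattern (not the real SDK)."""

    def on_turn(self, turn_context):
        """Turn handler: route the activity to the matching event method."""
        activity = turn_context["activity"]
        if activity["type"] == "message":
            return self.on_message_activity(turn_context)
        if activity["type"] == "membersAdded":
            return self.on_members_added_activity(turn_context)
        return None  # activity types we don't override are ignored

    def on_message_activity(self, turn_context):
        # The turn context carries the activity details, e.g. the message text.
        return f"You said: {turn_context['activity']['text']}"

    def on_members_added_activity(self, turn_context):
        return "Hello and welcome!"

bot = SimpleActivityHandler()
reply = bot.on_turn({"activity": {"type": "message", "text": "Hi"}})
print(reply)
```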
Dialogs
For more complex conversational flows where you need to store state between turns to enable a multi-turn conversation, you can implement dialogs. The Bot Framework SDK dialogs library provides multiple dialog classes that you can combine to implement the required conversational flow for your bot.
There are two common patterns for using dialogs to compose a bot conversation:
Component dialogs
A component dialog is a dialog that can contain other dialogs, defined in its dialog set. Often, the initial dialog in the component dialog is a waterfall dialog, which defines a sequential series of steps to guide the conversation. It's common for each step to be a prompt dialog, so that the conversational flow consists of gathering input data from the user sequentially. Each step must be completed before passing its output on to the next step.
For example, a pizza ordering bot might be defined as a waterfall dialog in which the user is prompted to select a pizza size, then toppings, and finally prompted for payment.
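As a rough illustration of the waterfall idea (not the SDK's WaterfallDialog class, which manages prompts and state for you), each step gathers one piece of input, records it in shared state, and hands the conversation to the next step:

```python
# Each step stores the user's answer and returns the next prompt.
def ask_size(state, user_input):
    state["size"] = user_input
    return state, "Which toppings would you like?"

def ask_toppings(state, user_input):
    state["toppings"] = user_input
    return state, "How would you like to pay?"

def ask_payment(state, user_input):
    state["payment"] = user_input
    return state, f"Order placed: {state['size']} pizza with {state['toppings']}."

# The "dialog" is just an ordered list of steps; state persists between turns.
steps = [ask_size, ask_toppings, ask_payment]

state = {}
prompt = "What size pizza would you like?"
# Simulated user replies, one per turn.
for step, answer in zip(steps, ["large", "mushrooms", "credit card"]):
    state, prompt = step(state, answer)
print(prompt)
```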
Adaptive dialogs
An adaptive dialog is another kind of container dialog in which the flow is more flexible, allowing for interruptions, cancellations, and context switches at any point in the conversation. In this style of conversation, the bot initiates a root dialog, which contains a flow of actions (which can include branches and loops), as well as triggers that can be initiated by actions or by a recognizer. The recognizer analyzes natural language input (usually using the Language Understanding service) and detects intents, which can be mapped to triggers that change the flow of the conversation - often by starting new child dialogs, which contain their own actions, triggers, and recognizers.
For example, the pizza ordering bot might start with a root dialog that simply welcomes the user. When the user enters a message indicating that they want to order a pizza, the recognizer detects this intent and uses a trigger to start another dialog containing the flow of actions required to gather information about the pizza order. At any point during the pizza order dialog, the user might enter a message indicating that they want to do something else (for example, cancel the order), and the recognizer for the pizza ordering dialog (or its parent dialog) can be used to trigger an appropriate change in the conversational flow.
Deploying a Bot to the Azure Bot Service
After you've completed development of your bot, you can deploy it to Azure. The specific details of how the bot is hosted vary depending on the programming language and underlying runtime you have used, but the basic steps for deployment are the same.
Create the Azure resources required to support your bot
You will need to create an Azure application registration to give your bot an identity it can use to access resources, as well as a bot application service to host the bot.
Register an Azure app
You can create the application registration by using the az ad app create Azure command line interface (CLI) command, specifying a display name and password for your app identity. This command registers the app and returns its registration information, including a unique application ID that you will need in the following step.
Create a bot application service
Your bot requires a Bot Channels Registration resource, along with an associated application service and application service plan. To create these, you can use the Azure resource deployment templates provided with the Bot Framework SDK template you used to create your bot. Just run the az deployment group create command, referencing the deployment template and specifying your bot application registration's ID (from the az ad app create command output) and the password you specified.
Prepare your bot for deployment
The specific steps you need to perform to prepare your bot depend on the programming language used to create it. For C# and JavaScript bots, you can use the az bot prepare-deploy command to ensure your bot is properly configured with the appropriate package dependencies and build files. For Python bots, you must include a requirements.txt file listing any package dependencies that must be installed in the deployment environment.
Deploy your bot as a web app
The final step is to package your bot application files in a zip archive, and use the az webapp deployment source config-zip command to deploy the bot code to the Azure resources you created previously.
After deployment has completed, you can test and configure your bot in the Azure portal.
Bot Design Principles
Now that you understand the basics of creating a bot, it's time to consider some principles for designing a successful bot solution.
Factors influencing a bot's success
Ultimately, factors that lead to a successful bot all revolve around creating a great user experience.
Is the bot discoverable? If users are not able to discover the bot, they will be unable to use it. Discoverability can be achieved through integration with the proper channels. As an example, an organization may make use of Microsoft Teams for collaboration. Integrating with the Teams channel will make your bot available in the Teams app.
In some cases, making your bot discoverable is as simple as integrating it directly into a web site. For example, your company's support web site could make a question and answer bot the primary mechanism through which customers interact on the initial support page.
Is the bot intuitive and easy to use? The more difficult or frustrating a bot interaction is, the less use it will receive. Users will not return to a bad user experience.
Is the bot available on the devices and platforms that users care about? Knowing your customer-base is a good start to address this consideration. If you only make your bot available on Microsoft Teams but most of your target audience is using Slack, the bot will not be successful. It would require users to install a new and unfamiliar software application.
Can users solve their problems with minimal use and bot interaction? Although it may seem counter-intuitive, success doesn't equate to how long a user interacts with the bot. Users want answers to their issues or problems as quickly as possible. If the bot can solve the user's issue in the minimal number of steps, the user experience will be a pleasant one, and users are more likely to come back to the bot again, or even help to promote the use of the bot on your behalf.
Does the bot solve the user's issues better than alternative experiences? If a user can reach an answer with minimal effort through other means, they are less likely to use the bot. For example, most company switchboards use an automated system of messages and options to choose from when you call. Many users press 0 or some other key on the keypad in an attempt to bypass the options; the rationale is to go directly to an operator or support technician.
Factors that do not guarantee success
When designing a bot, you might want to create the smartest bot on the market. Perhaps you want to ensure you have support for speech so that users don't have to type text. Features such as these may impress fellow developers, but they are less likely to impress users, and they could lead to user experience issues as well.
Consider the concept of simplicity. The more complex your bot is, in terms of AI or machine learning features, the more open it may be to issues and problems. Add advanced machine learning features to the bot only if they are necessary to solve the problems the bot is designed to address.
Adding natural language features may not always make the bot experience great. Again, the conversation returns to whether the bot is addressing the problems the user needs solved. A simple bot that solves the user's problem without any conversational aspects is still a successful bot.
You might also believe that using speech for bot interactions would make the bot more successful, but there are many areas where speech can be problematic. Supporting every language and dialect is not possible at this time, and speaker pronunciation and speed can greatly impact accuracy. A user interacting with the bot in a language that is not their native language can create recognition issues. Noisy environments are another problem area: background noise impacts the accuracy of speech recognition and can make it difficult for the user to hear the bot's responses. Use voice only where it truly makes sense for bot user interaction.
Considerations for responsible AI
In addition to optimizing the user experience with the bot, you should consider how your bot's implementation relates to principles for responsible AI development. Microsoft provides guidance for responsible bot development at https://www.microsoft.com/research/publication/responsible-bots, describing ten guidelines for developers of conversational AI solutions. These guidelines include:
Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
Be transparent about the fact that you use bots as part of your product or service.
Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot's competence.
Design your bot so that it respects relevant cultural norms and guards against misuse.
Ensure your bot is reliable.
Ensure your bot treats people fairly.
Ensure your bot respects user privacy.
Ensure your bot handles data securely.
Ensure your bot is accessible.
Accept responsibility for your bot's operation and how it affects people.
Designing Conversation Flow
The conversational flow in a bot deals with how a user interacts with the bot as a sequence of activities. You design your conversational flow using different libraries in the Bot Framework SDK, which provide different options for constructing the flow of the conversation.
In all but the most simple cases, your bot will likely make use of dialogs to implement multi-turn conversations in which the bot gathers information from the user, storing state between turns. Commonly, a bot interaction begins with a root dialog in which the user is welcomed and the initial conversation established, and then child dialogs are triggered.
A flow of dialogs
It may be useful to compare the flow of interactions in a “traditional” application to that of a bot. Consider a pizza ordering application.
In a traditional application, users tend to think of the interactions as a series of “screens” or "pages". For example, on a pizza ordering web site, the user may start on the Home screen. Next, the user may select an option to view the available pizza options, moving the application on to the Select Pizza screen, where the user can select and customize a pizza. Finally, the user may decide to check out, taking them to the Place Order screen where they can provide payment and delivery details.
A bot might follow a similar sequential pattern in which each “screen” is replaced by a dialog that gathers the required information before moving the user along to the next stage.
The important thing is to consider the purpose of your bot - what should it help the user achieve? Then design a conversation flow based on dialogs that will gather the required information and get to a resolution efficiently.
Designing for Interruptions
In the previous topic, you learned about the importance of designing a conversation flow based on dialogs. There are many kinds of dialog, depending on the specific type of conversational interaction you want to implement. For example, you can use a waterfall dialog to guide the user through a sequential series of activities, or for greater flexibility you may want to use an adaptive dialog, which can better handle unexpected input as an interruption to the programmed flow of the conversation.
For example, our pizza ordering customer might be in the Place Order dialog, ready to place an order; and then decide to add another pizza, change the selected pizza size or toppings, or cancel the order altogether and start again.
To handle this kind of situation, you can implement an adaptive dialog that enables you to handle the interruption and redirect the flow of the conversation, maintaining state so that relevant information that has already been gathered can be retained; or in some cases, restart the dialog (or the entire conversation), resetting state as appropriate.
Designing the User Experience
An important consideration for the user experience is how you present the bot and its components to the user. You can implement the following features in a bot:
Text - a typical interaction that is lightweight and involves presenting text to the user and having the user respond with text input.
Buttons - presenting the user with buttons from which to select options. In a pizza order bot, you might decide to use buttons to represent the available pizza sizes. Buttons are a visual way to represent choices to users and add more visual appeal than text alone.
Images - using images in the bot interaction adds a graphical appearance to the bot and can enhance the user experience.
Cards - allow you to present your users with a variety of visual, audio, and/or selectable messages, and help to guide the conversation flow.
There are some considerations to be aware of when it comes to adding these features. Different channels will render each of these components differently. If a channel doesn't support the feature, the user experience can be degraded due to poor rendering or functional impairments.
Text
Text input from users is parsed to determine the intent. It is possible to add natural language understanding to a bot, but careful consideration of language understanding is important. One of the main reasons concerns how different users will respond to a question. For example, your bot might ask "What is your name?". Users might respond with just their name, such as Terry. A user may also respond with a phrase, “My name is Terry”. If you want to personalize the conversation with follow-up prompts that include the user's name, your bot logic needs to parse the response and isolate the name from the rest of the text.
Careful planning could reveal a better design option in which the bot is more specific in its prompt. Your bot could prompt the user with "What is your first name?". This doesn't completely eliminate ambiguity, but it leans toward a more constrained response that may not require extensive parsing logic.
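For illustration, the parsing described above might look something like the following sketch. The phrase patterns are assumptions for the example; a real bot handling varied phrasing would need natural language understanding rather than a hand-written pattern list.

```python
import re

def extract_first_name(response: str) -> str:
    """Pull a first name out of a free-text reply to 'What is your first name?'.
    Handles a bare name and a few common phrasings; anything more varied
    would need natural language understanding."""
    text = response.strip().rstrip(".")
    # Strip common lead-in phrases such as "My name is ..." (illustrative list).
    match = re.match(r"(?i)^(?:my name is|i am|i'm|it's|it is)\s+(.+)$", text)
    if match:
        text = match.group(1)
    # Keep only the first word, normalized to an initial capital.
    return text.split()[0].capitalize()

print(extract_first_name("Terry"))
print(extract_first_name("My name is Terry"))
```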
Your bot can integrate different cognitive services to aid in language understanding, keyword or phrase detection, and sentiment analysis. These features make your bot more “intelligent”, but they can also introduce response-time delays if too many services are invoked for each response. Essentially, the less processing required on the user input, the less chance of misinterpretation and the better the bot's performance. The following are considerations for text input recommended by Microsoft:
Whenever possible, ask specific questions that will not require natural language understanding capabilities to parse the response. This will simplify your bot and increase the likelihood that your bot will understand the user.
Designing a bot to require specific commands from the user can often provide a good user experience while also eliminating the need for natural language understanding capability.
If you are designing a bot that will answer questions based on structured or unstructured data from databases, web pages, or documents, consider using technologies that are designed specifically to address this scenario rather than attempting to solve the problem with natural language understanding.
When building natural language models, do not assume that users will provide all the required information in their initial query. Design your bot to specifically request the information it requires, guiding the user to provide that information by asking a series of questions, if necessary.
Speech
You can design your bot to take advantage of speech input and output. You may decide that your bot application needs to support speech if it will be accessed from devices that do not contain keyboards or monitors. You may also design your bot for users with differing abilities to interact with computing devices.
Using speech will require your bot to use the Speech cognitive service to transcribe spoken input to text for the bot to act on, and then synthesize the text responses to speech as output.
Rich user controls
Buttons, images, carousels, and menus are examples of rich user controls. The advantages of using these types of controls with your bot are that they:
provide a more guided experience with the bot.
emulate an application. Users are familiar with using applications on their computers or devices, so rich controls make the bot use more “natural”.
present the user with discrete choices, resulting in less ambiguity and misinterpretation by the bot's logic.
are easier to use on mobile devices, where typing text is not optimal or is less preferred by users.
Cards
Cards allow you to present your users with a variety of visual, audio, and/or selectable messages, and help to guide the conversation flow. Cards are programmable objects containing standardized collections of rich user controls. An advantage of cards is that they are recognized across a wide range of channels. Examples of cards include:
Adaptive cards: An open card exchange format rendered as a JSON object. Typically used for cross-channel deployment of cards. Cards adapt to the look and feel of each host channel.
Audio cards: A card that can play audio files. This card could be helpful in a bot that interacts with users who have visual impairments.
Animation cards: This type of card can play animated GIFs or short video files, for example to depict actions or status indicators.
Hero cards: A card that contains a single large image, one or more buttons, and text. Typically used to visually highlight a potential user selection.
Thumbnail cards: A card that contains a single thumbnail image, one or more buttons, and text. Typically used to visually highlight the buttons for a potential user selection.
Receipt cards: If users are able to purchase items with your bot, you can use a Receipt card to provide a transaction record for the user. The receipt can contain the items purchased, unit price, taxes, and totals.
SignIn card: A card that enables a bot to request that a user sign-in. It typically contains text and one or more buttons that the user can select to initiate the sign-in process.
SuggestedAction card: The SuggestedAction card gives the user a discrete, context-aware set of options from which to choose. The actions presented relate to the next action the user needs to take, rather than being generic. The card disappears once any of the suggested actions is selected.
Video card: A card that can play videos. Typically used to open a URL and stream an available video.
Card carousel: A horizontally scrollable collection of cards that allows your user to easily view a series of possible user choices.
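As a minimal sketch of how cards reach a channel, the examples below build two card attachments as plain JSON. The `contentType` values are the standard Bot Framework attachment types; the card contents (titles, image URL, button labels) are illustrative placeholders, not part of any real bot.

```python
import json

def hero_card_attachment(title, image_url, button_titles):
    """Build a Hero card attachment: one large image, text, and buttons.
    Each button uses an 'imBack' action, which sends its value back to
    the bot as if the user had typed it."""
    return {
        "contentType": "application/vnd.microsoft.card.hero",
        "content": {
            "title": title,
            "images": [{"url": image_url}],
            "buttons": [
                {"type": "imBack", "title": t, "value": t} for t in button_titles
            ],
        },
    }

def adaptive_card_attachment(body_text):
    """Build an Adaptive Card attachment: an open JSON card format that
    adapts to the look and feel of the host channel."""
    return {
        "contentType": "application/vnd.microsoft.card.adaptive",
        "content": {
            "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
            "type": "AdaptiveCard",
            "version": "1.3",
            "body": [{"type": "TextBlock", "text": body_text, "wrap": True}],
        },
    }

# Placeholder content for illustration only.
card = hero_card_attachment(
    "Season tickets", "https://example.com/logo.png", ["Section A", "Section B"]
)
print(json.dumps(card, indent=2))
```

In a real bot, dictionaries like these would be added to the `attachments` array of an outgoing message activity; the channel then renders each card according to its `contentType`.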
Recommendations for choosing the experience options
The following table highlights some considerations for designing the user experience through choices on the elements your bot uses. The table is not exhaustive but offers some insights around decision making for a good user experience.
Bot Scenario | User Experience Aspects | Rationale |
Pizza Order Bot | Text, SuggestedAction, Adaptive card, Receipt card | Text can be used for the initial greeting and some prompting, as well as input for special instructions. The SuggestedAction card can help constrain the user to legitimate choices. The Adaptive card could present the final order with details and an image of the ordered pizza. Finally, the Receipt card can provide the user with the order receipt for their records. |
Flight Booking | Text, SuggestedAction, Adaptive card | Text input can allow the user to enter items such as the destination city and number of passengers. Use the SuggestedAction card to display a list of acceptable airports for a destination. The Adaptive card can display the flight details for the user to verify before making the purchase. |
Sporting Events | Hero card, Adaptive card, SuggestedAction | The Hero card can display a list of sporting events for the user's location, with graphics representing team logos or perhaps a seating chart for the user to select from. The Adaptive card can serve as a visual validation of the seats ordered and the event details; users could also print the Adaptive card layout as proof of purchase. The SuggestedAction card can constrain choices to available sections, ticket quantities, and event dates. |
Many of these bot scenarios could leverage some of the other controls listed in this section. The user experience design is up to you. Apply the conversation flow and navigation principles that you learned earlier, along with these rich controls, to create a bot that users will want to interact with.
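To make one of these scenarios concrete, the sketch below builds a message activity with suggested actions for the pizza-ordering case. The pizza sizes and prompt text are placeholder values; the `suggestedActions` structure follows the standard Bot Framework activity schema.

```python
import json

def suggested_actions_message(text, options):
    """Build a message activity whose suggested actions constrain the
    user to a discrete set of choices. The buttons disappear after the
    user selects one, keeping the conversation context-aware."""
    return {
        "type": "message",
        "text": text,
        "suggestedActions": {
            "actions": [
                {"type": "imBack", "title": o, "value": o} for o in options
            ]
        },
    }

# Placeholder prompt and options for the pizza-order scenario.
msg = suggested_actions_message(
    "What size pizza would you like?", ["Small", "Medium", "Large"]
)
print(json.dumps(msg, indent=2))
```

Because each action echoes its value back to the bot, the bot's logic only ever receives one of the legitimate choices, reducing the ambiguity that free-text input would introduce.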
Bot Framework Composer
Bot Framework Composer is a visual designer that lets you quickly and easily build sophisticated conversational bots without writing code. The composer is an open-source tool that presents a visual canvas for building bots. It uses the latest SDK features so you can build sophisticated bots with relative ease.
Using the Bot Framework Composer presents some advantages when compared to creating a bot with the Bot Framework SDK and writing code.
Use of Adaptive Dialogs allows for Language Generation (LG), which can simplify interruption handling and give bots character.
Visual design surface in Composer eliminates the need for boilerplate code and makes bot development more accessible.
Time saved with fewer steps to set up your environment.
Composer bot projects contain reusable assets, such as dialogs, language understanding (LU) training data, and message templates, in the form of JSON and Markdown files that can be bundled and packaged with a bot's source code. These files can be checked into source control systems and deployed along with code updates.
LAB
One of the most common conversational scenarios is providing support through a knowledge base of frequently asked questions (FAQs). Many organizations publish FAQs as documents or web pages, which works well for a small set of question and answer pairs, but large documents can be difficult and time-consuming to search.
QnA Maker is a cognitive service that enables you to create a knowledge base of question and answer pairs that can be queried using natural language input, and is most commonly used as a resource that a bot can use to look up answers to questions submitted by users.
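A published knowledge base is queried by POSTing a question to its `generateAnswer` REST endpoint. The sketch below constructs such a request without sending it; the endpoint URL, knowledge base ID, and endpoint key are placeholders that would come from the knowledge base's publish page in the QnA Maker portal.

```python
import json
import urllib.request

def build_generate_answer_request(endpoint, kb_id, endpoint_key, question):
    """Construct (but do not send) a generateAnswer request for a
    published QnA Maker knowledge base. The service matches the natural
    language question against the question-and-answer pairs and returns
    the best-scoring answers as JSON."""
    url = f"{endpoint}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
    body = json.dumps({"question": question, "top": 1}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"EndpointKey {endpoint_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder endpoint, knowledge base ID, and key for illustration.
req = build_generate_answer_request(
    "https://my-qna.azurewebsites.net",
    "00000000-0000-0000-0000-000000000000",
    "<endpoint-key>",
    "What are your opening hours?",
)
print(req.full_url)
```

A client bot would send this request (for example with `urllib.request.urlopen`) and read the matched answers, each with a confidence score, from the JSON response.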
In this lab, we will be using the Managed QnA Maker, which is a feature within Text Analytics.