Meta takes a risk and releases BlenderBot, a next-generation type of AI chatbot

Conversational AI is one of the hottest trends of the decade. A form of advanced artificial intelligence, it enables people to hold lifelike dialogue with a computer application. Chatbots are a common form of conversational AI, while Sophia is today’s most well-known humanoid robot. As AI technology advances, Meta’s AI research lab has joined the competition.

The company is developing a next-generation chatbot. Currently under development and only available in the US, BlenderBot 3 can chat with people in real time, conversing spontaneously about food recipes, health, and other general topics.

A next-generation LLM (large language model) bot

The bot is designed to answer basic questions, much like Google, but Meta’s plans for the project are quite ambitious. The prototype is an LLM (large language model) bot trained on enormous text datasets. According to Meta, over 70,000 demo conversations have already been collected from users. By learning statistical patterns in that text, BlenderBot is able to generate language.
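The idea of generating language from statistical patterns can be illustrated with a toy bigram model: count which word tends to follow which in a training corpus, then sample from those counts. This is only a minimal sketch of the underlying concept; BlenderBot itself uses large neural networks trained on vastly bigger datasets, and the corpus below is invented for illustration.

```python
# Toy bigram language model: learn word-to-next-word patterns, then generate.
# Illustrative only -- real LLMs like BlenderBot use neural networks.
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count which words follow which in the training text."""
    follows = defaultdict(list)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur].append(nxt)
    return follows

def generate(follows, start, length=5, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(random.choice(options))
    return " ".join(out)

model = train_bigrams("the bot answers questions and the bot searches the web")
print(generate(model, "the"))
```

Every word pair the generator emits was seen in training, which is the sense in which such models reproduce statistical patterns rather than "understanding" text.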

Although the system has significant flaws that Meta is currently addressing, such as training-data bias and invented answers to people’s questions, it comes with tremendous benefits. The ability to converse about a topic by searching for information online is just one of them. To verify accuracy, users can check the sources BlenderBot cites after receiving a response.

Solving major challenges linked to language models

Following the initial pilot launch, Meta aims to collect user feedback and address major concerns related to LLMs in general. Users can flag inaccurate responses, enabling Meta to work on eliminating culturally insensitive comments, vulgar language, and slurs from the system. Historically, launching prototype chatbots hasn’t been a smart move for tech companies seeking to impress.

When Microsoft released Tay on Twitter, users coached the chatbot into regurgitating a stream of profanities and racist statements. Within 24 hours, Microsoft had to disable Tay due to the controversy it created. Things have changed since then, and Meta believes its AI can do a lot better. As a precaution, the company added numerous safety layers to the software to prevent BlenderBot from repeating Tay’s mistakes.

LLM chatbots are predisposed to controversy

Large language model chatbots are, in general, considered controversial. After the Tay incident, companies were afraid to release them publicly. Unlike Tay, which learned from real-time user interaction, BlenderBot is a static model: conversations are remembered, but the collected data will be used only to improve the system.

Most chatbots available today have limited capabilities. They’re task-oriented, and the best examples are customer service bots. Their dialogue trees are preprogrammed, which means users often fail to get their issues resolved and end up requesting a live human agent to discuss their problems further.
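A preprogrammed dialogue tree like the ones behind task-oriented bots can be sketched in a few lines: every user choice is authored in advance, and anything off-script dead-ends at a human agent. The node names and replies below are hypothetical, chosen only to show the pattern.

```python
# Minimal sketch of a preprogrammed customer-service dialogue tree.
# All nodes and replies are hypothetical; unscripted input falls through
# to a live agent, illustrating why such bots feel limited.
DIALOGUE_TREE = {
    "start": {
        "prompt": "How can I help? (billing / shipping)",
        "options": {"billing": "billing", "shipping": "shipping"},
    },
    "billing": {
        "prompt": "Is this about a refund? (yes / no)",
        "options": {"yes": "refund", "no": "agent"},
    },
    "shipping": {"prompt": "Your order is on its way.", "options": {}},
    "refund": {"prompt": "A refund has been requested.", "options": {}},
    "agent": {"prompt": "Connecting you to a live agent...", "options": {}},
}

def respond(node, user_input):
    """Follow the scripted tree; fall back to a human for anything unscripted."""
    options = DIALOGUE_TREE[node]["options"]
    next_node = options.get(user_input.strip().lower(), "agent")
    return next_node, DIALOGUE_TREE[next_node]["prompt"]
```

Because the tree only recognizes the exact phrases it was written with, any free-form question routes straight to the "agent" node — the limitation BlenderBot’s free-ranging approach is meant to overcome.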

Meta’s mission with BlenderBot is to develop a more accurate system capable of free-ranging conversation. The company wants dialogue to feel natural, and to achieve that it needs users to pinpoint errors. To push the research even further, Meta announced that the training dataset and code would be published openly on the web, in the hopes of perfecting the technology with the help of the global developer community.