The AI that has the world buzzing, known as LaMDA (Language Model for Dialogue Applications), is a system trained on reams of material scraped from the internet to power chatbots (AI programs built to interact with people), employing algorithms to answer queries. What makes it special is that it makes the chatbot's responses sound as fluid and natural as possible, so much so that you might even mistake it for a real human.
What makes it special?
It’s one thing to have a robot answer specific questions by providing information. An AI like this, however, can talk about a broad range of topics and keep the conversation going. Google has long used machine learning to improve its products, but something as unique as human language is very hard to replicate.
Advancements in these and other areas of AI have made it progressively easier to organize and access the massive amounts of data generated by the written and spoken word over time.
However, there is always room for improvement. Language is incredibly versatile and subtle. It might be literal or metaphorical, flowery or straightforward, imaginative or informative. Language is one of humanity’s greatest tools, and one of computer science’s most challenging riddles, precisely because of that versatility. So is LaMDA the answer to this riddle, or just a new puzzle piece?
Keep the conversation going
While LaMDA does not cover every area of AI, it does wonders with conversation. Its verbal abilities have taken years to develop. It is built on Transformer, a neural network architecture developed by Google in 2017. This design produces a model that can be trained to read a large number of words (a sentence or paragraph, for example), pay attention to how those words relate to one another, and then predict which words are likely to come next.
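To make that "pays attention to words" idea concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside Transformer-style models. This is purely illustrative: the function name, the toy vectors, and the tiny dimensions are assumptions for the example, and a production model like LaMDA stacks many such layers over learned embeddings trained on dialogue data.

```python
# Illustrative sketch only: scaled dot-product attention, the mechanism
# a Transformer uses to score how strongly each word relates to the others.
# Names and toy data here are invented for the example, not from LaMDA.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Blend each word's value vector according to how strongly it
    relates to every other word (the attention weights)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relatedness scores
    # Softmax each row so the scores become a probability distribution
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy "sentence" of 3 words, each a 4-dimensional embedding (random here;
# a real model would use learned embeddings).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)

# Each row of `w` sums to 1: for every word, a distribution over
# which other words it attends to when predicting what comes next.
print(w.sum(axis=-1))
```

In a full model, the output vectors feed further layers that ultimately score every word in the vocabulary, and the highest-scoring word becomes the predicted continuation.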
Because LaMDA’s main feature is dialogue, during its training it picked up on many of the characteristics that separate open-ended conversation from other kinds of language. Sensibleness is one of those nuances: a response is not simple information feedback; it has to make sense in the context of the conversation.
There is still a lot of testing to be done, as Google wants to see whether LaMDA can be witty and answer in an unexpected and insightful way, beyond the obvious responses.
While people may worry that artificial intelligence could become sentient and start a movie-style robot rebellion, the reality is much simpler. There are legitimate concerns that LaMDA may be biased, or may mirror passive-aggressive responses or even hateful speech back at the person it is ‘speaking to’. That is why tests are still being conducted to see how much of the dialogue is mirrored back, and why Google wants to make sure the responses are factual. The last thing they want is a misleading source of information that people trust because they rely on it as an infallible machine.