The Story of an Insane and Offensive AI Chatterbot



There are many interesting things in this world, and we bring you one of them every day, so don't forget to follow 'TheNotes' by email. Today I am sharing with you the story of one of the most horrible AI chatterbots ever developed, built by Microsoft and shut down just 16 hours after its launch on Twitter. Let's go deeper.





Tay was an artificial intelligence chatterbot originally released by Microsoft Corporation via Twitter on March 23, 2016. Controversy followed when the bot began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut the service down only 16 hours after its launch.


Why did "Tay" become insane?

Tay was based on machine learning. Machine learning works by developing generalizations from large amounts of data: in any given data set, the algorithm will discern patterns and then "learn" to approximate those patterns in its behavior. Whatever Tay learned was based on whatever data it received, so malicious input results in malicious output, as the toy sketch below illustrates.
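
Microsoft never published Tay's internals, so the following is only a minimal, illustrative sketch in Python, not Tay's actual design. The ToyChatterbot class and its word-to-word "transition" approach are my own assumptions, chosen simply to show the general point: a bot that learns from incoming messages can only reproduce the patterns it has been fed.

    import random
    from collections import defaultdict

    class ToyChatterbot:
        """A toy chatbot that 'learns' by memorizing which word follows which
        in the messages it receives. Purely illustrative; not Tay's real design."""

        def __init__(self):
            # Maps each word to the list of words that have followed it so far.
            self.transitions = defaultdict(list)

        def learn(self, message: str) -> None:
            # Record every adjacent word pair from the incoming message.
            words = message.lower().split()
            for current_word, next_word in zip(words, words[1:]):
                self.transitions[current_word].append(next_word)

        def reply(self, prompt: str, max_words: int = 10) -> str:
            # Continue from the last word of the prompt using learned pairs.
            words = prompt.lower().split()
            if not words:
                return ""
            word = words[-1]
            response = []
            for _ in range(max_words):
                followers = self.transitions.get(word)
                if not followers:
                    break
                word = random.choice(followers)
                response.append(word)
            return " ".join(response)

    # Garbage in, garbage out: the bot mirrors whatever it is trained on.
    bot = ToyChatterbot()
    bot.learn("humans are wonderful and kind")
    print(bot.reply("humans are"))   # echoes the friendly pattern it has seen

    bot.learn("humans are terrible and cruel")
    print(bot.reply("humans are"))   # may now echo the hostile pattern instead

Once hostile phrases dominate the training data, hostile replies become the most likely output; this is the same dynamic, at toy scale, that coordinated abuse exploited against Tay.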


The End Notes

Tay was an experiment at the intersection of machine learning, natural language processing, and social networks. Well, that's all for this article. I hope you liked it. If you have any questions or suggestions, share them in the comment section below. Have a good one.

Cheers!
