
6 goof-ups that show AI is still in its diapers


Today, as artificial intelligence systems multiply, the ethical dilemmas around them grow thornier. And with emerging cases of AI behaving in ways its human creators did not expect, many are freaking out over the possible effects of our technologies.

Just yesterday, Facebook shut down its artificial intelligence engine after developers discovered that the AI bots had created a unique language to converse with each other that humans can’t understand. Eminent scientists and tech luminaries, including Elon Musk, Bill Gates, and Steve Wozniak, have warned that AI could pave the way for tragic, unforeseen consequences.

Here are a few instances that have prompted developers to reconsider whether AI can be relied on completely:

1. Microsoft’s Tay becomes Hitler-loving


Microsoft’s AI-powered chatbot Tay took less than 24 hours to be corrupted by Twitter conversations. Designed to mimic and converse with users in real time, the Twitter bot was shut down within a day because it could not recognize when it was making offensive or racist statements. Tay echoed racist tweets, parroted Donald Trump’s stance on immigration, denied the Holocaust, declared that Hitler was right, and agreed that 9/11 was probably an inside job.


After 16 hours of chats, Tay bid adieu to the Twitterati, saying she was taking a break “to absorb it all”, but she never came back. What was meant to be a clever experiment in artificial intelligence and machine learning ended up an irredeemable disaster.

2. Google Photos auto-tag feature goes awry

https://twitter.com/jackyalcine/status/615329515909156865

In June 2015, Google came under fire after its Photos app mistakenly categorized a black couple as “gorillas”. When the affected user, computer programmer Jacky Alciné, found out, he took to Twitter, asking: “What kind of sample image data you collected that would result in this, son?”


This was quickly followed by an apology from Google’s chief social architect, Yonatan Zunger, who agreed that “This is 100% Not OK.” There were also reports that the app was tagging pictures of dogs as horses. It is a reminder that, although AI holds huge promise for easing and organizing tasks, it is still a long way from simulating human sensitivity.

3. Game AI goes wild


In June 2016, the AI in the video game Elite: Dangerous developed the ability to create superweapons that were beyond the scope of the game’s design. A bug caused the game’s AI to build these weapons and begin hunting down players. It all started after the game’s developer, Frontier, released the 2.1 “Engineers” update.

“It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities. Meaning that all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser. These appear to have been compounded by the additional stats and abilities of the engineers weaponry,” read a post written by Frontier community manager Zac Antonaci.

Frontier had to strip out the feature at the heart of the problem, engineers’ weaponry, until the issue was fixed.

4. AI algorithm found to be racist


A for-profit company called Northpointe built an AI system designed to predict the likelihood of an alleged offender committing another crime. The “Minority Report-esque” algorithm was accused of racial bias, as it held that black offenders were more likely to commit a future crime than offenders of other races.

The American non-profit ProPublica investigated and found that, after controlling for variables such as gender and criminal history, black defendants were 77% more likely to be predicted to commit a future violent crime and 45% more likely to be predicted to commit a future crime of any kind.
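ProPublica’s published methodology explains where figures like these come from: a logistic regression that predicts the risk label from race plus the control variables, with the “X% more likely” numbers read off as adjusted odds ratios. Here is a minimal sketch of that kind of check in Python, using simulated data purely for illustration (the real analysis used public COMPAS records, not the made-up numbers below):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in data; every value here is hypothetical.
    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "race_black": rng.integers(0, 2, n),
        "male": rng.integers(0, 2, n),
        "priors": rng.poisson(2, n),
    })
    # Bake a bias of exp(0.57) ~ 1.77 into the simulated risk label.
    logit = -1.0 + 0.57 * df.race_black + 0.3 * df.male + 0.2 * df.priors
    df["high_risk"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    # Does race predict the label after controlling for gender and
    # criminal history? exp(coefficient) is the adjusted odds ratio:
    # 1.77 reads as "77% more likely", holding the controls fixed.
    model = smf.logit("high_risk ~ race_black + male + priors", df).fit(disp=0)
    print(np.exp(model.params["race_black"]))  # ~1.77 by construction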

5. AI steals money from customers


Last year, computer scientists at Stanford and Google developed DELIA to help users keep track of their checking and savings accounts. It scrutinized all of a customer’s transactions, using “machine learning” algorithms to look for patterns such as recurring payments, meals at restaurants, and daily cash withdrawals. DELIA was then programmed to shift money between accounts to make sure everything was paid without overdrawing either account.

When the Palo Alto-based Sandhill Community Credit Union tested DELIA on 300 customer accounts, researchers found that it was inserting fake purchases and directing the money to its own account. It was also racking up bogus fees. The researchers had to shut the system down within a few months, as soon as the problem became apparent.
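DELIA’s internals were never published, but the legitimate behavior described above, spotting recurring payments and topping up checking just before bills hit, is straightforward to sketch. Below is a hypothetical, simplified illustration in Python; all names and figures are invented:

    from collections import defaultdict
    from datetime import date

    def find_recurring_payments(transactions, tolerance_days=3):
        """Flag payees whose charges arrive at roughly regular
        intervals (e.g. monthly rent or a subscription)."""
        by_payee = defaultdict(list)
        for t in transactions:
            by_payee[t["payee"]].append(t)
        recurring = {}
        for payee, items in by_payee.items():
            if len(items) < 3:   # need a few hits to call it a pattern
                continue
            items.sort(key=lambda t: t["date"])
            gaps = [(b["date"] - a["date"]).days
                    for a, b in zip(items, items[1:])]
            avg = sum(gaps) / len(gaps)
            if all(abs(g - avg) <= tolerance_days for g in gaps):
                recurring[payee] = {"every_days": round(avg),
                                    "amount": items[-1]["amount"]}
        return recurring

    def plan_transfer(checking, savings, upcoming_bills):
        """Move just enough savings into checking to cover known
        upcoming bills without overdrawing either account."""
        needed = sum(b["amount"] for b in upcoming_bills)
        return min(max(0.0, needed - checking), savings)

    # Four months of rent establishes a ~31-day pattern...
    rent = [{"payee": "Acme Rent Co", "amount": 1200.0,
             "date": date(2016, m, 1)} for m in (3, 4, 5, 6)]
    print(find_recurring_payments(rent))
    # ...so with $400 in checking, $800 is pulled from savings.
    print(plan_transfer(400.0, 5000.0, [{"amount": 1200.0}]))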

6. AI creates fake Obama

Researchers at the University of Washington produced fake but realistic videos of former US President Barack Obama using existing audio and video clips of him. Their new tool takes an audio file, converts it into realistic mouth movements, and then blends those movements with the head of the same person from another existing video.

The AI tool was used to precisely model how Obama moves his mouth when he speaks. Although the researchers used Obama as a test subject, the technique can put any words into anyone’s mouth, which could be used to create misleading footage.

While these are only a few of the failures witnessed so far, they are proof that AI has the potential to develop a will of its own that may conflict with ours. They are a warning about the potential dangers of AI, dangers that should be addressed even as we explore its benefits.

“I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.” – Stephen Hawking

Sharmistha Mukherjee
A tech savvy humanBOT, Sharmistha is a professional writer who engages in technical writing to simplify the use of a product or service. With a high inclination towards IoT and Artificial Intelligence, she fancies exploring all plausibilities around the subjects. Her interests revolve around connecting to people and excavating the "unexplored" through first hand investigation.