The truth is, Artificial Intelligence has been around for decades. It was only after the launch of ChatGPT in late 2022 that it entered the mainstream and conversations intensified. Like most technologies, Artificial Intelligence is a double-edged sword, with perks and disadvantages that depend on how it is used. Today, we’ll be sharing 7 scary AI breakthroughs.

1. Predicting Image Geolocations (PIGEON)

In 2023, three Stanford University students created PIGEON, an AI program that predicts image geolocations. Initially, PIGEON was built to identify the locations of Google Street View images, but it turned out to also pinpoint the locations of personal photos with surprising accuracy.

PIGEON poses serious privacy risks because anyone with access to such a tool could identify another person’s location without consent. It could also be used for corporate tracking, government surveillance, and outright stalking. Due to these ethical concerns, its creators decided not to release PIGEON to the public.

2. Realistic Deepfakes for Rigging Elections and Fake News

AI has taken electoral malpractice to a whole new level, especially with the arrival of hyperrealistic deepfakes. Deepfakes are AI-generated images, audio, or video that convincingly mimic real people. The technology is now so good that viewers can no longer reliably distinguish real media from fake, and deepfakes can even be embedded in live video feeds.

Lately, these deepfakes have been used to influence general elections, with the recent one in Bangladesh being a prime example. Numerous deepfake videos of politicians spreading false messages have been uploaded to social media. As such, experts anticipate similar interference in the upcoming general elections in the United States.

Besides that, deepfakes can also be used to spread fake news and propaganda. In the digital age, news websites are fighting for traffic and viewership to the extent that some are willing to use deepfake technology to fabricate stories, letting unethical journalists churn out fake articles in minutes. The fact that no publicly available tool can reliably tell real news from fake makes this even more frightening.

3. Q* Being Smarter Than Humans

OpenAI can be considered “the face of Artificial Intelligence,” especially after reports emerged of Q*, pronounced Q-star, an internal project rumored to outperform humans in almost every aspect. While it’s true that AI still lags behind humans in areas such as creativity, empathy, and context-based reasoning, the gap is narrowing by the day.

If anything, the Q* AI breakthrough has reportedly shown the potential to outperform professionals such as scientists and intelligence experts. Although this is not necessarily a bad thing, there is a possibility that such a system could be weaponized to create deadly diseases or execute mass disinformation campaigns. Reports about Q* have also been closely linked to Sam Altman’s dismissal and reinstatement as OpenAI’s CEO.

4. AI Killer Drones

As countries worldwide rush to improve their internal and external security, the US government endorsed the responsible use of AI in the military in 2022. However, the problem with AI is that it lacks human judgment and can misinterpret situations. For instance, a 2021 UN report indicated that an AI drone may have attacked soldiers in Libya without seeking human input.

Beyond such risks, militaries already use AI-powered drones to track and identify targets in war. And it’s not just drones; AI is also being used to develop chemical weapons and killer robots. In the Russia-Ukraine war, Russia has deployed an AI drone called the ZALA KYB-UAV, which can identify and strike targets without being controlled by a soldier.

5. AI Influencers Who Lack a Moral Compass

Influencer marketing is at an all-time high: brands spend thousands of dollars hiring celebrities to endorse their products or services. This trend has led to the rise of AI influencers, and even though these virtual personas make marketing easier, they come with some risks.

AI influencers look so real that viewers can struggle to tell them apart from humans. To make matters worse, they are not guided by ethics or morals, so they may go to extraordinary lengths to sell or advertise their owners’ products or services. They also blur the line between real and virtual personalities, and consumers have a right to know whether they are interacting with a real person or an AI creation.

6. AI Text-to-Speech Technology to Commit Fraud

Like AI-generated video, text-to-speech technology comes with its fair share of risks. First, it can be used to create the voice deepfakes discussed above, making it a tool for defamation, misinformation, and shaping public perception.

Text-to-speech technology can also be used to impersonate people and commit fraud. Scammers have recently been using cloned voices to fake phone calls from family members or financial institutions such as banks.

7. AI Takeover of Human Jobs

So far, AI has already taken over human jobs in customer service, data analysis, and writing. Many companies and organizations are replacing human workers with AI systems that are more efficient and affordable. As the technology keeps advancing, some forecasts warn that AI-driven job losses could cause mass unemployment, and even an economic crisis, by 2030.


Last Update: June 11, 2024