Evil intelligence! 5 big AI DANGERS that can really hurt you badly



In many ways, the introduction of OpenAI’s ChatGPT and other GPT-based tools has become a watershed moment for technology. Artificial intelligence can now not just understand basic commands and respond to them; it can also reason over diverse sets of information and propose solutions to problems. This makes AI powerful, perhaps too powerful for the good of humanity, and a lot of people are raising their voices around this concern. Recently, the Future of Life Institute started a petition to pause large-scale AI development until a better regulatory framework can be created. Notable signatories include Elon Musk, Apple cofounder Steve Wozniak, and Turing Award-winning AI pioneer Yoshua Bengio, among others. So, what exactly are the risks people are worried about? Here are 5 threats of AI that you too should be aware of.

5 risks of AI

1. The risk of misinformation: Content-generating AI chatbots can churn out large amounts of text in a very short time, beating human output by a wide margin. But in a world already struggling with fake news and misinformation, this capability can be very dangerous. Bad actors can exploit it to flood the internet with false content, and the effect will be amplified on social media platforms that are infested with bots. This may already be happening, and it may be depriving you of the truth on certain critical topics.

But that’s not even the full extent of it. AI platforms that generate photos and videos also exist, although they are still in their infancy. With higher rendering ability and better underlying models, they will be capable of creating entirely fabricated photos and videos to supplement written misinformation. We are already witnessing the early stages of this problem in the form of deepfakes.


2. Job automation and unemployment: Another widely debated issue is that the rise of AI could trigger massive layoffs by making many jobs redundant. Data entry clerks, customer service agents, proofreaders, paralegals, bookkeepers, translators, copywriters, social media managers and more could be replaced with the help of artificial intelligence. A report by Goldman Sachs estimated that AI could affect as many as 300 million jobs globally.

3. The question around privacy: This is probably the least discussed risk of AI, but it has significant implications for people’s lives. Many big corporations hold vast amounts of user data as a result of the services they provide, and several have already had to answer charges over erosion of privacy. The fear is that AI can process these huge datasets to build terrifyingly accurate user profiles, used not only to target ads but also to mimic the behavioral patterns most likely to nudge a user into making a purchase.

In a world where this happens, consumers manipulated by AI would effectively lose the ability to make choices for themselves.

4. Weapons and wars: Much military equipment today, such as drones and missile systems, is controlled by AI. But this is a very limited role in which AI is not allowed to bypass the strict orders given to it. If AI’s role were expanded to the point where it could make life-and-death decisions without human input, it would create a massive problem that could easily escalate into major wars.


5. The security threat: In 2020, the UK government commissioned a report which argued that AI was necessary in cybersecurity defenses to detect and mitigate threats faster than humans can. While the intention is good, this also puts AI in control of human security online, letting it decide what is and is not a threat and then act to mitigate it. The question that arises is whether trading scrutiny for efficiency is the right move.

As researchers work on developing artificial intelligence further, regulators have a role to play in determining where and when to restrict the access of AI.
