Artificial intelligence (AI) is a problem for cybersecurity


By cybersecurity we mean protection against digital attacks, and even in this field the hottest topic right now is artificial intelligence (AI).

Trend Micro makes this point in its latest annual “predictions report” for 2024.

Cybersecurity: the problem of artificial intelligence (AI)

The risks, as usual, are many and varied, but one area the report concentrates on is so-called generative artificial intelligence (GenAI).

From a cybersecurity perspective, generative AI will increase the sophistication and effectiveness of the social engineering lures that scammers produce to ensnare potential victims.

The Trend Micro report predicts that in 2024 voice cloning, already a powerful tool for identity theft and social engineering, will be at the center of targeted scams.

Beyond that, other AI-assisted techniques such as spear phishing, harpoon whaling, and virtual kidnappings are just the tip of the iceberg of the role generative AI can play in cybercriminal schemes.

Although WormGPT, a malicious tool built on a large language model (LLM), was shut down in 2023, the report predicts that more of its spawn will populate the dark web. Scammers will also keep finding new ways to exploit artificial intelligence for their criminal activities.

Furthermore, legislation to regulate the use of generative artificial intelligence has yet to be approved. It is therefore essential for defenders to implement zero trust policies and instill a vigilant mindset in their companies, so as not to fall prey to AI-based scams.

The problems related to blockchain

In addition to AI, problems could also arise from private blockchains.

The report argues that the blockchain will serve as a new hunting ground for scammers, especially the private blockchains that more and more companies are turning to in order to reduce costs.

In fact, private blockchains generally face fewer stress tests and do not reach the same level of resilience as decentralized, permissionless public blockchains.

The latter face constant attacks and, over time, withstand so many stress tests that they end up being secure. Cybercriminals will therefore probably prefer the less secure private ones.

In the crypto and Web3 field, Trend Micro also identifies threats related to decentralized autonomous organizations (DAOs) governed by self-executing smart contracts hosted on public blockchains.

Some problems of this kind have already been observed, for example criminals weaponizing smart contracts to add layers of complexity to cryptocurrency-related crimes against decentralized finance (DeFi) platforms.

The other problems

To these, the report adds threats related to artificial intelligence/machine learning (AI/ML) and to the cloud.

Machine learning always falls under the umbrella of AI, even if not necessarily under the generative artificial intelligence discussed above.

The cloud, for its part, means storing data remotely on third-party online platforms.

The report states that, as cloud adoption becomes increasingly critical, businesses must look far beyond malware and routine vulnerabilities in their computer systems.

It argues that in 2024 cloud environments will be a playground for worms custom-made to exploit cloud technologies, with misconfigurations serving as an easy entry point for attackers.
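To make the point concrete, here is a minimal sketch, assuming Python with the boto3 library and valid AWS credentials, of how a defender could enumerate one classic misconfiguration: S3 buckets whose access control list grants access to everyone. The script is an illustrative example, not something taken from the Trend Micro report.

```python
import boto3

# URI that AWS uses in ACL grants to represent "anyone on the internet"
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_public_buckets():
    """Return the names of S3 buckets whose ACL grants access to all users."""
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant.get("Grantee", {}).get("URI") == ALL_USERS_URI:
                public.append(bucket["Name"])
                break
    return public

if __name__ == "__main__":
    for name in find_public_buckets():
        print(f"Publicly accessible bucket: {name}")
```

Misconfigurations like these require no exploit at all, which is why the report treats them as an easy entry point.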

In addition, the possible poisoning of data stored in the cloud will make machine learning (ML) models vulnerable. A compromised ML model can open the door to the disclosure of confidential data, the execution of malicious instructions, or the delivery of distorted content, which in turn could lead to user dissatisfaction or legal repercussions.
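The following is a minimal sketch of what data poisoning means in practice, using scikit-learn on synthetic data (an assumption; the report names no tooling): the same classifier is trained once on clean labels and once on labels an attacker has partially flipped, and the poisoned model’s accuracy visibly degrades.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a cloud-hosted training set
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poisoning: an attacker with write access flips 30% of the training labels
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean model accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned model accuracy: {poisoned.score(X_test, y_test):.3f}")
```

The attacker never touches the model itself, only the data it learns from, which is exactly why poisoned cloud storage is enough to compromise downstream ML systems.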

Artificial Intelligence (AI) and scams: cybersecurity in danger

In the crypto field, the main problem probably will not be pure cyber attacks, which have always existed and always will, but which mainly target the most vulnerable systems.

The main problem could be scams, since scammers can greatly enhance their social engineering techniques thanks to GenAI.

In fact, several fake videos created with artificial intelligence, hardly recognizable as such, already circulated in 2023, in which unwitting celebrities appeared to promote actual scams.

Reproducing the image and voice of a famous person with AI, making them say whatever you want, is no longer so difficult, and the celebrity can intervene to block the spread of such videos only after they have already been published and distributed online.

The key point, still widely underestimated, is not recognizing whether a piece of digital content is authentic, but verifying which primary source first published and distributed it online.

If the primary source cannot be found, or turns out to be unreliable, the content should be ignored without even bothering to examine whether it is false. Only if the primary source is found and is reliable is the content worth examining, and if the source is credible, that alone may be enough to consider that it could be true.
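That rule can be read as a simple decision procedure. The sketch below encodes it in Python; the trusted-domain allowlist is a hypothetical placeholder, and actually tracing a piece of content back to its first publisher is left outside the sketch.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of outlets the reader considers reliable primary sources
TRUSTED_DOMAINS = {"trendmicro.com", "reuters.com", "apnews.com"}

def should_examine(primary_source_url):
    """Apply the rule above: examine content only if its primary source
    was found and that source is considered reliable."""
    if primary_source_url is None:
        return False  # no primary source found: ignore the content outright
    domain = urlparse(primary_source_url).netloc.lower().removeprefix("www.")
    return domain in TRUSTED_DOMAINS

# A traced video whose first publication point is a trusted outlet: worth examining
print(should_examine("https://www.trendmicro.com/en_us/research.html"))  # True
# Content with no identifiable primary source: ignored without further analysis
print(should_examine(None))  # False
```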

However, ordinary people almost never go through this process, and unfortunately neither do a good portion of today’s information professionals, whose main job should be precisely this: identifying and verifying sources.


