Host Jeremy Snyder and Diana started by discussing WiCyS, short for Women in CyberSecurity, an organization that promotes the recruitment, retention, and advancement of women in cybersecurity. We also discussed the use of Artificial Intelligence and Machine Learning in cybersecurity and how they can help analysts, managers, and those in charge of tracking and hunting down attacks. We then explored the broader implications of this technology, such as its application in healthcare, national security, and financial services, and how data provenance can be used to protect organizations.
In the related blog below, we quickly examine the potential threats that AI and ML present in cybersecurity.
Are AI and ML Threats to Cybersecurity?
We’ve all heard of Artificial Intelligence (AI) and Machine Learning (ML) for a while, but truth be told, not many of us have had the opportunity to work (or experiment and play) with either.
Not till ChatGPT was made publicly available, that is.
Overnight, AI became the most talked-about topic in town. It's everywhere, from physical locales like markets and cafes to online communities like Facebook, Twitter, and LinkedIn.
So far, the discussions have been encouraging, or at least that’s what I’ve seen, heard, and read. Most are talking about using AI and ML for the betterment of the human race, including cybersecurity.
However, new technologies are neutral in and of themselves, and it is inevitable that both sides of the moral divide will soon put them to use. The bad news is that it has already happened.
There were at least two news articles about this in February 2023, including one reported by BleepingComputer: hackers have even hacked ChatGPT itself, bypassing some of the chatbot's restrictions in order to sell illicit services on underground forums.
In this environment, it might be helpful to look at some ways AI and ML can be used to launch attacks, so we are aware of the possible attacks heading our way.
Automated Attacks

If I were a hacker who wanted to maximize my income with as little labor as possible, I would use AI and ML to launch automated attacks, deploying AI-powered bots to scan, test, and exploit vulnerabilities in your company's network.

It is therefore not surprising that today, with the increased public availability of such tools, threat actors are using AI to create more sophisticated bots that identify and exploit vulnerabilities faster and more efficiently.

For example, the Mirai botnet automatically scanned the internet for unsecured IoT devices and conscripted them into distributed denial-of-service (DDoS) attacks. Mirai itself relied on simple scripted scanning and default credentials rather than AI, but it shows the scale automated attacks can reach, and ML stands to make that kind of scanning far more adaptive.
Social Engineering Attacks
Social engineering attacks trick us into giving up sensitive information or granting access to systems.
Here, ML and AI can generate compelling phishing emails that mimic the style and tone of legitimate correspondence. Meanwhile, AI-powered chatbots can impersonate bosses, immediate superiors, fellow employees, or customer service representatives, increasing the likelihood that a victim will trust the attacker and hand over sensitive information.
When it comes to phishing emails, ML algorithms can analyze your social media activity to create a highly targeted phishing email or text message that appears to be from a trusted source. The message may contain personalized details that make it difficult for you to distinguish it from legitimate communication.
In fact, I shudder to think how much more complex and convincing these attacks will become if they are launched in conjunction with deepfakes. Deepfake technology has already proven capable of creating fake videos and audio recordings that can deceive even experienced security professionals.
Malware Attacks

Some programmers have already demonstrated that ChatGPT is capable of helping them generate useful code, some of it sophisticated enough to build a startup around!

Similarly, threat actors can use ML and AI to develop highly sophisticated malware that bypasses traditional security measures. ML algorithms can be used to analyze security systems for exploitable vulnerabilities, or to create new malware that evades detection.

For example, an attacker could use ML algorithms to generate highly targeted spear-phishing emails tailored to individual employees within a company. These emails may carry malware that slips past traditional defenses by disguising itself as benign files or processes, and AI-powered malware can even probe for and bypass the very security measures designed to detect it.
DeepLocker, a proof-of-concept malware developed by IBM researchers, uses AI to identify specific targets based on biometric information such as a person's face or voice. It remains dormant until it recognizes its target, at which point it launches its attack.
Denial of Service Attacks
Denial of Service (DoS) attacks are a common tactic cybercriminals use to overwhelm a company's servers and prevent legitimate users from accessing their systems.
ML and AI can be used to create more sophisticated DoS attacks that can adapt to changes in your company's defenses. For example, an attacker could use ML algorithms to analyze your company's defenses and develop an attack specifically designed to evade those defenses.
Distributed Denial of Service (DDoS) Attacks
Threat actors can also use ML and AI to launch more effective Distributed Denial of Service (DDoS) attacks.
For example, attackers can use ML algorithms to predict the best time to launch an attack and identify the most effective attack vectors against a specific target. AI can also identify vulnerabilities in a target's network that can be exploited to launch more effective DDoS attacks.
Data Breaches

We can also look forward, with some dread, to more sophisticated data breaches.
For example, attackers can use ML algorithms to analyze your network traffic and identify vulnerabilities that can be exploited to gain access to sensitive data. AI can also bypass security measures by using advanced techniques such as deep learning to identify patterns in a company's data that can be used to evade detection.
Advanced Persistent Threats (APTs)
APTs are cyber attacks that employ a persistent and targeted effort to gain unauthorized access to your company's systems. With ML and AI, threat actors may now be able to develop highly sophisticated APTs that can remain undetected for long periods.
For example, an attacker could use AI to analyze your company's network traffic to identify patterns and develop a targeted attack tailored to your company's vulnerabilities.
Fraud Attacks

Threat actors can also use ML and AI to launch fraud attacks against companies.
For example, a threat actor could use ML algorithms to analyze your company's financial transactions to identify patterns that indicate fraud. Once they have identified a vulnerability, they can use AI to develop a more sophisticated attack to bypass your company's fraud detection systems.
Password Cracking

ML and AI have demonstrated that they are effective tools for cracking passwords, even long ones. Sure, longer and more complicated passwords take more time to crack, but not everyone uses them.

Threat actors are now automating this process, using machine learning algorithms to predict passwords based on common patterns and personal information. For example, Hashcat, a popular password-cracking tool, optimizes its guessing strategy using statistical models (such as Markov chains) of common password patterns.
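To see why password length and character variety matter so much, here is a quick back-of-envelope sketch. The guess rate of 10 billion per second is an illustrative assumption, not a benchmark of any particular tool:

```python
# Back-of-envelope: how password length and character set grow the
# brute-force search space, at an assumed fixed guess rate.

def crack_time_seconds(charset_size: int, length: int,
                       guesses_per_second: float = 1e10) -> float:
    """Worst-case time to exhaust the keyspace at the given guess rate."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second

# 8 lowercase letters: 26^8 ≈ 2.1e11 guesses -- about 21 seconds
print(f"{crack_time_seconds(26, 8):.0f} s")

# 14 printable-ASCII chars: 94^14 ≈ 4.2e27 guesses -- billions of years
years = crack_time_seconds(94, 14) / (3600 * 24 * 365)
print(f"{years:.2e} years")
```

Of course, real attacks rarely brute-force the full keyspace; pattern-based guessing is exactly what makes ML-assisted cracking dangerous against human-chosen passwords.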
On the upside, using passwords alone for authentication and verification will soon be a thing of the past. Multi-factor authentication may quickly become the de facto standard and a requirement in all organizations, and that's a good thing.
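For a sense of what that second factor looks like under the hood, the time-based one-time passwords produced by most authenticator apps can be generated with a few lines of standard-library Python. This is a minimal sketch of RFC 6238, not production code:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59 yields "287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59))  # prints 287082
```

Because the code is derived from a shared secret plus the current time, a phished password alone is no longer enough to log in.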
Like most technologies, Machine Learning and Artificial Intelligence are powerful tools that have transformed many industries and lives, but they can be used for good and nefarious purposes.
There is no doubt that threat actors are already leveraging these technologies, and will continue to do so, to launch more sophisticated and targeted cyber attacks on organizations.
Therefore, you must start taking steps to protect yourself against them, such as implementing advanced security measures like AI-based security tools that can detect and respond to AI-based threats in real time.
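As a toy illustration of the anomaly-detection idea behind such tools, here is a sketch that flags request-rate spikes against a rolling baseline. The window size and z-score threshold are arbitrary assumptions of mine; real AI-based security products use far richer models:

```python
from statistics import mean, stdev

def find_anomalies(rates, window=10, threshold=3.0):
    """Flag indices whose rate exceeds baseline mean + threshold * stdev."""
    flagged = []
    for i in range(window, len(rates)):
        baseline = rates[i - window:i]          # rolling window of recent rates
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (rates[i] - mu) / sigma > threshold:
            flagged.append(i)                    # statistically abnormal spike
    return flagged

# Steady traffic around 100 req/s, then a sudden flood
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 2500]
print(find_anomalies(traffic))  # prints [10]
```

The principle scales up: the more a defensive model learns what "normal" looks like for your network, the faster it can spot AI-driven attacks that deviate from it.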