By now, you have undoubtedly heard of ChatGPT, and maybe even tried it out for yourself.
Yes, that Artificial Intelligence (AI) chatbot everybody is talking about.
Everyone is jumping on the bandwagon. Various content creators on YouTube and other social media platforms have been lauding the bot, and some have even used it to build pretty nifty applications. One presenter even used it to write code for his startup idea.
It does seem that possibilities are endless.
Is ChatGPT a boon or bane for cybersecurity?
How about cybersecurity? Can ChatGPT be used to help you better secure your crown jewels and your organization?
On the flip side, what new dangers are we looking at and facing with the advent of new technologies like ChatGPT?
Let’s look at these together, starting with some advantages ChatGPT and AI bring to the table.
How ChatGPT can be a boon for cybersecurity
Detecting Phishing Attacks
Phishing campaigns are commonplace today, and even a moderately successful campaign can give malicious actors access to your sensitive data.
In November 2022, Security Magazine reported 255 million phishing attacks in the first 11 months of the year, already a whopping 61 percent jump in phishing attacks from 2021.
This is hardly surprising, given that most organizations today are more aware of cyber threats and have implemented various instruments, such as Cloud Security Posture Management (CSPM) tools like Horangi Warden, and other measures to safeguard their data.
However, malicious actors, like everyone else, know that humans are the weakest link in the security chain. Ultimately, humans hold the keys to access sensitive data, and this access is, at best, protected by a password and a multi-factor authentication system.
However, even this approach has proven vulnerable. In a series of phishing attacks in late 2022, human targets were inundated with phone calls pressuring them to check and reveal their 2FA codes, and the malicious actors were ultimately given access to sensitive data. They just needed to apply pressure and oodles of patience.
Phishing messages are also getting more sophisticated. Some malicious actors even go as far as to study a target’s interests and personal life to write very targeted phishing emails to gain access to sensitive data. After all, the payoff is worth the time and effort!
We human beings fall victim to these phishing attacks all the time. Yes, you may scoff at the idea, but I’m sure you’ve had some close calls.
Think of instances where you decided to act on an email request or believe something you read online because you were tired, didn’t check who the sender was, assumed who it was from, or were too busy to do so.
ChatGPT is built on large language models and can use what it has learned to help organizations recognize and flag phishing messages before they even land in the recipients’ inboxes.
This can significantly reduce the chances of a successful phishing campaign, but there are pitfalls. For example, some legitimate emails may be flagged as phishing, much like how you sometimes have to tell Gmail that an email is not spam.
Of course, as with most technologies, ChatGPT will only improve as end users help it learn which emails are legitimate and which aren’t.
Cybersecurity professionals can also leverage ChatGPT to train phishing detection systems to recognize patterns and language associated with these attacks. This can significantly improve the efficiency and effectiveness of phishing detection systems.
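As a rough sketch of how LLM-assisted phishing triage could work, the helpers below wrap an incoming email in a classification prompt and interpret a one-word verdict from the model. The prompt wording and the one-word-verdict convention are my own illustrative assumptions, and the actual request to a model endpoint is omitted:

```python
# Sketch of LLM-assisted phishing triage. The prompt format and verdict
# convention are illustrative assumptions, not any vendor's API.

def build_phishing_prompt(subject: str, body: str) -> str:
    """Wrap an incoming email in a classification prompt for the model."""
    return (
        "You are an email security assistant. Reply with exactly one word, "
        "PHISHING or LEGITIMATE, for the following message.\n\n"
        f"Subject: {subject}\n\nBody:\n{body}"
    )

def parse_verdict(reply: str) -> bool:
    """Interpret the model's one-word reply; True means flag the email."""
    return reply.strip().upper().startswith("PHISHING")

print(parse_verdict("PHISHING"))    # True: quarantine or flag the email
print(parse_verdict("legitimate"))  # False: deliver normally
```

In practice the prompt would be sent to a model endpoint, and the parsed verdict would feed into your mail gateway's quarantine logic alongside existing spam scoring.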
Now, you might have noticed ChatGPT’s propensity for learning and thought, hey, this could also be used in nefarious ways, such as crafting these phishing emails. Well, you are very observant, and you aren’t wrong! We’ll discuss that in the section on how ChatGPT can be a bane for cybersecurity.
Enhancing Threat Intelligence
Having excellent and reliable threat intelligence is integral to a comprehensive threat detection and mitigation strategy.
Most of the time, threat intelligence comes from notable intelligence providers who may or may not be privy to the unique threats that your organization faces. In addition, your organization may have special security arrangements that the general market does not cater to.
ChatGPT is a gift for such scenarios.
ChatGPT can generate large amounts of data that can be used for cyber threat intelligence and even simulate malicious actors in their attempts to penetrate your cyber defenses.
Even if yours is a more generic organization, you can still use ChatGPT to better understand the Tactics, Techniques, and Procedures (TTPs) that malicious actors may use to attack your organization.
Furthermore, using these newly acquired insights, your security team can revisit your incident response plans, identify new vulnerabilities you never knew existed or could exist, and ultimately work with your cybersecurity vendor to develop more effective security controls and defenses.
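To make this concrete, here is a minimal sketch of how you might post-process a TTP listing from the model into structured records your security team can review. The "Tactic | Technique | Mitigation" line format is an assumed convention you would request in your prompt, not anything ChatGPT enforces:

```python
# Turn a model's free-text TTP listing into structured review items.
# The pipe-delimited reply format is an assumption made in the prompt.
from dataclasses import dataclass

@dataclass
class TTPRecord:
    tactic: str
    technique: str
    mitigation: str

def parse_ttp_lines(text: str) -> list:
    """Parse 'Tactic | Technique | Mitigation' lines into records."""
    records = []
    for line in text.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            records.append(TTPRecord(*parts))
    return records

# A hypothetical model reply, formatted as requested in the prompt.
sample_reply = """Initial Access | Spearphishing attachment | User awareness training
Credential Access | MFA fatigue prompts | Number-matching MFA"""

for record in parse_ttp_lines(sample_reply):
    print(record.tactic, "->", record.technique)
```

Structured records like these can then be mapped against your incident response plans to spot gaps in coverage.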
Discovering New Vulnerabilities
Quite a few CISOs I know have said this in one form or another: to understand the vulnerabilities in your systems, sometimes you have to step into the malicious actors’ shoes and think the way they do.
It’s not a new idea.
Sun Tzu, a Chinese strategist with whom I am sure you are quite familiar, wrote this during the Eastern Zhou period in ancient China, between 771 and 256 BCE:
“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained, you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”
This is highly applicable to cybersecurity, but how can we realize it? Fortunately, ChatGPT can be used to help you discover new vulnerabilities in the software and systems that your organization utilizes.
For example, you can use ChatGPT to quickly generate large amounts of unique inputs that will allow you and your cybersecurity professionals to identify previously undetected and unknown vulnerabilities.
You can then use all the newly acquired knowledge and information to improve software and systems security, develop more effective security controls, or improve current security measures and practices.
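Here is a toy illustration of the "large amounts of unique inputs" idea: a loop that throws randomly generated inputs at a deliberately buggy parser and records which inputs crash it. `parse_header` is a made-up stand-in for the software under test; real-world fuzzing would target your own code with a dedicated tool such as AFL or Atheris:

```python
# Toy fuzzing loop. parse_header is a hypothetical, deliberately buggy
# target; everything here is illustrative, not a production fuzzer.
import random
import string

random.seed(1)  # deterministic for illustration

def parse_header(data: str) -> str:
    # Buggy stand-in: raises IndexError on empty input.
    return data.split(":")[1].strip() if ":" in data else data[0]

def random_input(max_len: int = 20) -> str:
    """Generate a random printable string of random length."""
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(target, iterations: int = 1000) -> list:
    """Run the target on random inputs; return the inputs that raised."""
    crashers = []
    for _ in range(iterations):
        data = random_input()
        try:
            target(data)
        except Exception:
            crashers.append(data)
    return crashers

crashes = fuzz(parse_header)
print(f"found {len(crashes)} crashing input(s)")
```

Each crashing input points at a code path that mishandles unexpected data, which is exactly the kind of previously unknown weakness described above.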
Improving Incident Response
While Incident Response remains a critical component of your cybersecurity strategy, those two words are easier said than done.
Firstly, it is difficult to accurately identify what caused a breach, data exposure, or cyber incident.
Secondly, whatever was identified could be something you or anyone in your organization has never encountered.
What then follows is a flurry of googling and asking around in the hopes of precisely identifying the threat and how to mitigate it.
ChatGPT can help here, even if only as a reaction rather than a pre-emptive measure: it can analyze large amounts of data from various sources, including network logs and system event logs, to identify patterns and trends associated with cyber incidents.
When utilized this way, ChatGPT becomes an immense asset to help cybersecurity professionals like yourself quickly and accurately identify and respond to cyber incidents, which helps to prevent data breaches and minimize the damage caused by cyber-attacks.
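As a small illustration of the pre-processing involved, the sketch below condenses raw authentication logs into per-source failure counts, the kind of compact summary you might hand to a model or an analyst during triage. The log format shown is a simplified assumption, not any product's actual schema:

```python
# Condense raw auth logs into per-IP failure counts for triage.
# The "... FAILED LOGIN user ip" line layout is an assumed format.
from collections import Counter

def failed_logins_by_ip(log_lines):
    """Count FAILED LOGIN events per source IP (IP assumed last field)."""
    counts = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            counts[line.rsplit(" ", 1)[-1]] += 1
    return counts

logs = [
    "2023-01-05 02:14:11 FAILED LOGIN admin 203.0.113.9",
    "2023-01-05 02:14:13 FAILED LOGIN admin 203.0.113.9",
    "2023-01-05 02:15:02 LOGIN OK alice 198.51.100.4",
    "2023-01-05 02:14:15 FAILED LOGIN admin 203.0.113.9",
]

# Surface sources with repeated failures for closer review.
suspicious = [ip for ip, n in failed_logins_by_ip(logs).items() if n >= 3]
print(suspicious)  # ['203.0.113.9']
```

Summaries like this keep the volume of data manageable while preserving the pattern (repeated failures from one source) that signals a possible incident.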
How ChatGPT can be a bane for cybersecurity
We’ve all lived long enough to understand and realize that everything offers advantages and disadvantages.
So far, we have looked at some opportunities ChatGPT presents for cybersecurity professionals like us. However, there are pitfalls too.
Phishing and Malware
Remember how I mentioned earlier that ChatGPT is a neutral tool, one that can enhance your cybersecurity measures but can also cause untold damage in the hands of nefarious actors?
Here’s an example of how that can play out: a malicious actor feeds ChatGPT all the information collected on a target to quickly generate a myriad of unique, targeted messages designed to solicit sensitive information from you.
ChatGPT can generate convincing and realistic social engineering attacks that can trick anyone into providing sensitive information or taking actions that can compromise your organization.
And should the malicious actors desire, they can also use the information they have collected to plant malware in your systems.
Furthermore, with the information they have collected, there’s a high chance that they would use ChatGPT to write the malware and make it hard to detect while causing significant damage to you and your organization.
Automating Cyberattacks
ChatGPT can automate myriad tasks, and you might be disappointed to learn that cyberattacks are one of them.
Yes, malicious actors already automate attacks, but ChatGPT will take them to the stratosphere by allowing them to generate enormous amounts of unique attacks and thereby launch coordinated and targeted attacks on an immense scale.
Not only does this present potential economies of scale for malicious actors, but it may also increase the chances of effective attacks and make them harder to detect and, therefore, more difficult for cybersecurity professionals to defend against.
While ChatGPT presents many opportunities for cybersecurity professionals like you to augment and improve your defenses against cyberattacks, it also presents dangers and risks when used by malicious actors.
That said, ChatGPT has the potential to revolutionize the field of cybersecurity. It is, therefore, essential for you to keep abreast of the capabilities of these models and develop strategies to defend against their misuse.
There’s no clear indication whether ChatGPT will be a bane or boon for cybersecurity professionals. Only time will tell; it will also depend on which side of the fence is more innovative and quicker to adapt and utilize this new technology.
At the end of the day, as cybersecurity professionals, we must be acutely aware and remember that the key to effectively defending against new and old threats is continuous monitoring, learning, and adapting to the ever-evolving cyber landscape.