Mark Lanterman | June 16, 2023
ChatGPT has been dominating our newsfeeds for the past few months, and understandably so. With its unprecedented rise, organizations are trying to strike a balance between its undeniable benefits and its enormous potential for risk. As far as cyber risk goes, artificial intelligence (AI) has been described by many as a double-edged sword.
On the one hand, burgeoning AI capabilities hold considerable promise for automating cyber security practices. From technical vulnerability scanning and detection to mitigation and response, current advancements in AI are continuing to positively shape security postures. On the other side of the coin, however, is the unfortunate reality that cyber criminals are equally energized by the conveniences afforded by AI. Think of phishing scams that rely on digital communication to trick users into urgently clicking a link, initiating a wire transfer, or sharing confidential information. Oftentimes, the cyber criminal will exploit the human element as the easiest point of vulnerability.
AI applications, particularly ChatGPT, can be used to quickly create any number of sophisticated phishing messages, tailored to any number of targets. ChatGPT has also been shown to assist cyber criminals in creating malware.1
Cyber security experts are required to be adept risk managers, prioritizing technology benefits and risks based on the goals and values of their organizations. From this perspective, organizations need to consider the risk impact of any new technology they plan to incorporate. For many, this is proving to be a tricky aspect of implementing ChatGPT.
While AI itself is not new, ChatGPT's recent rise is, and the potential problems surrounding its use are only beginning to unfold. Organizations should be aware of how cyber criminals can take advantage of AI to improve the nature of their attacks. For example, to combat the increasingly sophisticated phishing attempts enabled by AI, organizations can take advantage of the same capabilities to filter out AI-generated emails. An article from Harvard Business Review states:
Ideally, IT infrastructure would integrate AI detection software, automatically screening and flagging emails that are AI-generated. Additionally, it's important for all employees to be routinely trained and retrained on the latest cybersecurity awareness and prevention skills, with specific attention paid to AI-supported phishing scams.2
Apart from the risks associated with improved phishing campaigns and cyber attacks, many organizations are also prohibiting the use of ChatGPT due to the potential for internal breaches. Companies like Apple and Amazon have banned employees from using ChatGPT over concerns regarding "how services like ChatGPT and Google's Bard store data shared with them on servers. The other complication stems from the fact that most chatbots and AI services rely on user inputs to train their models and may accidentally serve other users a company's proprietary data."3
Using tools like ChatGPT carries the risk of data being exposed by the technology itself, but there is also the additional risk posed by the insider threat: if strict guidelines are not communicated to employees, confidential data could be shared in AI conversations.
On May 16, 2023, OpenAI CEO Sam Altman testified at a hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, urging regulatory intervention and governmental cooperation to mitigate the risks of AI.4 He explained that, while AI comes with a lot of promise, there is also an equal amount of risk stemming from potential misuse. On a smaller scale, organizations will be tasked with developing and regularly reassessing their AI plans as part of their cyber security programs.
As with any new technology, cyber security experts need to consider AI in relation to their organization's risk appetite. From accounting for a possible increase in advanced phishing emails to establishing clear rules for employee usage to avoid unintentional data leaks, organizations should consistently assess and mitigate these threats. Advancements in AI can also be deployed proactively as a cyber security tool, one part of a robust cyber security program.
Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.
Footnotes