Of all the C-suite positions grappling with the practical adoption of GenAI, CISOs are in a particularly precarious position.
Every day, a dozen new tools (or rebranded ones) built on black-box models are sold to budget holders across the enterprise. Meanwhile, external threat actors have a powerful new weapon at their disposal, raising the stakes of the security arms race.
Attack activity is growing in both volume and sophistication as accessible compute, global digitization, and, yes, GenAI multiply one another's effects. At the same time, GenAI is also part of the solution, given a strained talent pool of security experts.
In this article, we explore both sides of the coin: the risks and benefits that GenAI poses for CISOs. We start with a brief overview of opportunities that security and IT leaders will need to keep top of mind over the next 12 months. We then narrow down into discussions of the threat landscape, in addition to defensive tooling.
GenAI code improves developer productivity
It’s important to set the context of why your organization will want to use GenAI in the first place.
GenAI, in short, allows humans to more rapidly create the digital artifacts they need. For the cases most relevant to information security, we’ll focus on GenAI’s ability to generate code.
Estimates of productivity gains among developers vary widely depending on the source, from as low as 10% all the way up to 43% for “lower performers”. This variation is likely driven by the type of organization and the coding tasks at hand. Keep in mind that even a low-bound 10% improvement to engineering-wide development efforts is a massive upside.
To correctly support developer productivity — and the consequential growth to your organization’s top-line performance — you must have a clear understanding of threats that GenAI (and code in particular) can create for your organization. Let’s start there.
The threat landscape for GenAI code: What CISOs must know
An often overlooked component of a CISO’s role is protecting against insider threats. For the purposes of this section, we’ll exclude threats arising from genuinely malicious actors (e.g. corporate espionage). Instead, we’ll focus on security risks that present themselves as an understandable byproduct of developers using GenAI code to enhance their programming workflows.
1. Data leakage
The first and most obvious insider threat is data leakage. High-profile case studies abound, and these are just the tip of the proverbial iceberg. A small selection of data that can improperly leave the organization through (often unintentional) negligence includes:
- Proprietary code
- Strategic plans and initiative names
- Advertising copy
- Org charts and reporting hierarchies
- Data sets
- PII or other personal data, such as headshots or contact information
2. Vulnerable code
A key concern among security experts is the risk of a GenAI providing seemingly innocuous code that introduces security vulnerabilities. Much work has been done by firms developing models to reduce this threat. However, models have been trained on both educational/example malicious code and undiscovered vulnerable snippets that exist in various public repositories.
Most malicious code requires aggressive prompt manipulation to elicit from high-tier LLMs. Vulnerable code is a different story entirely, especially given the constantly shifting nature of threats to software.
Any code with a GenAI Bill of Materials (which will describe most codebases) should be consistently analyzed with just as much rigor as human-written software. At a minimum, a GBOM needs to record which parts of a codebase were generated or modified by AI tools, by which models, and who reviewed the output.
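A GBOM is not yet a standardized format, so as a rough sketch (field names here are illustrative assumptions, not a published schema), an entry might record which files and lines a model touched and who reviewed them:

```python
from dataclasses import dataclass

# A hypothetical, minimal GenAI Bill of Materials (GBOM) entry.
# Field names are illustrative, not a published standard.
@dataclass
class GBOMEntry:
    file_path: str    # file containing AI-assisted code
    line_range: tuple  # (start, end) lines attributed to GenAI
    model: str        # tool/model that produced the code
    reviewed_by: str  # human who reviewed the generated code
    review_date: str  # ISO date of the security review

def coverage_gaps(entries, all_files):
    """Return files with no GBOM attribution at all."""
    attributed = {e.file_path for e in entries}
    return sorted(set(all_files) - attributed)

entries = [
    GBOMEntry("src/auth.py", (10, 42), "gpt-4", "alice", "2024-05-01"),
]
print(coverage_gaps(entries, ["src/auth.py", "src/billing.py"]))
# prints ['src/billing.py']
```

Even this much structure lets a security team answer the basic audit question: which files carry AI-generated code that nobody has attributed or reviewed?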
3. Impersonation
A unique and very real risk presented by GenAI is that of impersonation. The ability to “deepfake” an individual has evolved from mimicking writing style to voices, to full-fledged video.
A particular challenge is that leaders, more than ever, are also responsible for being public brands. Your C-level executives (including yourself!) have probably created a digital trail of audio and video sufficient to train a digital impersonator. Other members of your security or IT team may have done so as well via public social media presence.
Processes such as password resets or the registration of new employees during onboarding are particularly vulnerable. Multi-factor authentication, the use of passcodes, or requiring in-person presence or live video chat can at least start to mitigate some of this risk.
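One concrete layer for helpdesk flows such as password resets is a time-based one-time passcode. The sketch below implements RFC 6238 TOTP verification using only the standard library; a production deployment should use a vetted library plus rate limiting, and treat this as illustration only:

```python
import base64, hmac, hashlib, struct, time

# Minimal RFC 6238 TOTP check, as one layer of caller verification
# for helpdesk flows such as password resets.
def totp(secret_b32, for_time, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", for_time // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, submitted, now=None):
    now = int(time.time()) if now is None else now
    # Accept the current and adjacent time steps to absorb clock drift.
    return any(hmac.compare_digest(totp(secret_b32, now + drift), submitted)
               for drift in (-30, 0, 30))
```

A helpdesk agent would ask the caller to read the current code from their enrolled authenticator app and call `verify()` before proceeding with the reset.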
4. Spear phishing & target research
Any CISO who’s tracked the performance of a phishing training campaign can tell you the difference in effectiveness between a traditional phish and a spear phish. When an attacker is armed with details about their target, the level of threat posed by phishing is radically altered.
- Click rates ramp up to more than 53%, roughly 3x that of a traditional campaign
- Sources vary but suggest that somewhere between 90% and 95% of successful attacks come from spear phishing.
Individual attackers can take multiple approaches. For extremely high-value targets, groups can leverage GenAI for high-quality target research and use manual human oversight to maximize effectiveness.
Perhaps more dangerous, groups can automate “good” (as opposed to great) research and individualization for campaigns. Grammar and tone can be polished in many languages, eliminating a typical weak spot of most phishing attacks.
5. Rapid development of malware and exploits
The same improvements in effectiveness that are available to traditional developers today are also available to those developing malware and exploits. Below-average attackers have the ability to vastly improve their position, and experienced malware developers can quickly analyze codebases for vulnerabilities with the latest tools.
Getting ahead of the problem
Once you have a clear understanding of which threats are applicable to your organization, and their magnitude, you can start preparing to deal with them.
Your first responsibility, before driving tasks to completion, is that of an educator. You are ultimately responsible for proactively informing your executive peers about GenAI risks.
At the nexus between technology/productivity and safety/security, you are the ultimate broker of your company’s posture and trajectory. The risks include both the obvious first-order consequences of attacks and a second-order regulatory threat that is expanding quickly. Both must be managed.
Here are some resources to evaluate for your organization’s toolkit.
Today it seems as if every security tool offered to an operations or security group includes AI. The reality, however, is that the ability to use AI to defend against threats is still nascent.
Actual performance varies. Some implementations are the bare minimum required to market a capability, while others provide a rich feature set that continuously learns from threats.
Ironically, these tools may carry many of the same risks that other AI-powered platforms create elsewhere in the workplace. With this consideration in mind, when upgrading or selecting something that purports to use AI, inquire about your data sovereignty. Ask your legal counterpart to chime in, particularly to help comb through licensing agreements to see how your traffic and user pattern data will be handled.
Monitor and react
GenAI’s role as an asset to security teams starts at a network or environment’s boundaries. Phishing screens have become more robust as the volume of spam and phishing emails has exploded.
Models for user behavior can track insider threats as they emerge, and often serve as double-duty tools to track compromised accounts.
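As a toy illustration of the baselining idea behind such user-behavior models (real UEBA products weigh far more signals, such as geography, device, and data volume), consider flagging logins at hours that deviate sharply from a user's history:

```python
from statistics import mean, stdev

# Toy illustration of behavioral baselining: flag logins whose hour
# deviates sharply from a user's historical pattern.
def is_anomalous(history_hours, new_hour, threshold=3.0):
    if len(history_hours) < 5:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return new_hour != mu
    # Flag events more than `threshold` standard deviations from the mean.
    return abs(new_hour - mu) / sigma > threshold

baseline = [9, 10, 9, 11, 10, 9, 10]  # habitual 9-11am logins
print(is_anomalous(baseline, 3))   # 3am login -> True
print(is_anomalous(baseline, 10))  # typical hour -> False
```

The same score can do double duty: a sudden off-hours spike may indicate either an insider exfiltrating data or an attacker using a compromised account.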
A massive and growing threat vector continues to be the misconfiguration of cloud resources and a poor understanding of attack surface areas. CISOs should work with DevSecOps teams to develop or increase Infrastructure-as-Code capabilities, and deploy outside-in monitoring tools to protect themselves.
Integrating automated and continuous codebase scanning for internal use is a significant win. The barriers to entry here are lower than ever, as any mainstream DevOps tool is going to allow the integration of a scanner within minutes.
If your organization is just starting with this practice, focus first on in-flight development in modern languages and platforms. Expand use to software that handles sensitive data next (even if it’s legacy), and then consider applying it to slower-moving or more niche pipelines if your resources are available.
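To make the scanning idea concrete, here is a deliberately simple sketch of the kind of check such a pipeline runs; a real pipeline should use a dedicated SAST or secret-scanning tool, and the two patterns below are illustrative examples only:

```python
import re
from pathlib import Path

# Deliberately simple illustration of automated code scanning:
# flag hard-coded credentials in source text. Patterns are examples.
PATTERNS = {
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_text(name, text):
    """Return (file, line, rule) findings for one file's contents."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno, rule))
    return findings

def scan_paths(paths):
    findings = []
    for p in paths:
        findings.extend(scan_text(str(p), Path(p).read_text(errors="ignore")))
    return findings

sample = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"'
print(scan_text("app.py", sample))
```

Wired into a pre-commit hook or CI step, a check like this runs on every change, which is exactly the "continuous" property that matters when GenAI is producing code faster than humans can review it.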
Regulatory and compliance
While perhaps not directly under a CISO’s purview, your team may be expected to implement systems to enforce compliance with regulatory and legal requirements. When it comes to GenAI, there is much to be concerned about.
GenAI has developed its “intelligence” using data produced by human actors and their content, much of which is copyrighted. A flurry of lawsuits is ongoing, and more will continue to flood in. GenAI creates some unique challenges for the legal system.
As with any major technological innovation, there are races to both profit from it and control it. The US has passed some baseline legislation, and the EU has followed suit with even more. Controlling the use of any AI model, including those that support GenAI, will be a joint responsibility of the technology team.
Suggested protective measures
With so much detail above about what the threats are and how to approach them, let’s get down to brass tacks. What should CISOs do next?
- Meet with your infrastructure team to see what tools you have at your disposal to control access to GenAI platforms. Larger, more conservative enterprises are blocklisting traffic from corporate networks to popular platforms’ URLs.
- Meet with Legal/Compliance to understand what may fall to you to help control, both now and in the future. Be prepared to educate them about the nature of incoming risks.
- Open a dialogue with your CTO around the balance between governance and evolution. While the risks of unfettered access to GenAI are high, so too are the risks of blanket bans and the throttling of innovation. Evaluate what capabilities can be developed in-house to strike this balance.
- Inform your operations teams about the emergent threats created by GenAI. Your team’s baseline quality for phishing test campaigns or patching SLAs may need to be upgraded in this new world.
- Investigate the use of new or updated tools to meet the challenge of new threats. These may include those that specialize in detecting and flagging GenAI content in your internal software supply chain, or in the organization’s communications channels. It also may include additional identity verification processes for your IT support teams.
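The blocklisting approach mentioned above is typically enforced at the proxy or DNS layer; as a sketch of the policy check involved (the domains here are placeholders, and your team would maintain the real list):

```python
from urllib.parse import urlparse

# Illustrative proxy-style policy check: block requests to known GenAI
# platform domains. Placeholder domains; maintain your own list.
BLOCKED = {"genai.example.com", "chat.example.net"}

def is_blocked(url):
    host = (urlparse(url).hostname or "").lower()
    # Match the listed domain itself and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKED)

print(is_blocked("https://chat.example.net/prompt"))     # True
print(is_blocked("https://intranet.corp.example/wiki"))  # False
```

Note that blanket blocking pushes usage to personal devices, which is one reason the governance conversation with your CTO matters as much as the technical control.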
Keeping track of global GenAI compliance standards
Periodically, Sema publishes a no-cost newsletter covering new developments in GenAI code compliance. The newsletter shares snapshots and excerpts from Sema’s GenAI Code Compliance Database. Topics include recent highlights of regulations, lawsuits, stakeholder requirements, mandatory standards, and optional compliance standards. The scope is global.
About Sema Technologies, Inc.
Sema is the leader in comprehensive codebase scans with over $1T of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced and highly productive engineering teams. You can learn more about our solution by contacting us here.
Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.