CCPA

Steps to Address the New California Audit Rule That Seeks to Reset Reasonable Security


The California Privacy Protection Agency (CPPA) has approved a new rule (Rule) requiring many companies that collect consumers’ personal information (PI) to complete a detailed annual cybersecurity audit. These audits must consider two dozen cybersecurity practices, representing the regulator’s redefinition of what constitutes “reasonable” cybersecurity efforts under California law.

The state’s detailed list of cybersecurity controls could emerge as a uniform baseline in the United States for what constitutes reasonable security, but “it sets a high bar. A lot of these controls on the list are not shocking to see, and they’re not rocket science. But, together, they form a higher standard than most other regulations require,” Perkins Coie partner Amelia Gerlicher told the Cybersecurity Law Report.

The audit reports will not be public but may be requested by regulators. Senior executives must certify to the CPPA that the company has satisfied all audit requirements.

While the deadline for audit reporting begins in 2028, practitioners suggest that companies complete a robust internal audit in 2026 to give ample time to improve on weak points in their cyber programs. With insights from Blank Rome, Perkins Coie, Polsinelli, and Shook Hardy & Bacon, this article sets out steps for companies to consider while conducting the recommended preparatory audits. It also examines less-standard cyber controls among California’s required measures, cost and timing concerns, and risks tied to the ultimate audit report.

See “Show Me the Data: How to Conduct Audits for Data Minimization” (Nov. 18, 2020).

Assessing Applicability and Inventory Data Flows

The Rule, which is a CCPA regulation, applies to businesses processing the sensitive PI of at least 50,000 consumers, or those deriving 50 percent of annual revenue from selling or sharing personal data. The mandate also applies to businesses that process the PI of 250,000 or more individuals and have at least $28 million in annual gross revenue.

The Rule takes effect January 1, 2026, but gives a long lead time. Businesses with more than $100 million in revenue must file audit certifications by April 1, 2028; those earning more than $50 million, by April 1, 2029; and the rest, by April 1, 2030.

The purpose of these audits is to protect personal data. As such, companies will need to inventory their data processing as well as their IT systems and security controls. They should make sure to consider and document the PI sitting in or transiting through cloud infrastructure, development pipelines, SaaS (software-as-a-service) platforms and storage repositories, among other locations.
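
By way of illustration, one way to structure such an inventory is to capture each data flow as a uniform record. The following is a minimal Python sketch; the field names and example values are assumptions for illustration, not terms drawn from the Rule.

    from dataclasses import dataclass, field

    @dataclass
    class DataFlowRecord:
        """One entry in a PI data-flow inventory (illustrative fields only)."""
        system: str            # e.g., an HR SaaS platform or analytics warehouse
        location: str          # cloud region, on-premises data center, etc.
        pi_categories: list    # e.g., ["name", "SSN", "precise geolocation"]
        is_sensitive: bool     # sensitive PI can trigger lower applicability thresholds
        purpose: str           # why the PI is processed
        retention_period: str  # how long the PI is kept
        security_controls: list = field(default_factory=list)

    inventory = [
        DataFlowRecord(
            system="HR SaaS platform",
            location="vendor cloud (U.S. region)",
            pi_categories=["name", "SSN", "salary"],
            is_sensitive=True,
            purpose="payroll processing",
            retention_period="7 years after termination",
            security_controls=["encryption at rest", "SSO with MFA"],
        ),
    ]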

See “Updating Compliance Programs to Address the CPPA’s Regulations on ADMT and Risk Assessments” (Sep. 17, 2025).

Conducting a Thorough Pre-Audit

California’s audit requirements differ from the approach of more established cyber audits. To address the high level of detail and strict review that California requires, companies should strongly consider a thorough dry run for the audit, experts who spoke to the Cybersecurity Law Report agreed. Below are suggested steps for tackling the process.

1) Set Goals and Expectations

“A pre-audit can help the company understand what this report is going to look like ultimately, and then remediate ahead of time,” before the real stakes begin, Gerlicher said.

The Rule gives the auditor a broad and deep mandate. The auditor must “opine on how well the company protects personal information,” and the audit report will include a detailed description of the company’s security gaps, Gerlicher noted. The auditor must be independent, but can be either internal or external.

“Pressure testing this California audit ahead of time is a really, really good idea. That way the formal audit is hopefully clean and there’s not much the company would have to do,” advised Blank Rome partner Phil Yannella. Those who coordinate pre-audit efforts, however, will need to build more time into their schedules, he told the Cybersecurity Law Report.

See our two-part series “Amendment to NYDFS Cyber Regulation Brings New Mandates”: Governance Provisions (Dec. 13, 2023), and First Compliance Steps (Jan. 3, 2024).

2) Address Privilege Issues

The Rule’s requirement around identifying security gaps has “always jumped out at us as the most problematic. Combined with the fact that the final audit report is not going to be privileged, that seems like a roadmap for all sorts of critics,” Gerlicher stressed.

Companies may be able to conduct the pre-audit under privilege by doing it for the purpose of obtaining legal advice about complying with this law and others. Thus, “legal should be involved in scoping the pre-audit and hiring the technical experts,” Gerlicher advised.

See our two-part series on cybersecurity practices for private equity sponsors and their portfolio companies: “Incident Prevention and Response” (Feb. 28, 2024), and “Due Diligence and Post-Acquisition Efforts” (Mar. 6, 2024).

3) Build On SOC 2 Audits

Companies may use other audits to fulfill their obligations under the Rule as long as they satisfy the Rule’s requirements. Many companies across industries have pursued certification under the SOC 2 standard (System and Organization Controls 2), but SOC 2 audits differ in approach and reporting detail from the California audits.

“One open question is how close the California review can get to the types of cybersecurity evaluations that people want to do anyway,” Gerlicher said. SOC 2 testing, for instance, lets a company define the specific controls to test. But the Rule flips that approach on its head by taking the choice away from the company and defining the controls that must be reviewed, she noted.

The most significant difference between the SOC 2 and CCPA audits is the review standard. SOC 2 “is an attestation exercise,” allowing the auditor to rely on the cyber team members to report what they have done, Polsinelli shareholder Laila Paszti told the Cybersecurity Law Report. “This [CCPA] audit is an evidence-based approach. The auditor has to validate [compliance],” she explained.

The California audits likely will add costs. “Many companies have to adhere to SOC 2 for contractual requirements or other reasons,” so they cannot simply swap one audit out for the other, Paszti pointed out. Companies may be able to get a head start by mapping the company’s SOC 2 report to the Rule’s roster of items, she added.

Many companies do not arrange for their SOC 2 audit reports to include gap assessments and remediation plans, which are both required to satisfy California’s requirements, Yannella highlighted. “Anyone who wants to use a third-party SOC 2 auditor for this audit has to be mindful of these additional requirements,” he stressed.

4) Anticipate Multiple Types of Costs

Companies should not delay addressing the California audits, as cyber teams will “need resources for technical implementations” for any areas or components that the auditor deems lacking, Shook Hardy & Bacon partner Colman McCarthy advised. “If you need an antivirus or endpoint detection tool, or additional firewalls, any of those cost money. Don’t skimp on the qualified personnel either,” he advised. Buying and launching additional security tools can take months, he cautioned.

The audits required by the Rule are beyond what many companies have experienced, so they should expect higher costs, warned Yannella.

As 2026 approaches, companies should plan security and compliance budgets appropriately with the Rule’s compliance deadline only two years away. “If you have issues with money now, you’re not going to have the ability to be flexible later,” as delays may raise costs, Gerlicher advised.

For some cyber chiefs, the costs and burdens of the audit might end up modest, Paszti posited. Some client companies “have been ahead of the ball because they were not looking for laws to come in to mandate their behavior,” she reported.

See “Updating Compliance Programs to Address the CPPA’s Regulations on ADMT and Risk Assessments” (Sep. 17, 2025).

5) Pull and Gather Evidence

A key novelty of the Rule is the need to generate the evidence for the auditor. A pre-audit coordinator launching the effort should expect patches of turbulence and pushback. “There is typically a lot of thrash in the process because the company has to figure out who can identify the evidence” for each control and arrange for them to generate it, Gerlicher observed.

Pulling evidence for each control can be involved, Paszti noted. For example, an auditor evaluating patching and vulnerability management might request three months of vulnerability scans to confirm how many critical- and high-severity vulnerabilities the company found – then ask for patch deployment logs to verify it addressed the vulnerabilities, she outlined.
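
To make Paszti’s example concrete, the sketch below cross-references scan findings against patch deployment logs to flag critical- and high-severity findings with no recorded remediation. The data shapes are hypothetical; real scanners and patch-management tools export different formats.

    # Hypothetical record shapes, keyed by CVE ID and host.
    scan_findings = [
        {"cve": "CVE-2025-0001", "severity": "critical", "host": "web-01"},
        {"cve": "CVE-2025-0002", "severity": "high", "host": "db-02"},
    ]
    patch_logs = [
        {"cve": "CVE-2025-0001", "host": "web-01", "deployed": "2025-11-02"},
    ]

    patched = {(p["cve"], p["host"]) for p in patch_logs}
    unremediated = [
        f for f in scan_findings
        if f["severity"] in ("critical", "high")
        and (f["cve"], f["host"]) not in patched
    ]
    for f in unremediated:
        print(f"Unremediated {f['severity']} finding {f['cve']} on {f['host']}")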

“Companies should stand up what I call an evidence library,” Paszti urged. This would include “policies, configurations, test results [and] remediation logs. Having those all will help simplify the audit process, and support any certifications,” she said. Companies also should consider collecting pre-audit risk assessments, and records of employee training and mitigation actions.
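
One way to keep such a library usable is a simple index mapping each control to its evidence artifacts and an owner, so nothing has to be hunted down mid-audit. The sketch below is illustrative only; the control names and file names are hypothetical, not drawn from the Rule’s text.

    evidence_library = {
        "multi-factor authentication": {
            "policies": ["access_control_policy_v4.pdf"],
            "configurations": ["idp_mfa_settings_export.json"],
            "test_results": ["mfa_coverage_report_2026Q1.csv"],
            "remediation_logs": [],
            "owner": "IAM team lead",
        },
        "vulnerability management": {
            "policies": ["patch_mgmt_sop_v2.pdf"],
            "configurations": ["scanner_schedule.yaml"],
            "test_results": ["monthly_scan_summaries/"],
            "remediation_logs": ["patch_deployment_log.csv"],
            "owner": "security operations",
        },
    }

    # Flag controls that still lack a category of supporting evidence.
    for control, artifacts in evidence_library.items():
        missing = [k for k, v in artifacts.items() if k != "owner" and not v]
        if missing:
            print(f"{control}: missing {', '.join(missing)}")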

Working to gather the documentation typically requires collaboration between the organization’s legal and technical teams, a useful preparation for when the stakes rise with the official audit, Paszti highlighted. The pre-audit effort may also reveal whether the company should expand its logging and recording of security activities.

See “More Regulators Accept New Tool to Streamline Companies’ Cyber Compliance” (Jan. 26, 2022).

6) Complete and Document a Gap Analysis

The Rule does not require a company to implement the entire list of controls it lays out for auditing. However, “companies should take [the Rule’s list of controls to audit] as strong guidance that this is what California regulators consider to be a good approach to their information security program,” so they at least should assess whether each component makes sense for them, McCarthy suggested.

Companies should take particular care to document their reasoning for adopting or forgoing each recommended control, Paszti suggested. “The auditors will decide which controls apply based on the business’s size, complexity and data sensitivity,” she said.

Before completing the analysis, organizations should emphasize the riskiest areas, Paszti advised. “Understand the company systems, how users interact with it, the risks and what controls would address those risks,” she urged.

A common approach for analyzing compliance gaps is to create a matrix or spreadsheet indicating, for each cyber control, its implementation status, the evidence of that implementation, whether it satisfies the chosen standard, what the company must do to bring the measure up to the standard, and the responsible parties.
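
As a minimal illustration of that matrix, each row below tracks one control against those columns; the controls, statuses and owners are hypothetical. Exporting to CSV keeps the tracker spreadsheet-friendly.

    import csv, io

    header = ["control", "status", "evidence", "meets_standard", "upgrade_needed", "owner"]
    gap_matrix = [
        ["encryption at rest", "implemented", "KMS config export", "yes", "none", "infra team"],
        ["network segmentation", "partial", "architecture diagram", "no", "isolate legacy VLANs", "network team"],
        ["vulnerability disclosure program", "not started", "none", "no", "stand up intake channel", "security team"],
    ]

    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(gap_matrix)
    print(buf.getvalue())  # paste into a spreadsheet for ongoing tracking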

See “Practical Compliance Implications From NYDFS’ Healthplex Settlement” (Sep. 17, 2025).

7) Address Less-Standard Controls

The Rule includes several controls beyond the standard package of requirements set forth in other state laws. Companies commonly are diligent with longstanding security measures like authentication, encryption, firewalls and access controls, Yannella observed. But “the audit controls for California include some adjacent areas of data management that are often weak points in cyber and data governance programs,” he said. The following are some areas of the Rule that may challenge companies.

  • Data Retention and Disposal of PI. Many companies have not gotten far with these tasks, despite their becoming mainstay requirements of both privacy and security laws, Yannella reported. “Companies generally know that data retention is really important. It’s on their to-do list, but it often gets ‘back-burnered’ because other emerging projects take everyone’s attention. This reg is going to push this issue to the front burner,” he observed.
  • Data Mapping. The Rule and Minnesota’s privacy law make data inventories mandatory – a change that may prompt companies to be more thorough or detailed than they previously have been, Gerlicher said. Cybersecurity teams may not associate data governance with the mapping task, McCarthy noted, but the company’s security benefits when it maps in detail “what data the organization has, where it is, what it is used for, how long the company retains it and when you get rid of it,” he explained.
  • Vulnerability Disclosure Programs. These have not typically been a focus of cybersecurity audits, Paszti noted. “Running a bug bounty or similar program is technically and operationally challenging if the company hasn’t done that before,” she said.
  • Network Segmentation. This is not a widespread requirement. “Network segmentation can be difficult, especially when the company has legacy environments or has integrated with acquired companies, which requires a significant network architecture redesign,” Paszti noted. Companies should not segment networks unthinkingly, McCarthy cautioned; segmentation decisions should turn on the sensitivity of the PI and where it resides.
  • Secure Software Development and Deployment. The Rule phrases these requirements broadly, without specifics. For the many companies that have not previously addressed this task, standards are available from the National Institute of Standards and Technology, the Center for Internet Security and others, McCarthy noted.

See “Cybersecurity Compliance Lessons From NYDFS’ Carnival Action” (Aug. 3, 2022).

8) Keep an Eye on AI’s Impact

Regulators are ready to scrutinize use of personal data for AI. However, the audit portion of the CCPA regulation does not mention this much-discussed concern or account for the wrinkles in AI security that have begun to challenge overall cyber compliance. “AI is a complicating development for the audits,” McCarthy said, and companies should consider whether they “have somebody who understands the technology enough to be able to assess the risk.”

One important step during a pre-audit is to examine the access controls around the AI use – “both who has access to it and what it has access to,” McCarthy proposed.

AI implementation is a wild card that could affect any of the Rule’s listed controls, Yannella cautioned. “Secure coding is going to be an issue with use of AI for coding purposes. The inventory of personal information and data retention” are other controls where AI use could have a significant impact.

See “Benchmarking AI Governance Practices and Challenges” (May 7, 2025).

Compliance Challenges and Considerations

Choosing and Booking an Auditor

Companies subject to the Rule must choose an independent and qualified auditor, and they should do so sooner rather than later. The Rule could create a timing bottleneck that makes it challenging to hire one, Gerlicher warned. “Eventually, we’re going to be in a world where every company subject to this Rule has to do their audit between January 1 and April 1 of every year,” she said.

If selecting an internal auditor, the lawyers should ensure the person reports to a non-cybersecurity executive or to the board, and should verify the auditor’s credentials and experience with privacy audits.

Preparing to Present Evidence That Might Be Questioned

The auditor will focus first on evidence that policies and procedures are in place, then whether the company has implemented them sufficiently, McCarthy said.

Prepare for an auditor to contest the evidence, McCarthy cautioned. For example, a cyber team might provide a screenshot showing a control’s implementation and make a team member available for an interview to confirm the measures taken. Yet the auditor might not be comfortable relying on that as evidence and may request more detailed records, he noted.

Assessing Potential Liability

“While these audits are not made public, that doesn’t mean that they can’t be subpoenaed later down the road if there was, heaven forbid, a breach,” Paszti said.

The audit’s itemization of gaps and remediation recommendations radiates “glaring liability,” Yannella observed.

Companies will need to present the audit results to senior leadership for the required executive attestation that the audit was sufficiently completed. The company submits only the attestation to the CPPA. Senior leaders hearing about the Rule are concerned about signing off on the audit, Gerlicher said. “There is still fear out there” because of regulators pursuing individual liability for executives in a few publicized cases, she reported.

See “Mitigating CISO Personal Liability Post-SolarWinds” (Feb. 14, 2024).

Staying Informed

The long lead time for submitting audits means requirements could change, so companies should monitor for CPPA updates and guidance. Companies should consider closing out their pre-audit effort by scheduling periodic refreshers or updates.

Companies preparing for this highly detailed and multi-layered audit should not lose sight of the big-picture reason to do it: threat mitigation and greater security. Covered companies must identify the risks, figure out how to manage them and implement protocols accordingly. Then, they “need a process to validate that they have implemented everything properly,” Paszti noted. Although the audit is a complicated and time-consuming burden, it also could be a beneficial process.

Artificial Intelligence

California’s Landmark AI Transparency Law: Compliance Considerations


Developers of frontier AI models face new transparency and safety obligations with the introduction of California’s Transparency in Frontier Artificial Intelligence Act (Senate Bill 53) (TFAIA), which was signed into law on September 29, 2025. It is the first law in the nation to specifically address frontier AI development.

The TFAIA, effective January 1, 2026, applies to developers of frontier models, which are defined by the amount of computing power used to train the model. “Frontier developers” are, in effect, developers of general-purpose AI models that were trained using high levels of computational power.

Primarily addressing catastrophic risks and critical safety incidents, the TFAIA imposes significant new requirements that demand prompt attention. Although few companies currently exceed the high technical threshold to qualify as frontier developers, that number is expected to grow rapidly, bringing far more companies into scope in the near future. Downstream businesses and users will be impacted as well.

This second installment in a two-part article series, with commentary from AI law practitioners and former regulators at Crowell & Moring, Jones Walker, Mayer Brown, Skadden and Womble Bond Dickinson, provides practical compliance considerations for companies as they prepare to fulfill the new law’s obligations. Part one discussed to whom the TFAIA applies and examined the law’s reporting requirements, protections, exceptions and penalties.

See “How to Address the Colorado AI Act’s ‘Complex Compliance Regime’” (Jun. 5, 2024).

TFAIA Compliance Considerations

TFAIA imposes four major sets of obligations, some of which apply to all frontier developers, while others only apply to a narrower subset of “large frontier developers.” As discussed in more detail in part one, the requirements include: (1) publication of a frontier AI framework (AI Framework) by large frontier developers; (2) publication of a transparency report (Transparency Report) by all frontier developers; (3) disclosure of safety incidents by all frontier developers; and (4) protections for whistleblowers.

Businesses that might meet TFAIA’s applicability thresholds should consider the following as they prepare to comply with the law.

Adopting a Compliance Mindset

A frontier developer that is covered by the statute should adopt a “compliance mindset” and use the “same muscles” that it would in other compliance contexts, Mayer Brown partner Stephen Lilley told the Cybersecurity Law Report. Documentation, appropriate recordkeeping and establishing open channels of communication among relevant stakeholders are key, he said. Given that AI companies are relatively new, this is going to be a bespoke process, he added, but the company’s legal department can play an important role in helping to formulate a “practical solution tailored to the realities of the business.”

The “greatest pain points” of the statute will require affected companies to adopt and report on general safety standards and principles, predicted Matthew Ferraro, a partner at Crowell & Moring and the former Senior Counselor for Cybersecurity and Emerging Technology to the Secretary of Homeland Security. This, in turn, will require an analysis of potential catastrophic harms and any mitigation measures. However, this is most likely “surmountable,” he noted, because the companies that are initially swept up by the law are probably already conducting the relevant analyses.

Ensuring Proper Documentation

Companies should do an audit of their current documentation, figure out with the business team and engineers what the statute requires and what their company is missing, and decide on a plan of action from there, said Tyler Bridegan, a partner at Womble Bond Dickinson and the former Director of Privacy and Technology Enforcement for the Texas AG’s Office. A company should be asking itself what the California DOJ would want to see if it requests those documents during the course of an investigation, he noted. If a practice does not currently align with the law, the company should document how it has been trying to comply, he added.

See “Benchmarking AI Governance Practices and Challenges” (May 7, 2025).

Gathering the Appropriate Team to Build the AI Framework

Among other things, TFAIA requires large frontier developers to publish an AI Framework describing how they incorporate best practices and relevant benchmarks, identify and mitigate risks, respond to critical safety incidents and institute internal governance practices.

In trying to formulate a frontier AI Framework, companies should seek to draw upon a diverse array of internal actors. The team should be “absolutely cross-functional,” advised Jason Loring, a partner at Jones Walker. The legal, information security and risk management teams should all be involved, he said.

Questions regarding items such as risk mitigation will, to a “large extent,” be answered by technical experts and others who “own” the risks associated with the frontier models, Lilley added.

The safety team should be the “owner” of the AI Framework, since the document is focused on transparency related to risks, and “that’s basically what they’re here for,” posited Ken Kumayama, a partner at Skadden.

Formulating a Defensible Frontier AI Framework

The AI Framework is going to be “critical” to a company defending the protections it has put in place, whether they be related to information use, external standards or procurement standards, Loring said.

Large frontier developers should approach meeting the AI Framework requirement by asking questions about their existing processes, suggested Lilley. It means, he elaborated, creating a “reasonable and defensible process” for answering the following questions:

  • What is our governance mechanism?
  • What are our existing risk mitigation processes?
  • What thresholds for risk have already been identified?

The goal should be to take the work that has already been done and adjust and expand it as needed to “make sure it meets the ask of the statute,” Lilley continued.

One way to approach creating an AI Framework is to conduct a gap analysis to determine what needs to be solved, set a timeline for doing so, and document and justify any remaining gaps, advised Loring. “I think the more that companies can document what their processes are in the space, the better,” he commented.

Furthermore, a company’s internal governance practices should be updated to reflect the TFAIA’s requirements. This should be done by members of the legal, compliance and product design teams to ensure a holistic and dynamic governance framework is created, advised Ferraro.

See “NIST Advances Soft Law for AI While World Awaits Hard Laws” (Apr. 19, 2023).

Training Employees

Once a company implements an AI Framework, it should make sure that it is communicated to its employees, said Loring. Reminders about high-level requirements and obligations around how to responsibly deploy AI and minimize risk should be made known throughout the company and not be limited to a specific subset of the organization, he noted, adding that doing so ensures accountability for the entire organization.

Choosing Applicable Standards and Best Practices

For companies seeking to determine which standards and best practices to apply, Bridegan recommended that to “cover [their] bases,” they should find out which AI law has the most prescriptive requirements and seek to comply with that. Or a company could take the approach of publishing a report on its website and modify it as needed as more stringent AI laws, such as Colorado’s, take effect. Where a company may be currently out of compliance with California law, California regulators may credit attempts to comply with out-of-state laws and refrain from bringing a full-blown enforcement action, he posited.

A “well-defined playbook on how exactly to do this” does not exist, so each company will have to make the determination as to what constitutes the applicable standards and best practices for itself, said Lilley. It is a bit of a “chicken and the egg” situation in the sense that the leading companies in the space will be able, to some extent, to influence what best practices are, he added. In addition, the body of standards and best practices is going to grow over time, he explained, and it will be interesting to see if there will be industry-led activities to crystalize best practices.

Large frontier developers are likely international in scale and thus will need to choose between many laws and standards to determine what is “best in class,” according to Kumayama. Companies should avoid “cherry picking” lesser standards and requirements. “Regulators will expect companies that are covered by the new legislation to hire consultants and lawyers to make sure they get it right,” since “the cost of getting it wrong” is so high, he warned.

Assessing Catastrophic Risk

TFAIA requires frontier developers to disclose the results of “catastrophic risk” assessments before deploying new or substantially modified frontier models, and to notify California’s Office of Emergency Services of any “critical safety incident” within a certain number of days, depending on the severity of the incident.

To assess the potential for catastrophic risk posed by an AI model, companies should red team the models, said Ferraro. Consistent with the law, they need to see what can break in testing environments, he added. This is not a “one-and-done process,” he noted, but requires “constant vigilance, particularly as the models increase in power and ability.”

With respect to cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer, a company can probably “cut and paste to some extent” whatever cybersecurity procedures it has for protecting its most valuable information – its “crown jewels,” said Lilley.

See “Guide to AI Risk Assessments” (Jun. 18, 2025).

Hiring Third Parties

It is likely that third-party cybersecurity companies are going to play a greater role in doing risk assessments in the context of TFAIA compliance, Kumayama predicted. These vendors will “operationally set the tone for what kinds of things companies will be expected to do,” he said.

If a company is wondering whether it should hire a third-party vendor to conduct a risk assessment, Kumayama said that “the answer is generally going to be ‘yes.’” Although not hiring a vendor is “understandable” when a company is very concerned about protecting trade secrets, a third party can provide different perspectives, he explained. “So, there is a pro and a con,” he concluded.

Whether or not to use third parties for red teaming is an individual question, said Ferraro, who noted that some companies have had success with the process.

See “Managing Third-Party AI Risk” (Aug. 20, 2025).

Determining Content of Transparency Reports

One “interesting thing to watch,” commented Lilley, will be whether companies gravitate toward producing Transparency Reports that look the same or whether they will contain different approaches, including not just what is legally required but also “what’s sort of reasonable and expected and how that content shifts over time due to public attention.”

Some companies may want to include more than what is required by the statute for reasons such as branding, “in which case there will have to be a negotiation between the business and legal teams to strike the proper balance,” Kumayama said.

Solving for Risks From Deployers

The TFAIA does not regulate a deployer’s use of a frontier developer’s tool. This is a gap in the law, according to Loring. Both developers and deployers will be concerned about how tools are deployed, use cases, the type of information fed into the model and whether it can be used for training, he said. Thus, even if an assessment of risk relating to contractual relationships with deployers is not something that is required as part of the statute, it is something that developers will need to solve for, he noted.

See “Key Legal and Business Issues in AI-Related Contracts” (Aug. 9, 2023).

The TFAIA Versus the Colorado and Texas AI Laws

California’s TFAIA is unique among U.S. state AI legislation. There is a significant difference in focus between the TFAIA and the Texas Responsible Artificial Intelligence Governance Act, for instance, Loring pointed out. “The Texas law tries to solve for things like behavioral manipulation, infringement of constitutional rights, discrimination and harmful content, which is different from the California law’s approach – it applies to developers but is really focused on a very specific type of harm, essentially physical harm,” he explained. Unlike the Texas law, or certain E.U. laws, the TFAIA is not focused on high-risk systems. Rather, the California law is focused on the AI model’s capability with the aim of preventing catastrophic harm, he clarified.

The TFAIA has a higher applicability threshold and a narrower focus than the AI laws passed by Texas or Colorado, said Bridegan. Its emphasis on notice provisions could become a template for other states. The law allows for modifications in the applicability thresholds over time, and the statute constitutes a “stake in the sand” and a “place to start,” he noted. It puts smaller companies “on notice” of the regulations to which they may be subject in the future as they grow, he added.

Colorado’s new AI law has received pushback for being too aggressive, noted Bridegan, causing a delay in its effective date. “It does not strike me that California wanted to be that bold,” he posited. At the same time, he said, the law appears not to reach certain known harms, such as children’s use of AI chatbots.

The fact that the TFAIA bill is similar to one vetoed by Governor Gavin Newsom last year after he raised concerns about its narrow focus indicates that the new law is “not necessarily a step in the right direction,” particularly in light of the federal regulation vacuum, opined Loring.

See “Texas Adds New Type of State AI Law to U.S. Regulatory Mix” (Jul. 16, 2025).

Chief Information Security Officer

What CISOs Are Saying About Their Role in 2025


As cyber threats become increasingly sophisticated, the responsibilities of the CISO are evolving. Despite rising confidence, many leaders still feel unprepared for a major attack. People remain the top cybersecurity risk, now intensified by AI disruption and mounting boardroom expectations, according to the latest findings from Proofpoint’s 2025 Voice of the CISO report (Report).

Proofpoint surveyed 1,600 CISOs at organizations worldwide with 1,000 or more employees. The fifth annual Report offers a comprehensive portrait of today’s CISO experience, shaped by the perspectives of global security leaders.

During a Proofpoint online program, a panel of CISOs from Air New Zealand, Cox Enterprises, Proofpoint, SLB, Solventum, Surescripts and Zurich American Insurance Company discussed key takeaways from the Report, including managing insider risk, challenges of AI and executive pressure. This article distills their insights.

See “Challenges, Risks and Future of the CISO Role” (Jul. 31, 2024).

Why CISOs Are Feeling Confident Despite Risks

Sixty-seven percent of the global CISOs surveyed stated that their own organization’s overall cybersecurity is strong, despite 76% of the group also saying that their company is at risk of experiencing a material cyberattack this year and 58% agreeing that their company is unprepared for a cyberattack, according to the Report. Several factors may explain this apparent contradiction.

CISOs are feeling confident because of the progress that has been made in cybersecurity over the last few years, said Paige Adams, Group CISO, Zurich American Insurance Company. CISOs in general have invested a lot, not only in tools and platforms, but also in awareness training, leadership engagement, and integrating security into business processes and culture. Confidence comes not from a feeling that defenses are perfect, but from seeing measurable behavioral changes – in particular, people understanding their role in protecting the organization and taking it seriously. It also derives from things like faster reporting and better collaboration around security, as well as, importantly, CISOs participating in the dialog and hopefully being brought in earlier on projects rather than being turned to “as an afterthought,” Adams observed.

The confidence that has grown with such progress outweighs any pessimism that could result from realizing that cyberattacks are inevitable these days, there are gaps in most companies’ defenses, and most CISOs are faced with expanding attack surfaces, resource constraints and other challenges.

There also may be an optimism bias at play in the survey results, Adams continued. “A lot of CISOs rate culture higher [than other cybersecurity factors] because it’s actually one of the few areas where our influence and our communication have a somewhat tangible effect,” he said. “Leadership alignment certainly plays a big role in that. We can see that boards and executives are more actively engaged in cyber discussions these days and having that visibility, and that tone and that message come from the top gives CISOs greater confidence.”

“The challenge now lies in transforming confidence into resilience, ensuring that preparedness is more than a perception,” Adams stated in the Report.

Employees Still Pose Biggest Risk

Insider threats can stem from malicious actions by employees and unintentional incidents due to careless staff. Sixty-six percent of CISOs reported human error as their organization’s biggest cyber vulnerability, according to the Report, despite 68% of the surveyed group also reporting that their organization’s employees have a strong understanding of cybersecurity best practices.

Nearly all (92%) of the CISOs said that employees leaving their organization played a role in a data loss event, up from 73% last year, according to the Report.

Generative AI (Gen AI) is making it easier for insiders to exfiltrate information, reported Phil Ross, CISO of Air New Zealand. Data loss protection “alone won’t cut it. . . . You want to make the wrong thing hard and the right thing easy,” he said.

Using Tech to Counter Attention Economy

The attention economy, created by the constant competition for humans’ finite attention spans, leads to employees making mistakes and threat actors exploiting the low level of mindfulness regarding social engineering tactics such as phishing, noted Solventum CISO Param Vig.

To counteract the attention economy, companies should use technology to build a human shield, Vig suggested. “Use contemporary capabilities that allow you to drive and disseminate training across not only your global workforce, but your business partners. Don’t make it a once in a year event; drive micro trainings throughout the year.” Companies also should build role-based training for software developers and for administrators, he advised.

Additionally, companies should lean into contemporary technologies for analysis, Vig urged. “Where are those insecure patterns happening and how can you reduce them?” Explore ways to make credential management, one of the biggest risk areas, more frictionless. Something as simple as a password manager on laptops and mobile devices goes a long way, he said. “Look to automate your human tasks. Are people building servers manually? Are they approving network rules manually? Can you drive automation to do some of that work?”

Actions to Mitigate Threats From Intentional Actors

In some cases, a disgruntled employee or other insider will take deliberate actions to steal a company’s information, either with financial motivations or just to cause disruption, Vig explained. The following measures can help address intentional insider risk.

Automate the Joiners, Movers and Leavers Process

“Make sure you’ve automated your joiners, movers and leavers process to be able to kill those stale access tokens and external shares the moment those roles change,” Ross advised.
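
A minimal sketch of the “leavers” half of that automation appears below. The identity calls (revoke_tokens, remove_external_shares, reassign_entitlements) are hypothetical stubs; a real deployment would wire these to the identity provider and collaboration platforms.

    def handle_role_change(event):
        """Revoke stale access the moment an employee leaves or changes roles."""
        user = event["user_id"]
        if event["type"] in ("leaver", "mover"):
            revoke_tokens(user)            # kill active sessions and API tokens
            remove_external_shares(user)   # tear down externally shared files
        if event["type"] == "mover":
            reassign_entitlements(user, event["new_role"])  # least privilege for the new role

    # Stubs standing in for identity-provider and collaboration-platform calls.
    def revoke_tokens(user): print(f"revoked tokens for {user}")
    def remove_external_shares(user): print(f"removed external shares for {user}")
    def reassign_entitlements(user, role): print(f"re-scoped {user} to {role}")

    handle_role_change({"user_id": "jdoe", "type": "leaver"})  # fed by an HR system event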

Implement Context-Aware Controls at the Point of Use

Companies should embed data controls that account for user behavior in order to manage risk. “Block bulk downloads and personal email and Gen AI uploads in real time,” instructed Ross.

Have a Leaver Playbook and Align Cross-Functionally

Companies should consider establishing, for when an employee departs, a 30‑day post-notice watch window, where they use “canary documentation” that can trigger an alarm if it disappears, changes or is shared, Ross suggested.
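
A canary document can be as simple as a decoy file whose presence and hash are checked on a schedule, with any change or disappearance raising an alert. The sketch below is a bare-bones illustration of that idea; the path and hash are placeholders.

    import hashlib
    import os

    CANARY_PATH = "/shared/finance/2026_acquisition_targets.xlsx"  # decoy file (hypothetical path)
    EXPECTED_SHA256 = "..."  # hash recorded when the canary was planted

    def check_canary(path, expected_hash):
        """Alert if the canary file is missing or has been modified."""
        if not os.path.exists(path):
            return "ALERT: canary file deleted or moved"
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != expected_hash:
            return "ALERT: canary file modified"
        return "ok"

    # Run from a scheduler (e.g., cron) during the post-notice watch window.
    print(check_canary(CANARY_PATH, EXPECTED_SHA256))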

“We cannot mitigate this by ourselves, as humans. We want to lean on tools . . . that allow you to fingerprint information that is important to you and give you alerts when those violations happen, and then you circle that capability with a cross-functional team of HR and legal to drive appropriate responses operationally,” Vig said.

CISOs also should ensure they are “aligned with [their] HR and legal department in terms of escalation,” Ross agreed.

Reducing insider risk should be made “a team sport, because there [are] other stakeholders in the organization that care about data going out the door, whether it’s your [GC] or your head of HR or whatever role,” added moderator Patrick Joyce, global resident CISO at Proofpoint.

See “Protecting Against the Security Risks of Departing Employees” (Aug. 22, 2018).

Banning AI Not a Solution

Fifty-nine percent of CISOs reported that their company blocks or restricts employee usage of Gen AI tools, according to the Report.

Trying to ban AI is pointless because employees will use it anyway, said John Driggers, vice president of Cybersecurity at SLB, whose company has been using AI and machine learning for a long time. Early on, SLB was extremely concerned about both its client data and its legal obligations around the world, so it made sure that it had strong AI governance. “We took part of our legacy data governance organization and we put together an AI governance and strategy work group,” he shared. That work group was made up of researchers and technical partners from different divisions within the business, as well as IT, HR and legal. “All of them had a use for some type of AI technology, and every one of their vendors was making some promise to them about the wonderful gains that they were going to get from it,” he observed.

SLB “wanted to be able to use Gen AI to provide better solutions to our customers, but we also wanted to leverage the AI to make our own employees more productive,” Driggers continued. But the company needed to have rules and regulations about how AI is responsibly used – “that is a critical part of that strategy and governance conversation both from an ethical and a legal standpoint,” he stressed.

Companies should, especially with the rise of agentic AI, tackle the issue of silicon identities, Driggers said. “You thought you had an identity problem before, and you were beginning to address it with multi-factor authentication and control of your API endpoints? Gen AI – and agentic AI, specifically – brings a whole new set of problems and challenges that your cyber team and your IT team are going to be faced with,” he cautioned, adding that the CISO is going to have to “embrace that.”

SLB is putting a lot of effort, modeled on its phishing program, into training its employees on the ethical and secure use of AI. It has set up business context rules about how employees are allowed to use AI, particularly regarding sensitive, customer or classified data. “We’ve put some pretty strong guardrails in place to implement the technical controls. And then followed that with very widespread training for all of our employees, based on their roles within the organization,” he said.

See “Assessing and Managing AI’s Transformation of Cybersecurity in 2025” (Mar. 19, 2025).

How to Boost Board Alignment With CISOs

Sixty-four percent of CISOs surveyed agreed that their board sees eye to eye with them on the issue of cybersecurity, down from 84% last year, according to the Report.

Attention does not always equal connection – CISOs and boards need a stronger shared risk vocabulary, stressed Ben McLaughlin, Proofpoint CISO. “For CISOs, that means translating cyber into the language of impact and enterprise value, not firewalls and frameworks. And for boards, it’s about leaning in. They need to spend time and build fluency with their CISOs. They need to ask tough questions about resiliency and recovery and not just focus on compliance,” he advised.

True alignment happens with cyber being part of every strategic discussion, including M&A and environmental, social and governance, McLaughlin continued. When both sides see resilience as a driver of trust and valuation, that is when they move to a real partnership. Organizations should work to turn cybersecurity from a cost center into a value protector and, ultimately, into a value creator. It is important for CISOs to work collaboratively with their board to define things like materiality and risk appetite, he emphasized. It is also imperative for board members to meet with CISOs outside of the formal board setting to build relationships and trust. Have board members participate in tabletop exercises, he suggested. That can send a powerful message through the organization, reinforcing how important resilience is.

SLB takes an approach similar to what McLaughlin laid out. “We invite our entire board to tour our cyber facilities and to have a deeper-dive discussion about cyber,” Driggers shared, noting that his team reports quarterly to a specific committee and annually to the entire board.

The role of a CISO was once “a kind of a technical or jargon translator,” Adams observed. Today, his board and executive leadership expect him to be a strategic risk advisor who can “frame why it matters. What’s the context? How does that impact our risk profile, our governance model, our business priorities? Where and how does AI intersect with business priorities and business risk?”

See “Five Steps for Effective Board Oversight on Cybersecurity Breach Response” (Jan. 15, 2025).

CISO Burnout Likely to Increase

Sixty-six percent of CISOs surveyed agree that there are excessive expectations of the CISO/CSO, according to the Report. Those demands are the new normal, and that number will likely continue to increase, said Brian Cox, vice president and CISO at Cox Enterprises, citing several reasons:

  • The job has shifted from being a technical leader to being a business risk and strategic advisor, causing CISOs to wear many more hats, including managing other areas of technology.
  • Cyberattacks are increasing in number and sophistication.
  • Decreased board alignment is making it more difficult to get financial support. “Even I find myself competing for dollars on a daily basis, which wasn’t the case a couple years ago,” he admitted.

SLB recently reduced its number of team building events to save money, Driggers said. “That small cut, which in the grand scheme of the cybersecurity budget was relatively insignificant, probably was more hurtful than some of the larger cuts, the more strategic cuts that we made, because [those events] did affect team morale and they did affect work-life balance.”

Another trend contributing to CISO burnout is increased regulatory pressure, Joyce added. To protect themselves from burnout, CISOs – as well as their teams – must ensure that their personal time and mental health are respected, he opined.

If a CISO or a member of their team has to work through a holiday due to a crisis, they should be able to take time off after the crisis to make up for the lost holiday time, Vig proposed.

See “Advice From CISOs on How to Succeed in the Role” (Apr. 14, 2021).

People Moves

AI Governance and Compliance Leader Joins Steptoe As Partner in D.C.


Steptoe has welcomed Carl Hahn as a partner in the firm’s investigations, white-collar and compliance practice in Washington, D.C. He joins the firm after co-founding legal technology company Gentic Global Advisors, which specializes in designing operational and compliance programs to help organizations manage risks related to AI and emerging technologies.

With more than three decades of experience in corporate legal leadership, compliance strategy and emerging technology governance, Hahn expands the firm’s AI-enabled compliance and investigations capabilities. He advises clients on compliance modernization, litigation strategy and AI governance frameworks, including the development of responsible AI policies and regulatory readiness.  

Prior to co-founding Gentic Global Advisors in March 2025, Hahn served as vice president and chief ethics and compliance officer at Northrop Grumman, where he led global compliance operations, embedded analytics into enterprise systems and helped establish an industry-leading AI governance program.

For commentary from Hahn, see “In‑House Perspectives on Compliance’s Role in Managing New and Emerging Risks” (Jun. 5, 2025). For insights from Steptoe, see our two-part series “AI Meets GDPR”: EDPB Weighs In on AI Models (Feb. 5, 2025), and Mitigating Risks and Scaling Compliance in the Development and Deployment of AI Models (Feb. 19, 2025).