
Christopher Burgess
Contributing Writer

You’re not doing enough to educate insiders about data protection

Opinion
07 Aug 2024 | 6 mins
CSO and CISO | Data and Information Security | Threat and Vulnerability Management

Given the deluge of news about hacks and attacks, cybersecurity’s focus on outside threats is understandable, but woe be to the CISO who doesn’t address inside risks early and often.

Our insiders need to be better educated in the art of protecting data, and by extension their own livelihoods. Everyone knows about risks from outside an organization; guarding against them is the primary function of cybersecurity. But insider risk has been steadily rising for years, which means someone is still not getting the message despite employee training programs that purport to make users aware of the risks.

The most basic security awareness programs educate the populace on the threat posed by outsiders and miscreants trying to purloin the information for their own nefarious purposes via phishing, network attacks, social engineering, and much more.

But employee or user training must include an increased focus on what constitutes insider risk, and it can’t be a one-and-done deal — it must be an ongoing effort.

Now with the ubiquity of artificial intelligence in our software and the widespread availability of generative AI, it’s more important than ever to ensure that awareness-raising efforts include how the use of AI can eliminate trade secret protection with the click of a mouse.

The lack of safeguards for data, security, and privacy will lead to continued instances of entities harvesting user data, pooling it in a data lake, and using it to train their AI engines. Similarly, publicly available AI engines, such as ChatGPT, are not immune to being breached themselves.
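To make the point concrete, one common safeguard is a pre-submission check that stops obviously sensitive content before it reaches a public AI engine. The sketch below is a hypothetical illustration, not any vendor's product: the marker list and the `check_prompt` and `safe_to_submit` helpers are my own assumed names, and a real deployment would rely on proper DLP classification rather than a keyword list.

```python
import re

# Hypothetical markers an organization might treat as sensitive.
# A real deployment would use DLP classification, not bare keywords.
SENSITIVE_PATTERNS = [
    r"(?i)\bconfidential\b",
    r"(?i)\btrade\s+secret\b",
    r"(?i)\bgo[- ]to[- ]market\b",
    r"(?i)internal\s+use\s+only",
]

def check_prompt(prompt: str) -> list[str]:
    """Return the sensitive markers found in a prompt bound for a public AI tool."""
    return [p for p in SENSITIVE_PATTERNS if re.search(p, prompt)]

def safe_to_submit(prompt: str) -> bool:
    """Block submission when any sensitive marker is present."""
    return not check_prompt(prompt)
```

A gateway in front of the AI tool could call `safe_to_submit` on every outbound prompt and, on a hit, block the request and point the user at the approved workflow.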

CISOs should review the US government-wide AI policy

If you need a roadmap to implementing better internal policies for handling data, look at a proactive approach rolled out earlier this year by the US federal government: the Office of Management and Budget's policy on federal agencies' use of AI.

What makes this OMB policy remarkable is that it is the first government-wide policy directed at limiting the risk presented by AI while harnessing its benefits. The clock is ticking for all federal agencies, which have until December 1, 2024 to “implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety.”

CISOs everywhere would be well served to study the OMB policy, lay it alongside their own internal policies on implementing AI (both internally and externally), and juxtapose both with the AI protections of their vendors, partners, and clients.

The OMB policy’s implementation will extend to entities whose government engagements include AI as part of their solution. While it focuses on the protection of individual rights and security, its safeguards absolutely encompass the safeguarding of data, and this is where CISOs may harvest insight.

Shadow AI is alive and well and living in your system

I’ve opined about shadow AI in the past: it is a reality, and as with shadow IT, harnessing these instances is often like trying to stop water from overflowing the dike. Water always finds a way.

We all remember Samsung being among the first to learn through experience, when one of its engineers pushed a design into OpenAI’s engine with the intention of improving it but inadvertently exposed a trade secret.

No doubt every CISO has an anecdote or two about employees taking internal company data and feeding it into AI tools to refine and improve it.

Indeed, during the most recent RSA Conference, executives remarked to me on more than one occasion how marketing plans were being revealed through AI engine queries such as “how does Xyz Inc. plan to go to market?” The engine had previously been schooled on the content and presented it in its response. Someone, somewhere had loaded the go-to-market plan into the AI to refine it, and the engine spat it back out: a bonanza for a competitor making such a query.

Similarly, professional social networks such as LinkedIn thrive on encouraging individuals to highlight what they are working on, what they are researching, where they are traveling, and who is on their team.

From a competitive intelligence perspective, that urge to share directly fills in the blanks for those who wish to garner information about the referenced entity. And if a piece of unreleased or private technology is shared, trade secret protection goes out the window, because the content was not adequately protected.

What can CISOs do to get their message across about inside risk?

The solution is not complex, though it does require more than pronouncements or edicts to employees, contractors, partners, and vendors.

Joe Payne, CEO of Code42, which specializes in mitigating insider risk, told me that the company has seen a drop of approximately 32% in code exfiltration, which it attributes to the positive effect of in-the-moment training. He added that training videos offering clear, unambiguous feedback on an observed event, delivered in a positive manner, have served to educate and deter.

While implementation of insider risk management tools will serve to enhance intellectual property protection against leaks or theft, having the training element in place is key.

Furthermore, CISOs need to dig in and understand the why behind the behavior they want to discourage. Why did the employee move the IP from a protected environment to an unprotected or unapproved one? Was it on their own initiative? Or were they told to do so by a superior?

Perhaps the available tools were inadequate, and the pressure to complete the task made bending the rules seem acceptable.

Any of these could be unintentionally placing the entity at greater risk. In-the-moment training, plus circling back to gather feedback, gives the CISO the data needed to enhance the training environment and, where required, to take additional action by reaching into the management chain and adjusting the direction of subordinates.

Christopher Burgess

Christopher Burgess is a writer, speaker and commentator on security issues. He is a former senior security advisor to Cisco, and has also been a CEO/COO with various startups in the data and security spaces. He served 30+ years within the CIA which awarded him the Distinguished Career Intelligence Medal upon his retirement. Cisco gave him a stetson and a bottle of single-barrel Jack upon his retirement. Christopher co-authored the book, “Secrets Stolen, Fortunes Lost, Preventing Intellectual Property Theft and Economic Espionage in the 21st Century”. He also founded the non-profit, Senior Online Safety.
