
debbiegarside
Contributor

The future of AI and endpoint security, part 2

Opinion | 21 Feb 2018 | 5 mins
Data and Information Security | High-Performance Computing | Technology Industry

Do we really have to wait for quantum computing for true security at the endpoint?

In The future of AI and endpoint security, I wrote about the need for more AI at the endpoint to protect end users.

But both the nature of endpoints and the science of AI are evolving at such a pace that we need to start thinking about even more advanced solutions.

We currently think of endpoints as being PCs and laptops, and occasionally tablets, mobiles and other internet-enabled devices.

The future of endpoints is bright, with autonomous vehicles and robotics being added to the mix, though with this bright future comes heightened risk. 

We all know the risks of end users in the workplace, from the opportunists pilfering data to take to their next position to the hapless mistakenly sending sensitive enterprise data to the wrong person.

And we have all seen and heard a lot about the risks of autonomous vehicles, not to mention the terrifying predictions around AI and murderous robotics – recently dramatized in highly realistic sci-fi videos such as ‘Slaughterbots’, associated with the University of California, Berkeley.

AI and machine learning have been around for a long time. Generally, we throw compute power at the problem, typically high-performance computing (HPC), and it performs. But with autonomous vehicles already containing millions of lines of code, maybe it’s time we looked at bringing some HPC-style tricks to the endpoint.

The science of AI is evolving rapidly and is multi-disciplinary in nature, spanning natural language processing, audio recognition, computer vision and user and entity behavior analytics (UEBA), with the primary goal being reliable real-time predictive analytics.

In terms of system and data security, traditional ‘after the horse has bolted’ data management principles are fast becoming obsolete. Rear-view business intelligence data is becoming less and less relevant as decision making becomes more agile and powerful, driven by front-view predictive analytics which in turn are powered by AI. Get predictive analytics right and we can take the fear out of the autonomous future.

If we can be sure of computer-based predictions, we can remediate in real time as threats emerge, rather than relying on a human to interpret the reams of threat data currently being produced using AI and machine learning linked to the cloud.

For future predictive analytics to be reliable, context needs to be built in, which means much more data needs to be analyzed. How can we analyze more data at speed on endpoints, in cars and within robotics? 
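As an illustrative sketch only (the class name, metric and threshold below are my assumptions, not anything described in this article or any product), a minimal form of real-time UEBA scoring at the endpoint is a running baseline with online deviation detection, here using Welford’s algorithm so nothing but three numbers needs to be stored per user:

```python
import math

# Toy UEBA-style scoring: maintain a running baseline of one activity metric
# (say, megabytes copied per hour) and flag readings that deviate sharply.
class BehaviorBaseline:
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0            # running sum of squared deviations (Welford)
        self.threshold = threshold

    def observe(self, value):
        """Update the baseline and return True if the value looks anomalous."""
        anomalous = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(value - self.mean) / std > self.threshold:
                anomalous = True
        # Welford's online update: no event history is retained, which is
        # what makes scoring every event in real time on an endpoint feasible.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous

baseline = BehaviorBaseline()
readings = [10, 12, 11, 9, 10, 11, 500]   # a sudden bulk copy at the end
flags = [baseline.observe(r) for r in readings]
print(flags)  # only the final reading is flagged
```

Real systems score many correlated signals rather than one metric, but the shape is the same: build context incrementally, then act on deviation as it happens rather than in a rear-view report.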

Some would say we need to wait for quantum computing, where a qubit can represent 0 and 1 simultaneously through superposition. But quantum computing is not quite there yet; it is likely more than five years away, in all probability nearer to ten.

Maybe the next best answer while we await quantum can be found in the traits of HPC, where for some years now the power and versatility of the graphics processing unit (GPU) has been exploited to great effect. 

A CPU consists of a few cores focused on sequential serial processing, whereas GPUs are packed with thousands of smaller cores designed for multi-tasking.

Most AI workloads lend themselves to parallel execution. Deep neural networks and most machine learning methods can be classed as parallelizable problems, meaning parallel computing solutions like GPUs can speed up the majority of the algorithms in AI, with a few exceptions such as tree traversal or inherently sequential recursive algorithms. For neural networks and pattern matching, GPUs are orders of magnitude faster.
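To make the parallelizable/sequential distinction concrete, here is an illustrative sketch (my own toy example, not from the article) of the same neural-network-style layer written two ways. The serial version computes one element at a time, as a single CPU core would; the vectorized version expresses the work as one bulk matrix operation, the form that maps naturally onto thousands of GPU cores because every output element is independent:

```python
import numpy as np

# Serial version: one output neuron at a time, one input at a time,
# mirroring how a single sequential core would execute the work.
def layer_serial(x, w):
    out = []
    for i in range(len(w)):
        total = 0.0
        for j in range(len(x)):
            total += w[i][j] * x[j]
        out.append(max(0.0, total))  # ReLU activation
    return out

# Parallel formulation: the same computation as one matrix-vector product.
# Each output is independent, so GPU cores (or SIMD lanes) can compute
# them all simultaneously -- this is what "parallelizable" buys you.
def layer_parallel(x, w):
    return np.maximum(0.0, w @ x)

x = np.array([1.0, -2.0, 3.0])
w = np.array([[0.5, 1.0, -1.0],
              [2.0, 0.0, 1.0]])

print(layer_serial(x.tolist(), w.tolist()))  # [0.0, 5.0]
print(layer_parallel(x, w))
```

A tree traversal, by contrast, cannot be rewritten this way: each step depends on the previous one, which is why such algorithms see little benefit from GPUs.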

While we wait for quantum, and indeed possibly even to support quantum, GPUs are the interim solution for the AI-based endpoint security capability that the future undoubtedly requires. Anything that can be processed in parallel, like machine learning algorithms, should be directed to the GPU by design. Parallel processing by design is a must for the future (I said this in 2012, and six years on I am still saying it!).

That future will need activity recording for UEBA, which in turn supports the predictive analytics that will undoubtedly be required to intervene in real time: to stop your employee from compiling and stealing your data, or to hit the ‘kill’ switch before your delivery drone decides to attack you!

Want to turn your PC into an AI supercomputer? Check out the video of NVIDIA CEO Jensen Huang’s keynote from the company’s GTC conference last May. The Tesla V100 is currently capable of 900GB per second of memory bandwidth for hyperscale scale-out. If you think that’s great, the Tesla V100 was followed in December by the TITAN V, which according to NVIDIA is “ideal for developers who want to use their PCs to do work in AI, deep learning and high performance computing.” Now that is some piece of kit, and at $2,999 it has a hefty price tag!

PC supercomputers for AI are already here, albeit somewhat pricey, but it won’t be long before you see them utilized for AI and endpoint protection. The future of endpoint security is AI, facilitated by PC supercomputers exploiting ever more powerful GPUs.


Debbie Garside is founder of GeoLang, a provider of sustainable cyber solutions, and a renowned cyber security and cloud computing expert.

Debbie has been an entrepreneur successfully running IT companies for the past 25 years. She is an expert in cyber security and natural language, was appointed the first Prince of Wales Innovation Scholar at the University of Wales, and has just finalized her PhD thesis on human visual perception in cyber security; her related patent, for a new pseudo-isochromatic second-generation CAPTCHA system based on her PhD, has been granted. As the Principal UK Expert for Language Encoding, Debbie was until recently editor of two international ISO standards, and a BSI and ISO chair.

Also a member of the advisory board for HPC Wales, a €40 million high performance computing project, Debbie is a named contributor to a number of internet standards produced by the Internet Engineering Task Force, and has been an advisor to Wikimedia Foundation (overseeing Wikipedia activity) on natural language.

Debbie currently sits on the KTN Defence and Security Advisory Board and is a member of the Cloud Industry Forum. She recently accompanied the UK Prime Minister on a bilateral trade mission to India as part of a “Best of British” showcase. Debbie is also the Product Owner for Ascema, feeding insights from industry into product development.

The opinions expressed in this blog are those of Debbie Garside and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
