Why AI needs to be secured differently


Rapid advances in Artificial Intelligence (AI) have transformed the business world, creating exciting opportunities as well as new security challenges to contend with. But how should we secure the development and use of these new AI tools? And why does securing AI require an unprecedented approach?

 

AI was once a frontier technology, but it is now a ubiquitous tool. It is already adding value across many industries, from healthcare and pharmaceuticals to automotive, retail, and software, where it has revolutionised solutions, narrowed skill gaps, and improved decision-making and productivity.

 

Many AI-enhanced technologies will be life-changing. AI-based solutions are being used to interpret CT scans and X-rays with greater accuracy, and AI-driven improvements in drug discovery and clinical trials will likely have a lasting impact on the length and quality of our lives.

 

In addition, AI tools will be foundational to developing future technologies and innovations, from aircraft and chip design to software development and expert recommendation agents. AI is weaving itself into the very fabric of how we build new technologies. As AI becomes a creator of innovation in its own right, the risks associated with its use become increasingly complex, requiring a fundamental re-evaluation of security measures to address the pervasive nature of new AI tools.

 

The world had an opportunity to secure a foundational technology once before, with software development. Although software has been around for eight decades, we are only just getting a handle on how to develop it securely.

 

Although the concept of “shift left”, addressing security early in the software development process, has been around for more than 20 years, it is still a practice that many organisations struggle to put into effect.

 

This time, with AI, our approach needs to be different.

 

With AI, the need for security measures is even more urgent, requiring a dual shift: left, into design and development, and right, into secure operations, at the same time. AI development is a priority for most organisations, but the speed of technological transformation means they are effectively having to build the airplane while still drawing the design and learning to fly, all at once!

 

Last week, in cooperation with the US Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA), the UK’s National Cyber Security Centre (NCSC) published a comprehensive guide to address the unique challenges of securing AI.

 

The “Guidelines for Secure AI System Development”, which was developed with the support of more than 20 other government agencies in 16 different countries, highlights four key areas of AI security:

 

  1. Secure Design: understanding and modelling risks and threats, as well as the topics and trade-offs to consider in system and model design.
  2. Secure Development: ensuring a secure development life cycle, including supply chain security, documentation, and asset and technical debt management.
  3. Secure Deployment: protecting the infrastructure and models of AI systems from compromise, threat or loss, including developing incident management processes, and responsible release.
  4. Secure Operation and Maintenance: securing and managing deployed systems, including logging and monitoring, update management and information sharing.

 

These guidelines provide a solid foundation for considering where and how organisations need to apply their efforts. However, the guidelines alone won’t ensure secure AI development. To succeed, they must be supplemented by efforts to bridge the gaps between product and AI development, led by data scientists; AI security, managed by IT security organisations; and the professionals in charge of corporate risk and governance.

 

Achieving a secure AI innovation environment requires a continuous improvement loop that integrates AI development and security efforts at an operational level. Bridging the gap between data scientists and IT security organisations is crucial to achieving this, and to identifying and mitigating evolving threats in real time.

 

Most importantly, meeting the challenges ahead will depend on a robust new wave of innovators creating the next generation of tools to leverage AI and secure its use. At GALLOS, we are committed to investing in and building new AI-centred innovations. As a venture investor and venture studio, we invite you to engage with us to invest in and help shape the future of secure AI. Please do reach out with your ideas: together we can navigate the evolving landscape of AI technology and ensure that its transformative potential is harnessed responsibly and securely.

Author: William Kilmer