
by Prasanth Aby Thomas

Singapore unveils AI system guidelines, emphasizing secure-by-design

News
22 Oct 2024 | 4 mins
Regulation | Security

A secure-by-design approach for AI systems can be challenging, as it requires specialized skills and may involve significant costs.

Singapore has rolled out new cybersecurity measures to safeguard AI systems against traditional threats like supply chain attacks and emerging risks such as adversarial machine learning, including data poisoning and evasion attacks.

In its Guidelines and Companion Guide for Securing AI Systems, Singapore’s Cyber Security Agency (CSA) stressed that AI systems must be secure by design and secure by default, like other digital systems.

This approach aims to help system owners manage security risks from the start, the agency added.

“To reap the benefits of AI, users must have confidence that the AI will behave as designed, and outcomes are safe and secure,” the CSA said in the guide. “However, in addition to safety risks, AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system.”

The guidelines don’t focus on AI safety or broader issues commonly associated with AI, such as fairness, transparency, or inclusion, nor do they tackle cybersecurity risks introduced by AI systems.

While some recommended actions may overlap with these areas, the guidelines also don’t specifically address the misuse of AI in cyberattacks, such as AI-enabled malware, or threats like misinformation, disinformation, and deepfakes, the CSA said.

Rise in AI regulations

The guidelines come at a time when authorities in other regions are also ramping up their AI regulations, each taking a distinct path.

“The sovereign policy frameworks for AI cybersecurity diverge in their approaches,” said Prabhu Ram, VP for Industry Research Group at CyberMedia Research. “Singapore prioritizes practical implementation throughout the AI lifecycle, while the EU emphasizes regulatory compliance aligned with risk-based classifications. In contrast, the US seeks to harmonize security with ethical considerations.”

These differing strategies may reflect each region’s priorities and the unique challenges they face in regulating AI technologies, with some leaning toward strict oversight and others focusing on flexible, adaptive frameworks.

“Singapore’s AI guidelines are a good balance of legal and practical approach,” said Keith Prabhu, founder and CEO of Confidis. “It is neither too legal like EU’s AI Act nor just principle-based like the US’ AI Bill of Rights.”

A key point to watch will be how the industry reacts to Singapore’s guidelines, especially given the strong resistance AI regulations have faced in other regions.

Last month, 49 executives, including Meta CEO Mark Zuckerberg and the CEOs of SAP, Spotify, and Ericsson, warned in an open letter that the EU risks falling behind in AI due to its “fragmented and unpredictable” regulatory environment.

In the US, California’s Governor vetoed a proposed AI safety bill after tech giants like Google, Meta, and OpenAI voiced concerns that it could stifle AI innovation and potentially drive companies to relocate.

Challenges for enterprises

The key distinction for enterprises lies in Singapore’s emphasis on a secure-by-design approach, which integrates security measures at every stage of AI development and deployment.

For enterprises with limited or negligible cybersecurity expertise, implementing a secure-by-design approach for AI systems can be daunting, as it requires specialized skills and may involve significant costs.

“The rapid pace of AI innovation often pressures organizations to prioritize development over security, risking overlooked vulnerabilities,” said Ram. “Balancing these priorities is crucial to maintaining system integrity. Additionally, the evolving threat landscape demands continuous adaptation of security strategies, which is further complicated by stringent regulatory compliance requirements.”

Prabhu added that in many organizations, security is still an afterthought. Even with a secure software development life cycle (SSDLC) in place, many companies still find it challenging to implement “security by design” from the start. “For organizations that have ‘security by design’ process already in place, it should not be a challenge to adopt ‘secure by design’ principles for AI systems,” Prabhu said. “For those who don’t, it will be a struggle unless they implement this process quickly.”