
The United States and the United Kingdom have declined to sign the AI Action Summit agreement, in which signatory countries pledged to take an inclusive, open, and ethical approach to AI development.
More than 60 countries (including Canada, China, Australia, India, and Japan) have signed the declaration.
Below, security experts discuss the decision and offer advice on how organizations can deploy AI securely and efficiently moving forward.
Security leaders weigh in
Andrew Bolster, Senior R&D Manager at Black Duck:
This growing Atlantic AI rift is a wake-up call for any organization looking to deploy or operate global AI solutions: the regulatory landscape is not as settled as it may seem. Alignment with existing regimes such as the EU's General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA) and its amendment, the California Privacy Rights Act (CPRA), or Australia's Privacy Act may stand you in good stead, but it is no guarantee of continued operations.
For instance, when President Donald Trump rescinded former President Biden's 2023 Executive Order on AI, he effectively removed any federal-level guidelines for U.S. cross-state operators managing the risks introduced by AI systems.
We're now in a position where this fractured regulatory landscape is tempering private investment appetites just as public investment is ramping up: the U.K.'s earmarking of £14bn as part of the AI Opportunities Action Plan, France's coordination of €109bn in public/private AI partnerships over the coming years, and the U.S.'s $500bn partnerships around the 'Stargate' program.
In this kind of high-risk, high-value environment, the mergers and acquisitions market will come under particular pressure: the mix of public and private requirements and a heightened threat model drives the need for AI-aware security and quality attestation, echoing recent findings.
Kris Bondi, CEO and Co-Founder of Mimoto:
If organizations rely only on historical data to train models for anomaly detection, they set themselves up to miss newer attack tactics, because bad actors are continually innovating and improving their approaches. Security in a post-AI world should combine anomaly detection with other approaches that provide real-time context. This gives security professionals the data and tools needed to make better analytical decisions.
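To make this concrete, here is a minimal sketch of the combination Bondi describes: a model trained on historical telemetry, paired with real-time contextual signals that can escalate events the historical baseline would score as normal. The feature set, thresholds, and context flags are illustrative assumptions, not a prescribed design.

```python
# A minimal sketch: a detector trained on historical telemetry plus
# real-time contextual rules, so novel tactics are not silently scored
# as "normal". Features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical login telemetry: [hour_of_day, bytes_out_mb, failed_logins]
historical = np.array([
    [9, 1.2, 0], [10, 0.8, 1], [14, 2.1, 0], [11, 1.5, 0],
    [15, 0.9, 2], [9, 1.1, 0], [13, 1.7, 1], [10, 1.3, 0],
])
model = IsolationForest(contamination=0.1, random_state=42).fit(historical)

def assess(event, context):
    """Combine the historical model's score with real-time context."""
    score = model.decision_function([event])[0]  # lower = more anomalous
    verdict = "anomalous" if score < 0 else "normal"
    # Real-time context can escalate events the historical model missed,
    # e.g. a brand-new device or a tactic with no historical precedent.
    if context.get("new_device") or context.get("impossible_travel"):
        verdict = "anomalous"
    return verdict, score

# A statistically tame event that nonetheless comes from a new device.
print(assess([3, 1.0, 0], {"new_device": True}))
```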
If AI is the sole driver defining threat-hunting parameters, without spot-checks or audits, the threat intelligence approach could eventually focus on the wrong areas. The answer is more reliance on critical thinking and analytical skills. AI is not a threat to soft skills or human involvement in the process. The smartest companies will leverage AI to let humans engage with fewer potential threats, each with more data available. The result should be more opportunities to use those soft skills to catch threats more quickly.
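One hedged way to implement the spot-checks Bondi mentions is to route a random sample of AI triage decisions into a human audit queue, so a drifting model cannot quietly narrow the hunting focus. The sampling rate and alert fields below are illustrative assumptions.

```python
# A minimal sketch of spot-checking AI triage: accept the AI verdict,
# but escrow a random fraction of decisions for human review.
import random

AUDIT_RATE = 0.10  # illustrative: fraction of AI decisions audited

def route_alert(alert, ai_verdict):
    """Accept the AI verdict, but sample some decisions for human audit."""
    if random.random() < AUDIT_RATE:
        return {"alert": alert, "ai_verdict": ai_verdict, "queue": "human_audit"}
    return {"alert": alert, "ai_verdict": ai_verdict, "queue": ai_verdict}

for name, verdict in [("suspicious_dns", "benign"), ("odd_login", "malicious")]:
    print(route_alert(name, verdict))
```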
Craig Jones, Vice President of Security Operations at Ontinue:
In today's GenAI-driven world, data privacy is a pivotal concern for organizations of all sizes. Interactions with these models can inadvertently lead to the sharing or exposure of sensitive information, which necessitates robust data handling and processing frameworks to prevent data leaks and ensure privacy.
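As one illustration of the kind of safeguard Jones implies, a prompt could be scrubbed of obvious identifiers before it ever reaches an external model. The regex patterns below are simplistic assumptions; production deployments would typically rely on dedicated DLP tooling.

```python
# A minimal sketch: redact obvious identifiers from a prompt before it
# is sent to a GenAI model. Patterns are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # card-like number
]

def sanitize_prompt(text: str) -> str:
    """Replace sensitive tokens with placeholders before model submission."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

prompt = "Summarize the dispute for john.doe@example.com, card 4111 1111 1111 1111."
print(sanitize_prompt(prompt))
# -> "Summarize the dispute for [EMAIL], card [CARD]."
```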
Securing sensitive data requires a multi-layered approach, including encryption at rest and in transit, strict access controls, and continuous monitoring for anomalies. In the event of a breach, rapid response and remediation measures need to be in place, along with clear communication to affected stakeholders in line with legal and regulatory requirements. Lessons learned from such incidents should be fed back into the data security framework to be better prepared for future scenarios.
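As a minimal sketch of one of those layers, encryption at rest, the example below uses the Python `cryptography` package's Fernet recipe. The inline key generation is a deliberate simplification; a production system would fetch keys from a KMS or HSM, never create them next to the data.

```python
# A minimal sketch of encryption at rest using Fernet (AES-128-CBC + HMAC).
# Key handling is deliberately simplified for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: fetched from a KMS/HSM
cipher = Fernet(key)

record = b'{"user": "jdoe", "note": "sensitive"}'
blob = cipher.encrypt(record)      # what actually lands on disk
assert cipher.decrypt(blob) == record
print("encrypted bytes at rest:", blob[:24], "...")
```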
Venky Raju, Field CTO at ColorTokens:
AI-driven threats underscore the need for a strong breach-readiness strategy such as zero trust. While patching and detection are important, we should acknowledge that AI-based tools will find and exploit vulnerabilities faster than patching and detection/response processes can keep up.
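A minimal sketch of the zero-trust posture Raju points to might look like the following: every flow is denied unless an explicit segmentation policy allows it and the caller's identity is verified, limiting blast radius when a vulnerability is exploited before it can be patched. The segment names, ports, and policy table are illustrative assumptions.

```python
# A minimal sketch of default-deny segmentation: only explicitly
# whitelisted flows from verified identities are permitted.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def authorize(src_segment, dst_segment, port, identity_verified):
    """Default-deny: allow only verified identities on whitelisted flows."""
    if not identity_verified:
        return False
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

print(authorize("web-tier", "app-tier", 8443, identity_verified=True))  # True
print(authorize("web-tier", "db-tier", 5432, identity_verified=True))   # False: no direct path
```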
Most cybersecurity products already use machine learning and generative AI to be better at what they do. However, cybersecurity leaders should look beyond the hype and understand how these technologies actually benefit them. Organizations can also implement AI-based solutions to automate mundane tasks, freeing cybersecurity personnel for more critical issues.
Piyush Pandey, CEO at Pathlock:
Artificial intelligence (AI) is already having a positive impact on cybersecurity far beyond the simple automation of tasks. From intelligent response automation to behavioral analysis and prioritization of vulnerability remediation, AI is adding value across the field. As AI automates more tasks, the role of cybersecurity professionals will evolve rather than become a commodity. Talented cybersecurity pros with a growth mindset will become increasingly valuable as they provide the practical insights that guide AI's deployment internally.
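As one hedged illustration of the vulnerability-prioritization use case Pandey mentions, a scoring function might blend raw severity with exploit availability and asset criticality rather than rank by CVSS alone. The weights here are illustrative assumptions, not any vendor's actual model.

```python
# A minimal sketch of context-aware vulnerability prioritization.
def priority(cvss, exploit_public, asset_criticality):
    """Blend severity, exploitability, and business context into one score."""
    score = cvss / 10.0
    if exploit_public:
        score += 0.4                  # a live exploit outweighs raw severity
    score += 0.3 * asset_criticality  # criticality in [0, 1]
    return round(score, 2)

findings = [
    ("CVE-A internal wiki", priority(9.8, False, 0.2)),
    ("CVE-B payment gateway", priority(7.5, True, 1.0)),
]
for name, p in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{p:>5}  {name}")  # the exploited, business-critical flaw ranks first
```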
With the increase in regulatory and security requirements, GRC data volumes continue to grow at what will eventually be an unmanageable rate. Because of this, AI will increasingly be used to identify real-time trends, automate compliance processes, and predict risks. Continuous, automated monitoring of compliance posture using AI can, and will, drastically reduce manual efforts and errors. More granular, sophisticated risk assessments will be available via ML algorithms, which can process vast amounts of data to identify subtle risk patterns, offering a more predictive approach to reducing risk and financial losses.
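To ground the idea of continuous, automated compliance monitoring, the sketch below scans access grants for segregation-of-duties conflicts on every run rather than waiting for a periodic manual review. The role names and conflict matrix are illustrative assumptions.

```python
# A minimal sketch of an automated segregation-of-duties (SoD) check,
# the kind of control a continuous compliance monitor would run.
SOD_CONFLICTS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"modify_payroll", "approve_payroll"}),
}

def sod_violations(user_roles):
    """Return users holding a conflicting combination of entitlements."""
    flagged = {}
    for user, roles in user_roles.items():
        hits = [c for c in SOD_CONFLICTS if c <= set(roles)]
        if hits:
            flagged[user] = hits
    return flagged

grants = {
    "alice": ["create_vendor", "approve_payment"],  # conflict: can self-pay
    "bob": ["create_vendor", "view_reports"],       # clean
}
print(sod_violations(grants))
```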
https://www.securitymagazine.com/articles/101381-us-declines-international-ai-declaration-security-leaders-discuss