The White House recently held a press call to announce a new order for federal agencies: each agency must designate a chief artificial intelligence (AI) officer to oversee its approach to AI. The position is intended to manage the risks of evolving AI technology within each agency, and the chief AI officer is expected to be experienced and knowledgeable regarding emerging AI capabilities and security threats.
Security leaders weigh in
Marcus Fowler, CEO of Darktrace Federal:
“While establishing AI leadership is an important step in ensuring the safe use of AI technologies, and there are existing frameworks around secure AI system development provided by CISA and the UK NCSC, these efforts and resources are not the only steps organizations can take to adequately encourage the safe use of generative AI technologies.
“To ensure the safe and effective deployment of these tools in their workplaces, it is vital that AI officers and their associated teams have a firm understanding of ‘normal’ behavior across their networks and IT environments and make a dedicated effort to educate their broader organizations on these findings. Through this approach, AI executives and their teams can ensure their broader organizations are equipped with a general understanding of the use cases and risks associated with leveraging AI tools, how these issues relate to their specific roles and areas of the business, and best practices for mitigating business risk.
“There are three areas of AI implementation that governments and companies should prioritize: data privacy, control and trust. It’s vital that organizations remember that each of these areas requires significant influence from leaders to remain effective. In addition to leveraging industry standards and appointing key leadership teams tasked with ensuring the effective use of these technologies, it’s critical that organizations also establish trust in these roles across their companies by highlighting the value that AI-focused roles bring to the broader organization. This will help ensure that every team member is familiar and comfortable with the internal resources available to them, encouraging stronger collaboration between teams in tandem with the supervised use of these tools and ultimately strengthening an organization’s broader security posture.”
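Fowler’s point about baselining “normal” behavior is the core idea behind anomaly-based monitoring. As a rough illustration only (the function names, data and threshold below are hypothetical assumptions, not Darktrace’s actual method), a security team might model a user’s typical volume of AI-tool requests and flag large deviations for human review:

```python
import statistics

# Hypothetical sketch: build a per-user baseline of daily AI-tool API
# calls, then flag days that deviate sharply from that baseline.
# All names, data, and the z-score threshold are illustrative assumptions.

def build_baseline(history: list[int]) -> tuple[float, float]:
    """Return (mean, stdev) of observed daily request counts."""
    return statistics.fmean(history), statistics.pstdev(history)

def is_anomalous(count: int, mean: float, stdev: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a count more than z_threshold standard deviations above the mean."""
    if stdev == 0:
        return count != mean
    return (count - mean) / stdev > z_threshold

# Example: a user who normally makes ~20 calls/day suddenly makes 90.
history = [18, 22, 19, 21, 20, 23, 17]
mean, stdev = build_baseline(history)
print(is_anomalous(90, mean, stdev))  # True -> worth a human review
```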
Gal Ringel, Co-Founder and CEO at Mine:
“These rules will be somewhat successful in safeguarding AI use, but it’s key to understand that they apply only to the government and thus the public sector. The American private sector, which has produced much of the technological innovation of the past few decades, is still operating with mostly free rein when it comes to AI. Regarding the rules for government itself, internal assessments and oversight could provide a loophole for lax AI governance. While I understand the security concerns, independent third parties would be better suited to running AI-related assessments, which might necessitate creating a specific government agency to do just that.
“Utah just passed an AI law, which opens Pandora’s box by paving the way for each state to pass its own AI law, just as each state has sought to pass its own data privacy law. There needs to be a federal law that oversees the private sector, and while it need not take the same risk-based approach the EU and UK have, meaningful legislation needs to come through to promote the same principles of transparency, harm reduction, and responsible usage echoed in today’s announcement.”