DeepSeek: What to know about the Chinese artificial intelligence model


A Chinese startup, DeepSeek, has announced its latest AI model. Below, cyber experts discuss the implications of DeepSeek’s emergence and capabilities. 

What is DeepSeek? 

DeepSeek is an open-source large language model (LLM) developed by the Chinese startup of the same name. Its AI Assistant, built on the R1 model, is claimed to operate at a far lower cost than comparable U.S. models such as OpenAI’s. The app also became the top free application on the Apple App Store in the United States, surpassing ChatGPT.
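
Because the model weights are published openly, “open source” here means anyone can download the model and run it locally rather than calling a hosted service. The snippet below is a minimal sketch of doing so with the Hugging Face transformers library; the model identifier is an assumption (a smaller distilled R1 variant), since the full R1 model requires far more hardware than an ordinary machine.

    # Minimal sketch: running an open-weights DeepSeek model locally with the
    # Hugging Face transformers library. The model identifier below is an assumed
    # distilled variant; the full R1 model needs far more memory than a typical workstation.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed identifier

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Summarize what an open-weights language model is in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))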

Technology stocks dropped as a result. Many U.S. tech companies saw their valuations fall, including AI chipmaker Nvidia, which shed $593 billion in market value, a single-day record loss for any company on Wall Street.

Steve Povolny, Senior Director of Security Research & Competitive Intelligence at Exabeam, shares, “The release of Chinese-developed DeepSeek has thrown U.S. tech markets into turmoil; this is both justifiable and also, perhaps, a bit overblown. The emergence of a technology that ultimately optimizes chip usage and efficiency is likely to apply pressure on existing large chip vendors, which is a very good thing. As the adage goes, ‘Pressure yields diamonds.’ In this case, I believe competition in this market will drive global optimization, lower costs, and sustain the tailwinds AI needs to drive profitable solutions in the short and longer term.”

Why is the security industry talking about DeepSeek? 

Gal Ringel, Co-Founder and CEO at Mine, explains, “The emergence of a Chinese alternative to ChatGPT, DeepSeek, poses a critical security challenge for U.S. businesses that extends beyond previous concerns about consumer data privacy; it expands to the potential exposure of proprietary business information, trade secrets, and strategic corporate information. Just as TikTok raised red flags about personal data exposure, DeepSeek’s AI tools apply the same rules of risk to sensitive corporate information. Organizations must now urgently audit and track their AI assets to prevent potential data exposure to China. This isn’t just about knowing what AI tools are being used; it’s about understanding where company data flows and ensuring robust safeguards are in place so it doesn’t inadvertently end up in the wrong hands. The parallels to TikTok are striking, but the stakes may be even higher when considering the potential exposure of business data ending up in adversarial hands.”

Cyberattacks against DeepSeek: What are the implications? 

As DeepSeek has risen in prominence and popularity, it has also experienced cyberattacks that caused it to limit registrations.

Toby Lewis, Global Head of Threat Analysis at Darktrace:

The reported cyberattack on DeepSeek likely falls into one of several scenarios, with the most probable being that they were simply a victim of their own success, what we in tech circles call the ‘Slashdot effect,’ where their infrastructure buckled under unexpected user demand following their viral moment on the App Store.

However, we shouldn’t dismiss security concerns. Given the rapid deployment of their platform, there’s a real possibility that opportunistic cybercriminals identified and exploited potential vulnerabilities that more established platforms have had time to address. 

This incident serves as another reminder that security cannot be an afterthought; it must be woven into the very foundations of these systems from the outset. As AI platforms continue to scale rapidly and handle increasingly sensitive data, robust security frameworks aren’t just nice-to-have features; they’re essential.

Stephen Kowski, Field CTO at SlashNext Email Security+:

The surge in DeepSeek’s popularity, particularly overtaking ChatGPT on Apple’s App Store, naturally attracts diverse threat actors ranging from hacktivists to sophisticated state-sponsored groups seeking to exploit or disrupt this emerging AI platform. While DDoS attacks are an obvious concern, the more insidious threats likely involve probing URL parameters, API endpoints, and input validation mechanisms to potentially manipulate or compromise the AI model’s responses. The motivations span from competitive intelligence gathering to potentially using the infrastructure as a launchpad for broader attacks, especially given the open-source nature of the technology. The high-profile success and advanced AI capabilities make DeepSeek an attractive target for opportunistic attackers and those seeking to understand or exploit AI system vulnerabilities.
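
To make the input-validation point concrete, the sketch below shows one way an AI-backed endpoint might screen user-supplied parameters before they ever reach the model. The limits, patterns, and function name are illustrative assumptions, not a description of DeepSeek’s actual safeguards.

    # Illustrative sketch only: basic server-side screening of user-supplied input
    # before it is forwarded to a language model. Thresholds and patterns are assumptions.
    import re

    MAX_PROMPT_CHARS = 4000
    SUSPICIOUS_PATTERNS = [
        # Crude signatures that often appear in prompt-injection or probing attempts.
        re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
        re.compile(r"<\s*script", re.IGNORECASE),
    ]

    def validate_prompt(raw: str) -> str:
        """Reject oversized or suspicious input; return a cleaned prompt."""
        if not raw or len(raw) > MAX_PROMPT_CHARS:
            raise ValueError("prompt missing or too long")
        cleaned = raw.strip()
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(cleaned):
                raise ValueError("prompt rejected by input filter")
        return cleaned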

Eric Schwake, Director of Cybersecurity Strategy at Salt Security:

The swift ascent of DeepSeek and its R1 AI model has stirred significant excitement in the tech sector, underscoring the potential for major upheavals within the AI environment. Although the company’s assertions regarding cost-effectiveness are notable, the abrupt surge in popularity alongside subsequent outages raises questions about the trustworthiness and security of their AI model.

From an API security standpoint, these outages and cyberattacks emphasize the crucial need to safeguard AI-enabled applications and services. DeepSeek’s API presumably served a vital function in delivering its AI assistant, and the outages hint at possible vulnerabilities within the API that attackers may have exploited.

Enterprises contemplating integrating AI models, particularly from fledgling startups, must prioritize API security. This involves performing comprehensive security evaluations, establishing robust authentication and authorization protocols, and maintaining ongoing vigilance for possible vulnerabilities.

The swift embrace of AI models also brings up issues surrounding data privacy and intellectual property. Organizations should meticulously examine the terms of service for AI solutions, ensuring the protection and appropriate use of their data.
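
As one concrete illustration of the authentication and vigilance points above, the sketch below calls a hosted model over an assumed OpenAI-style chat endpoint with the API key read from the environment, a request timeout, and explicit error handling. The endpoint URL and model name are placeholders; a real deployment would layer rate limiting, logging, and continuous monitoring on top.

    # Illustrative sketch: calling a third-party AI API with basic security hygiene.
    # The endpoint URL and model name are placeholders, not a real provider's values.
    import os
    import requests

    API_URL = "https://api.example-ai-provider.com/v1/chat/completions"  # assumed endpoint
    API_KEY = os.environ["AI_API_KEY"]  # keep credentials out of source control

    def ask_model(prompt: str) -> str:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "example-chat-model",  # assumed model name
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=30,  # fail fast instead of hanging on an unreliable service
        )
        response.raise_for_status()  # surface auth failures and outages explicitly
        return response.json()["choices"][0]["message"]["content"]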

What is next? 

Dan Schiappa, Chief Product and Services Officer at Arctic Wolf:

People are already concerned about how much data social media firms have access to, as most recently shown by rulings on TikTok; just imagine what the risks could be with Chinese foundational models being trained on all your data. Considering DeepSeek is already limiting its registrations due to a cyberattack, you have to wonder whether they have the appropriate security and policies in place to maintain your privacy. Likewise, China could continue its trend of IP theft and replicating U.S. and European technologies.

There is no doubt this will start an AI arms race. President Trump already highlighted the $500 billion investment coming to America through a partnership of SoftBank, Oracle and OpenAI, and many others will jump on board. In short, this will force an innovation boom. However, the Sputnik description is a perfect analogy. While Russia was first to space, the West dominated after that, and through the nuclear arms race with Russia and the Strategic Defense Initiative under Reagan, the Soviet Union was bankrupted. While China will be near impossible to bankrupt, the West can collectively out-innovate them and hope that companies that wish to maintain their security will be hesitant to use an AI chatbot from China, which will no doubt steal their data and be vulnerable to attack, as we are already seeing.

Andrew Bolster, Senior R&D Manager at Black Duck:

The release of DeepSeek undeniably showcases the immense potential of open-source AI. By making such a powerful model available under an MIT license, it not only democratizes access to cutting-edge technology but also fosters innovation and collaboration across the global AI community.

However, DeepSeek’s rumored use of OpenAI Chain of Thought data for its initial training highlights the importance of transparency and shared resources in advancing AI. In the context of ‘Open Source AI,’ it’s crucial that the underlying training and evaluation data are open, as well as the initial architecture and the resultant model weights. 

DeepSeek’s achievement in AI efficiency (leveraging a clever Reinforcement Learning-based multi-stage training approach, rather than the current trend of using larger datasets for bigger models), signals a future where AI is accessible beyond the billionaire-classes. Open-source AI, with its transparency and collective development, often outpaces closed source alternatives in terms of adaptability and trust. As more organizations recognize these benefits, we could indeed see a significant shift towards open-source AI, driving a new era of technological advancement.

https://www.securitymagazine.com/articles/101337-deepseek-what-to-know-about-the-chinese-artificial-intelligence-model
