Critical Vulnerabilities Expose Veeam ONE Software to Code Execution

Veeam Software has rolled out patches for four severe security vulnerabilities that expose users of its Veeam ONE product to remote code execution attacks.

The Ohio company issued an urgent advisory to document the flaws, which include a pair of critical issues with CVSS severity scores of 9.9 out of 10.

An IT monitoring and analytics solution, Veeam ONE provides organizations with real-time monitoring, management reporting, and business documentation for Veeam’s backup products.

Veeam is documenting the most serious issue as CVE-2023-38547 (CVSS 9.9), a security defect that could allow an attacker to execute code remotely.

“A vulnerability in Veeam ONE allows an unauthenticated user to gain information about the SQL server connection Veeam ONE uses to access its configuration database. This may lead to remote code execution on the SQL server hosting the Veeam ONE configuration database,” the company warned.

The second critical issue, tracked as CVE-2023-38548 (CVSS 9.8), could allow an attacker to obtain the hashed password for the Veeam ONE Reporting Service.

“A vulnerability in Veeam ONE allows an unprivileged user who has access to the Veeam ONE Web Client the ability to acquire the NTLM hash of the account used by the Veeam ONE Reporting Service,” Veeam said.

Veeam also patched a medium-severity issue (CVE-2023-38549) that allows an attacker with ‘power user’ privileges to obtain the access token of a Veeam ONE administrator. Successful exploitation requires interaction from the administrator.

A fourth issue, tracked as CVE-2023-41723, was also fixed to block attackers with read-only access from viewing the application’s dashboard schedule.

Veeam released hotfixes to address these flaws in Veeam ONE versions 11, 11a, and 12. Administrators are advised to download the patches and install them as soon as possible.

Veeam makes no mention of any of these vulnerabilities being exploited in attacks, but attackers are known to have targeted flaws in its backup solutions.

Related: PoC Exploit Published for Veeam Data Backup Solution Flaw

Related: Serious Vulnerability in Veeam Data Backup Solution

Related: CISA Warns Veeam Backup & Replication Vulnerabilities Being Exploited

https://www.securityweek.com/critical-vulnerabilities-expose-veeam-one-software-to-code-execution/




DPI: Still Effective for the Modern SOC?

There has been an ongoing debate in the security industry over the last decade or so about whether or not deep packet inspection (DPI) is dead. In fact, some have even playfully referred to it as a “dead piece of investment.” This debate has intensified more recently as the modern network has become increasingly dispersed, bringing us to a breaking point where tradeoffs are becoming unsustainable for many organizations.

Recent research (PDF) found that roughly 87 percent of enterprises are taking a multi-cloud approach, which makes deploying solutions that give security teams visibility into their networks increasingly difficult. And quite frankly, even most physical, on-prem environments are getting tricky, particularly as more organizations move to Zero Trust models, which require encryption. Encryption makes it very difficult for DPI to inspect packets in network traffic, and workarounds are typically expensive and hard to deploy.

That said, DPI is not, in fact, dead; but it is increasingly hard to scale. Historically networks were primarily made up of appliances in a controlled number of settings and locations. That made it considerably more manageable to deploy DPI everywhere. Now, the number of devices, taps, sensors and agents we have deployed across a range of diverse environments – from on-prem, to cloud and multi-cloud, even hybrid environments – makes it nearly impossible. Then add to that the sheer bandwidth and variety of traffic hitting all of those points and the compute resources it takes to inspect it all and we are looking at a prohibitively expensive endeavor for the majority of organizations.

This is especially true in Zero Trust environments: teams have to balance the cost of decrypting traffic with what they need to inspect. The financial costs involved with specialized technology necessary for inspecting traffic and the compute costs associated with it can further increase the bill. Then as the network expands, you have to add more DPI and the financial costs rise with it. 

Security teams have to take a risk-based approach to determining where it makes the most sense to deploy DPI. If they have a good understanding of what areas of their networks are high value targets for attackers – for example servers in the billing department that house sensitive customer financial information and that must comply with PCI regulations – they can implement and manage DPI for those areas. Making determinations like this is simply good security practice.

DPI can also aid in behavioral analysis, allowing security teams to identify abnormal network behavior that may not otherwise be detected with other security tools. It can also help analyze specific protocols and applications that are critical for understanding the types of traffic on the network.

As alluded to before, however, where DPI really breaks down is in the ever-evolving dispersed network where cloud, multi-cloud, and on-prem environments really come into play. DPI in the cloud is simply not practical for a number of reasons, ranging from privacy and security challenges to the fact that, in many cases, cloud providers don’t want to provide packets at scale. While packet tap aggregators for the cloud do exist, they are typically expensive and difficult to manage and maintain, and even those require some level of decryption.

For those areas that do not require the same high-fidelity inspection that DPI provides, there are alternative technologies such as flow analysis, which aggregates packets based on common attributes such as IP address, ports and protocols. Flow analysis combined with enriched metadata can also identify unusual or malicious behavior regardless of encryption. Flow can also be combined with logs from network application services, such as DNS, to give an even greater depth of view into what is happening on the network. And it can be done completely in the cloud, enabling automatic provisioning and auto-registration for visibility where and when teams need it, without necessarily requiring appliances or other on-prem hardware.
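To make the flow-analysis idea concrete, here is a minimal sketch (not any vendor's implementation; the record format and field names are my own assumptions) of aggregating individual packet records into flows keyed by the classic 5-tuple of source IP, destination IP, source port, destination port, and protocol:

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packet records into flows keyed by the 5-tuple
    (src IP, dst IP, src port, dst port, protocol), summing
    packet counts and byte totals per flow."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["size"]
    return dict(flows)

# Example: three packets, two of which belong to the same flow.
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 44321, "dport": 443, "proto": "tcp", "size": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 44321, "dport": 443, "proto": "tcp", "size": 600},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "sport": 51000, "dport": 53,  "proto": "udp", "size": 80},
]
flows = aggregate_flows(packets)
```

The point of the design is that downstream analytics only see one record per flow rather than every packet, which is what makes this approach cheap enough to run at cloud scale.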

DPI can still be useful in a modern SOC, but its effectiveness and relevance depend on the specific security needs of the organization. Teams would be wise to deploy it in the areas that pose the highest risks and use it in conjunction with other security technologies, like NetFlow and other traffic metadata log analysis. By striking this balance between DPI and complementary technologies, teams can create a comprehensive security strategy that ensures both network visibility and strong access controls while also vastly lowering TCO.

https://www.securityweek.com/dpi-still-effective-for-the-modern-soc/




Extending ZTNA to Protect Against Insider Threats

Cyberthreats are growing in their pervasiveness, stealth, and severity, and the potential consequences of a breach are more severe than ever before. With increasing skepticism and wariness among security teams, it makes sense to embrace the “never trust, always verify” principle, also known as Zero Trust Network Access (ZTNA). ZTNA aims to authenticate and authorize every user and device, no matter where they are, before granting access to the apps and assets they need.

When authenticated users get access only to the resources they absolutely need for their jobs, the risk of data theft and exfiltration automatically goes down. But it doesn’t subside completely. Despite 94% of organizations feeling confident about their understanding of ZTNA, 68% still experienced a cyberattack last year, according to the 2023 Hybrid Security Trends Report (PDF) from Netwrix.

Why ZTNA Fails

One of the main reasons why ZTNA fails is that most ZTNA implementations tend to focus entirely on securing remote access. The belief that users inside the office perimeter can be intrinsically trusted outright violates ZTNA’s “never trust” approach. It overlooks the threats posed by disgruntled employees and IT staffers who are inside the secure office premises, with authentic credentials but malicious intent. Moreover, even well-meaning employees are prone to making errors in judgment and everyday operations.

Another problem with the remote-only approach to ZTNA is that admins can no longer construct a single application access policy for on- and off-site users. This alone can create loopholes and affect the operational efficiency of organizations. However, extending ZTNA to internal users also has its challenges:

  • Network Infrastructure: To implement ZTNA within the office, organizations need to ensure that their network infrastructure supports the necessary technologies and protocols. The traditional approach to ZTNA may involve deploying SDP (software-defined perimeter), VPNs (virtual private networks), or secure access gateways that can enforce the ZTNA principles within the local network.
  • Network Segmentation: ZTNA relies on the segmentation of networks and resources to limit access based on user identity and device posture. Admins may have to reconfigure their internal network architecture to implement proper network segmentation and access controls.
  • Legacy Devices and Applications: Agent-based ZTNA is sometimes incompatible with certain devices already being used within the organization. Legacy systems and applications hosted on internal data centers may also not integrate seamlessly with ZTNA.

Despite these challenges, extending ZTNA capabilities to users within the office is crucial for providing secure access and improving the overall security posture.

RBAC+ can Extend ZTNA to Users and IT Admins Inside the Office

RBAC+ extends the capabilities of RBAC (Role Based Access Control) which associates access policies with roles and assigns users to specific roles. RBAC+ goes a step further to incorporate user attributes, environmental factors, and just-in-time situational awareness to implement more dynamic, context-aware, and fine-grained access control policies.

RBAC+ allows organizations to map job roles to access policies within the ZTNA framework. This ensures that whether a user is in the office or outside, access to IT resources will be determined by the same ZTNA policy and user identity. In addition to the user identity, environmental and contextual factors, such as the device posture, user location, and time of the day, also guide ZTNA access control to detect anomalies and prevent abuse of privilege in real-time.
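The decision logic described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's RBAC+ engine; the function name, parameters, and thresholds are all my own assumptions. The classic RBAC check runs first, then the contextual checks (device posture, location, time of day) gate the final decision:

```python
from datetime import time

def rbac_plus_allow(user_role, resource_roles, device_compliant,
                    location_allowed, request_time,
                    work_start=time(7, 0), work_end=time(19, 0)):
    """Sketch of an RBAC+-style decision: the role must grant access
    to the resource (classic RBAC), and every contextual check -
    device posture, location, and time of day - must also pass."""
    if user_role not in resource_roles:
        return False                                   # role check (RBAC)
    if not device_compliant or not location_allowed:
        return False                                   # posture and location
    return work_start <= request_time <= work_end      # time-of-day check

# Compliant device, allowed location, business hours: access granted.
allowed = rbac_plus_allow("finance-admin", {"finance-admin", "auditor"},
                          device_compliant=True, location_allowed=True,
                          request_time=time(10, 30))
# Same role, but a non-compliant device: access denied.
denied = rbac_plus_allow("finance-admin", {"finance-admin"},
                         device_compliant=False, location_allowed=True,
                         request_time=time(10, 30))
```

Note that the contextual checks apply identically whether the user sits in the office or connects remotely, which is the consistency the article argues for.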

Modern organizations are now attempting to break silos and adopt cross-functional teams with approaches such as DevOps and SASE (Secure Access Service Edge), which integrates networking and security behind a single management console for better visibility, network performance, and security coverage. With RBAC+, organizations can define and manage today’s dynamic and overlapping job roles, globally or by location. They can customize roles and define extremely granular access policies for individual capabilities across networking and security frameworks.

Continuous Monitoring and Advanced DNS Protections Enhance ZTNA

At the heart of ZTNA is the ability to continually inspect traffic flows once users are granted access. Successful ZTNA implementations leverage AI and ML algorithms to identify suspicious activities based on historical data and available threat intelligence. This ensures that any suspicious access attempts or deviations from normal behavior by authenticated and authorized users can be detected and mitigated right away, reducing the risk of successful insider attacks.

Advanced DNS protections also play a crucial role in fortifying ZTNA, because cybercriminals often seek to redirect or manipulate DNS requests to mine credentials or exfiltrate data. Organizations can use advanced DNS protections, such as DNS filtering, DNSSEC (DNS Security Extensions), and DNS monitoring and analysis, to detect malicious DNS activities and identify and block domains used for phishing and other forms of cyberattacks. By preventing insiders’ access to malicious domains, organizations can enhance the overall effectiveness of ZTNA and mitigate risks to in-house IT resources.
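Of the DNS protections listed, filtering is the simplest to illustrate. Below is a minimal sketch (the function name and policy strings are my own; real DNS filters typically use threat-intelligence feeds and resolver integration) of blocking a query whose name matches, or is a subdomain of, a blocklisted domain:

```python
def dns_filter(query_domain, blocklist):
    """Return 'blocked' if the queried name equals - or is a
    subdomain of - any entry on the blocklist, else 'allowed'."""
    name = query_domain.lower().rstrip(".")
    for entry in blocklist:
        entry = entry.lower().rstrip(".")
        if name == entry or name.endswith("." + entry):
            return "blocked"
    return "allowed"

# Hypothetical blocklist entries for illustration only.
blocklist = {"phish.example", "exfil.example.net"}
```

Matching on subdomains as well as exact names matters because phishing and exfiltration infrastructure commonly rotates hostnames under a single registered domain.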

Strengthen Access Control with Comprehensive ZTNA Capabilities

Threat actors are known to exploit weaknesses in access control and authorization. They are always on the hunt for privileged account credentials, and the dark web provides an easy-access platform for purchasing them. That is why access control must go beyond credentials and MFA (multi-factor authentication). While ZTNA is a key strategy for implementing continuous verification and stringent access controls, it must be complemented with additional components for comprehensive security. As a starting point, comprehensive ZTNA must extend zero-trust access to in-office and remote users consistently and seamlessly. It should also be fortified with continuous monitoring and advanced DNS protections for insider threats and attacks that bypass authentication and authorization mechanisms.

Related: Universal ZTNA is Fundamental to Your Zero Trust Strategy

Related: The History and Evolution of Zero Trust

https://www.securityweek.com/extending-ztna-to-protect-against-insider-threats/




Organizations Respond to HTTP/2 Zero-Day Exploited for DDoS Attacks

Major tech companies and other organizations have rushed to respond to the newly disclosed HTTP/2 zero-day vulnerability that has been exploited to launch the largest distributed denial-of-service (DDoS) attacks seen to date.

The existence of the attack method, named HTTP/2 Rapid Reset, and the underlying vulnerability, tracked as CVE-2023-44487, were disclosed on Tuesday by Cloudflare, AWS and Google.

Each of the tech giants saw DDoS attacks aimed at customers peaking at hundreds of millions of requests per second, far more than they had previously seen. One noteworthy aspect is that the attacks came from relatively small botnets powered by just tens of thousands of devices. 

While their existing DDoS protections were largely able to block the attacks, Google, Cloudflare and AWS implemented additional mitigations for this specific attack vector. In addition, they notified web server software companies, which have started working on patches.

The new attack method abuses an HTTP/2 feature called ‘stream cancellation’. Attackers repeatedly send a request and immediately cancel it, which results in a DoS condition capable of taking down servers and applications running standard HTTP/2 implementations. 
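Mitigations described by the affected providers generally involve tracking how quickly each connection cancels streams and dropping connections that exceed a threshold. The sketch below is a simplified illustration of that rate-limiting idea, not any vendor's actual mitigation; the class name and thresholds are assumptions:

```python
import time
from collections import defaultdict, deque

class ResetRateLimiter:
    """Flag HTTP/2 connections that cancel streams (RST_STREAM)
    faster than a threshold within a sliding time window - a
    simplified version of the rate-limiting idea behind Rapid
    Reset mitigations. Thresholds here are illustrative."""

    def __init__(self, max_resets=100, window_seconds=1.0):
        self.max_resets = max_resets
        self.window = window_seconds
        self.resets = defaultdict(deque)  # conn_id -> reset timestamps

    def on_rst_stream(self, conn_id, now=None):
        """Record a stream cancellation; return True if the
        connection should be closed as abusive."""
        now = time.monotonic() if now is None else now
        q = self.resets[conn_id]
        q.append(now)
        while q and now - q[0] > self.window:  # drop old timestamps
            q.popleft()
        return len(q) > self.max_resets

limiter = ResetRateLimiter(max_resets=3, window_seconds=1.0)
# Four rapid cancellations inside one second trip the limiter.
verdicts = [limiter.on_rst_stream("conn-1", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
```

Because legitimate clients rarely cancel streams at high rates, even a crude per-connection counter like this sharply limits the amplification the attack relies on.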

Several organizations have published blog posts, advisories and alerts on Tuesday in response to the HTTP/2 Rapid Reset vulnerability.

CISA

The US cybersecurity agency CISA has released an alert to warn organizations about the threat posed by HTTP/2 Rapid Reset, providing links to various useful resources, including its own guidance for mitigating DDoS attacks.

Microsoft

Microsoft published an advisory to inform customers that it’s aware of the HTTP/2 Rapid Reset attack. The tech giant has advised users to install the available web server updates and provided a couple of workarounds that involve disabling the HTTP/2 protocol using the Registry Editor, and limiting applications to HTTP/1.1 using protocol settings for each Kestrel endpoint.

NGINX

NGINX warned that the HTTP/2 Rapid Reset vulnerability can — under certain conditions — be exploited to launch a DoS attack on NGINX Open Source, NGINX Plus, and related products that implement the server-side portion of the HTTP/2 specification. Users have been advised to immediately update their NGINX configuration.

OpenSSF

The Open Source Security Foundation (OpenSSF) has published a blog post calling attention to the underlying vulnerability, pointing out that the issue highlights the need for rapid response. 

F5

F5 said the vulnerability allows a remote, unauthenticated attacker to cause an increase in CPU usage that can lead to a DoS condition on BIG-IP systems. The company’s advisory contains a list of affected products and mitigations. 

Netty

Developers of Netty, a framework designed for the development of network applications such as protocol servers and clients, announced the release of version 4.1.100.Final, which fixes the HTTP/2 DDoS attack vector.

Apache

Apache Tomcat developers have confirmed that Tomcat’s HTTP/2 implementation is vulnerable to the Rapid Reset attack. Apache Tomcat 10.1.14 fixes CVE-2023-44487.

Swift

Swift, the programming language for Apple applications, has informed users that if they run a publicly accessible HTTP/2 server using ‘swift-nio-http2’ they should immediately update to version 1.28.0.

Linux distributions

Linux distributions such as Red Hat, Ubuntu and Debian have also published advisories for CVE-2023-44487. 

Related: CISA Releases Guidance on Adopting DDoS Mitigations

Related: Canadian Government Targeted With DDoS Attacks by Pro-Russia Group

Related: After Microsoft and X, Hackers Launch DDoS Attack on Telegram

https://www.securityweek.com/organizations-respond-to-http-2-zero-day-exploited-for-ddos-attacks/




‘HTTP/2 Rapid Reset’ Zero-Day Exploited to Launch Largest DDoS Attacks in History

Cloudflare, Google and AWS revealed on Tuesday that a new zero-day vulnerability named ‘HTTP/2 Rapid Reset’ has been exploited by malicious actors to launch the largest distributed denial-of-service (DDoS) attacks in internet history.

Cloudflare started analyzing the attack method and the underlying vulnerability in late August. The company says an unknown threat actor has exploited a weakness in the widely used HTTP/2 protocol to launch “enormous, hyper-volumetric” DDoS attacks. 

One of the attacks seen by Cloudflare was three times larger than the record-breaking 71 million requests per second (RPS) attack reported by the company in February. Specifically, the HTTP/2 Rapid Reset DDoS campaign peaked at 201 million RPS.

In Google’s case, the company observed a DDoS attack that peaked at 398 million RPS, more than seven times the largest attack the internet giant had previously seen.

Amazon saw over a dozen HTTP/2 Rapid Reset attacks over the course of two days in late August, with the largest peaking at 155 million RPS. 

The new attack method abuses an HTTP/2 feature called ‘stream cancellation’, by repeatedly sending a request and immediately canceling it. 

“By automating this trivial ‘request, cancel, request, cancel’ pattern at scale, threat actors are able to create a denial of service and take down any server or application running the standard implementation of HTTP/2,” Cloudflare explained. 

The company noted that the record-breaking attack aimed at its customers leveraged a botnet of only 20,000 compromised devices. The web security firm regularly sees attacks launched by botnets powered by hundreds of thousands and even millions of machines.

The underlying vulnerability, which is believed to impact every web server implementing HTTP/2, is tracked as CVE-2023-44487 and it has been assigned a ‘high severity’ rating with a CVSS score of 7.5.

Cloudflare and Google have published blog posts providing technical details on the HTTP/2 Rapid Reset attack. AWS has also published a blog post describing the HTTP/2 Rapid Reset attacks it has observed. 

The companies said their existing DDoS protections were largely able to handle HTTP/2 Rapid Reset, but they have implemented additional mitigations for this attack method. Web server software companies have been warned and they have started developing patches that should prevent exploitation of the vulnerability. 

“Any enterprise or individual that is serving an HTTP-based workload to the Internet may be at risk from this attack,” Google warned. “Web applications, services, and APIs on a server or proxy able to communicate using the HTTP/2 protocol could be vulnerable. Organizations should verify that any servers they run that support HTTP/2 are not vulnerable, or apply vendor patches for CVE-2023-44487 to limit impact from this attack vector.”

Related: Canadian Government Targeted With DDoS Attacks by Pro-Russia Group

Related: After Microsoft and X, Hackers Launch DDoS Attack on Telegram

Related: CISA Releases Guidance on Adopting DDoS Mitigations

https://www.securityweek.com/rapid-reset-zero-day-exploited-to-launch-largest-ddos-attacks-in-history/




Organizations Warned of Top 10 Cybersecurity Misconfigurations Seen by CISA, NSA

The US cybersecurity agency CISA and the NSA have issued new guidance on addressing the most common cybersecurity misconfigurations in large organizations.

Impacting many organizations, including those that have achieved a mature security posture, these misconfigurations illustrate a trend of systemic weaknesses and underline the importance of adopting secure-by-design principles during the software development process, CISA and the NSA note.

The ten most common network misconfigurations, the two agencies say, include default software configurations, improper separation of privileges, lack of network segmentation, insufficient network monitoring, poor patch management, bypass of access controls, poor credential hygiene, improper multi-factor authentication (MFA) methods, insufficient access control lists (ACLs) on network shares, and unrestricted code execution.

These misconfigurations, CISA and the NSA note, were identified after years of assessing the security posture of more than 1,000 network enclaves within the Department of Defense (DoD), federal civilian agencies, and other US government organizations.

Many of the assessments focused on Windows and Active Directory environments and the newly published guidance focuses on mitigations for the weaknesses identified in them. However, environments containing other software may have similar misconfigurations, the two agencies say.

By implementing secure-by-design principles and reducing the prevalence of these weaknesses, CISA and the NSA note, software developers can reduce the burden on network defenders.

The two agencies also point out that, with proper training and funding, network security teams can implement mitigations for these weaknesses, by removing default credentials, hardening configurations, disabling unused services, implementing access controls, implementing strong patch management, and through auditing and restricting administrative accounts and privileges.

Secure-by-design and secure-by-default tactics that software manufacturers should embrace, the US agencies say, include embedding security controls into product architecture throughout the entire software development lifecycle (SDLC), removing default passwords, delivering high-quality audit logs to customers, and requiring phishing-resistant MFA.

The mitigations recommended by CISA and the NSA align with the CISA and NIST-developed Cross-Sector Cybersecurity Performance Goals (CPGs) published last year and with the secure-by-design and secure-by-default development principles published earlier this year.

In addition to applying these mitigations, CISA and the NSA recommend that organizations test and validate their security programs against the threat behaviors mapped to the MITRE ATT&CK for Enterprise framework, and that they test their security controls inventory against the ATT&CK techniques.

“The misconfigurations described are all too common in assessments and the techniques listed are standard ones leveraged by multiple malicious actors, resulting in numerous real network compromises. Learn from the weaknesses of others and implement the mitigations properly to protect the network, its sensitive information, and critical missions,” CISA and the NSA say.

Related: CISA, NSA Publish Guidance on IAM Challenges for Developers, Vendors

Related: Faster Patching Pace Validates CISA’s KEV Catalog Initiative

Related: CISA Releases New Identity and Access Management Guidance

https://www.securityweek.com/organizations-warned-of-top-10-cybersecurity-misconfigurations-seen-by-cisa-nsa/




Synqly Joins Race to Fix Security, Infrastructure Product Integrations

Synqly, a Silicon Valley startup with ambitious plans to fix the way security and infrastructure products are integrated, announced its debut Tuesday with an early stage $4 million venture capital bet.

Synqly said the $4 million seed round included investments from SYN Ventures, Okta Ventures, and Secure Octane.

The brainchild of tech veterans Joel Bauman and Steve Erickson, Synqly is working on technology to help organizations maximize security technology investments. The company plans to build an integration platform to allow security products and infrastructure tools to seamlessly work in tandem, all through a single API.

The company believes it can help enterprises save considerable time and resources and avoid the challenges that often come with complicated security and infrastructure integrations.

Synqly’s founders and investors argue that the demand for such a service is obvious. “The average security team manages 76 security products, with this number expected to grow. [Given] no two enterprise environments are the same, it’s nearly impossible for security vendors to keep up with a growing backlog of integration requests,” the company said.

Synqly’s answer is a security product integration platform that allows integration of multiple security and infrastructure products with a single API, ensuring that integration best practices are followed. The company is also promising monitoring and metrics to help vendors to troubleshoot problems and track integration usage.

Related: HiddenLayer Scores Hefty $50M Round for AI Security

Related: Gem Security Lands $23 Million Series A Funding

Related: Investors Betting Big on Upwind for CNAPP Tech

Related: AuthMind Scores $8.5M Seed Funding for ITDR Tech

https://www.securityweek.com/synqly-joins-race-to-fix-security-infrastructure-product-integrations/




Network, Meet Cloud; Cloud, Meet Network

The widely believed notion that the network and the cloud are two different and distinct entities is not true. While it may have been so 10 to 15 years ago that the network was an on-prem architecture that operated independently and required different solutions or protections separate from the cloud, that is no longer the case.

While many organizations have embraced the cloud as part and parcel of their network infrastructure, some companies are still evolving. And it is easy to see why. On-prem architecture ensures that your team has full control over your network, right down to the wire. With appliances, you essentially have one built-in inspection point: you can buy routers and firewalls and then segment everything behind the scenes. With the cloud, all of this is gone; you lose some of these controls in that your network is not neatly contained within a physical infrastructure. You spin up your resources wherever you want – in other regions, other countries – and in doing so, the choke point you once relied on in the on-prem environment is now multiplied across many different access points.

There is comfort in having the control provided with managing an on-prem-only network. But this approach is no longer tenable. As organizations grow, dropping in appliances at every site or datacenter is expensive and often requires additional resources and manpower to set up and deploy.

Cloud services offer the ability to scale resources up or down based on demand. This flexibility is critical for handling varying workloads and ensuring network resources are efficiently utilized and security measures are properly deployed. AWS, Azure and other cloud environments all offer strong protection capabilities, but visibility becomes an issue. You lose the ability to dive into packets, or you pay a premium to retain it.

Organizations must rethink ways to jointly achieve both visibility and security for networks that are not one-size-fits-all. This requires a comprehensive security strategy that encompasses on-premises, multi-cloud, and hybrid environments. This strategy should include regular risk assessments, security policy enforcement, continuous monitoring and threat detection, and incident response mechanisms. Collaboration between CloudOps and SecOps teams to ensure a holistic security approach is critical, along with implementing security solutions designed for multi-cloud environments.

Cloud-native security solutions that combine network metadata with context from third parties can provide a better understanding of what is happening on the network and in a way that teams can visualize the data and know which actions to take.

It’s important to note that the perceived separation between network and cloud can vary widely from one organization to another and may evolve over time as technology and business needs change. Many companies are gradually adopting a more integrated approach, where network and cloud resources are managed holistically to maximize efficiency, scalability, and agility while meeting specific business requirements.

https://www.securityweek.com/network-meet-cloud-cloud-meet-network/




Silverfort Open Sources Lateral Movement Detection Tool

Identity protection provider Silverfort has announced the open source release of a lateral movement detection tool.

Called LATMA (Lateral Movement Analyzer), the tool was designed to collect authentication logs from domain and Active Directory (AD) environments and to deliver a report on the identified patterns.

The tool consists of two modules, namely a collector, which gathers logs from domain controllers and endpoints, and an analyzer, which outputs a report with diagrams, based on the collected logs.

LATMA, Silverfort says, has significantly improved its ability to detect lateral movement, achieving 95% accuracy in flagging suspicious behavior.

The tool’s collector module scans for NTLM authentication logs on domain controllers and for Kerberos authentication logs on endpoints, and harvests sign-in logs from Azure AD. For that, it requires specific port access and necessary permissions.

The analyzer module is fed the authentication data as a spreadsheet and starts searching for suspicious activity using a defined lateral movement algorithm.

LATMA, Silverfort explains, uses the collected information to build a graph representing the network, which depicts endpoints and authentication events. After analyzing the authentication patterns, it builds a sub-graph depicting the abnormal behavior, and generates alerts.

At first, LATMA monitors normal behavior of the users and machines, so it can differentiate between normal and suspicious behavior once the learning period has ended. No alerts are issued during this period.

The tool also generates indicators of compromise (IoCs) associated with the identified suspicious behavior, such as a user account that authenticates to multiple machines in a short period of time, or that authenticates from one machine to another in sequence.

“LATMA generates an alert when at least two of these patterns happen in sequence. For example, if the attacker searches for a target machine to advance to and then successfully advances to it, the algorithm generates an alert,” Silverfort explains.
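The "account fanning out to many machines in a short window" pattern described above can be sketched in a few lines. To be clear, this is not LATMA's actual algorithm, just a hypothetical simplification; the function name, event format, and thresholds are my own assumptions:

```python
from collections import defaultdict

def detect_fanout(events, fanout_threshold=3, window=300):
    """Simplified lateral-movement heuristic: alert when an account
    authenticates to more than `fanout_threshold` distinct machines
    within `window` seconds. `events` is a time-sorted list of
    (timestamp, account, source, target) tuples."""
    alerts = []
    history = defaultdict(list)  # account -> [(timestamp, target), ...]
    for ts, account, source, target in events:
        history[account].append((ts, target))
        # Distinct targets this account reached within the window.
        recent = {t for s, t in history[account] if ts - s <= window}
        if len(recent) > fanout_threshold:
            alerts.append((account, ts, sorted(recent)))
    return alerts

# Hypothetical log: one account hopping across four servers in 3 minutes.
events = [
    (0,   "svc-backup", "ws1",  "srv1"),
    (60,  "svc-backup", "srv1", "srv2"),
    (120, "svc-backup", "srv2", "srv3"),
    (180, "svc-backup", "srv3", "srv4"),  # 4th distinct target in window
]
alerts = detect_fanout(events, fanout_threshold=3, window=300)
```

A real tool like LATMA layers multiple such patterns and, as the quote notes, only alerts when at least two occur in sequence, which keeps the false-positive rate down.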

Related: Google Open Sources Binary File Comparison Tool BinDiff

Related: MITRE and CISA Release Open Source Tool for OT Attack Emulation

Related: NCC Group Releases Open Source Tools for Developers, Pentesters

https://www.securityweek.com/silverfort-open-sources-lateral-movement-detection-tool/




Bankrupt IronNet Shuts Down Operations

The lights have flickered shut at IronNet, the once-promising network security company founded by former NSA director General Keith Alexander.

Bankrupt and out of financing options, IronNet said it would file for Chapter 7 protection while its assets are liquidated.

“Given the unavailability of additional sources of liquidity…IronNet ceased all activities of the company and its subsidiaries and terminated the remaining employees,” the Virginia company said in its latest SEC Form 8-K filing.

It is a remarkable end for the high-flying network security startup that launched in 2018 with $78 million in funding and ambitious plans to cash in on an expanding cybersecurity market.

With Alexander at the helm, IronNet raised in excess of $400 million and rolled out its IronDome collective defense system that promised automated and real-time sharing of threat data and analysis between participating energy companies.

In tandem, the company sold an IronDefense platform that provided behavioral threat detection, visibility, and risk prioritization capabilities to organizations in the financial and energy sectors.

The company would go public in a SPAC transaction but struggled to gain traction in a highly competitive market that includes major vendors like Cisco and Palo Alto Networks and a cadre of well-capitalized startups.

On September 29, the end officially came with a final note from IronNet: “The Company expects that no distributions would be available for stockholders in a Chapter 7 liquidation.”

Related: Ex-NSA Director’s IronNet Raises $78 Million

Related: What’s Going on With VC Security Investments?

Related: Layoffs Hit Dozens of Cybersecurity Companies

Related: Cash-Strapped IronNet Faces Bankruptcy Options

https://www.securityweek.com/bankrupt-ironnet-shuts-down-operations/