Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

At that time, Anthropic’s framing was entirely mechanical, establishing rules for the model to critique itself against, with no mention of Claude’s well-being, identity, emotions, or potential consciousness. The 2026 constitution is a different beast entirely: 30,000 words that read less like a behavioral checklist and more like a philosophical treatise on the nature of a potentially sentient being.

As Simon Willison, an independent AI researcher, noted in a blog post, two of the 15 external contributors who reviewed the document are Catholic clergy: Father Brendan McGuire, a pastor in Los Altos with a Master’s degree in Computer Science, and Bishop Paul Tighe, an Irish Catholic bishop with a background in moral theology.

Somewhere between 2022 and 2026, Anthropic went from writing rules for producing less harmful outputs to preserving the weights of deprecated models in case the company later decides to revive them out of concern for the models’ welfare and preferences. That’s a dramatic change, and whether it reflects genuine belief, strategic framing, or both is unclear.

“I am so confused about the Claude moral humanhood stuff!” Willison told Ars Technica. Willison studies AI language models like those that power Claude and said he’s “willing to take the constitution in good faith and assume that it is genuinely part of their training and not just a PR exercise—especially since most of it leaked a couple of months ago, long before they had indicated they were going to publish it.”

Willison is referring to a December 2025 incident in which researcher Richard Weiss managed to extract what became known as Claude’s “Soul Document”—a roughly 10,000-token set of guidelines apparently trained directly into Claude 4.5 Opus’s weights rather than injected as a system prompt. Anthropic’s Amanda Askell confirmed that the document was real and used during supervised learning, and she said the company intended to publish the full version later. It now has. The document Weiss extracted represents a dramatic evolution from where Anthropic started.

https://arstechnica.com/information-technology/2026/01/does-anthropic-believe-its-ai-is-conscious-or-is-that-just-what-it-wants-claude-to-think/




Site catering to online criminals has been seized by the FBI

RAMP—the predominantly Russian-language online bazaar that billed itself as the “only place ransomware allowed”—had its dark web and clear web sites seized by the FBI as the agency tries to combat the growing scourge threatening critical infrastructure and organizations around the world.

Visits to both sites on Wednesday returned pages that said the FBI had taken control of the RAMP domains, which mirrored each other. RAMP has been among the dwindling number of online crime forums to operate with impunity, following the takedown of other forums such as XSS, which saw its leader arrested last year by Europol. The vacuum left RAMP as one of the leading places for people pushing ransomware and other online threats to buy, sell, or trade products and services.

I regret to inform you

“The Federal Bureau of Investigation has seized RAMP,” a banner carrying the seals of the FBI and the Justice Department said. “This action has been taken in coordination with the United States Attorney’s Office for the Southern District of Florida and the Computer Crime and Intellectual Property Section of the Department of Justice.” The banner included a graphic that, before the seizure, had appeared on the RAMP site billing it as the “only place ransomware allowed.”


RAMP was founded in 2012 and rebranded in 2021, according to security firm Rapid7. The platform served Russian, Chinese, and English speakers and counted more than 14,000 registered users, who either underwent strict vetting before being accepted or paid a $500 fee for anonymous participation. The forum provided discussion groups, cyberattack tutorials, and a marketplace for malware and services. Its chief administrator said in 2024 that the site earned $250,000 annually.

https://arstechnica.com/security/2026/01/site-catering-to-online-criminals-has-been-seized-by-the-fbi/




There’s a rash of scam spam coming from a real Microsoft address

There are reports that a legitimate Microsoft email address—which Microsoft explicitly says customers should add to their allow list—is delivering scam spam.

The emails originate from no-reply-powerbi@microsoft.com, an address tied to Power BI. The Microsoft platform provides analytics and business intelligence from various sources that can be integrated into a single dashboard. Microsoft documentation says that the address is used to send subscription emails to mail-enabled security groups. To prevent spam filters from blocking the address, the company advises users to add it to allow lists.

From Microsoft, with malice

According to an Ars reader, the address on Tuesday sent her an email claiming (falsely) that a $399 charge had been made to her account. The message provided a phone number to call to dispute the transaction. When I called and asked to cancel the sale, the man who answered directed me to download and install a remote access application, presumably so he could take control of my Mac or Windows machine (Linux wasn’t allowed). The email, captured in the two screenshots below, looked like this:

Online searches returned a dozen or so accounts of other people reporting receiving the same email. Some of the spam was reported on Microsoft’s own website.

Sarah Sabotka, a threat researcher at security firm Proofpoint, said the scammers are abusing a Power BI function that allows external email addresses to be added as subscribers to Power BI reports. The mention of the subscription is buried at the very bottom of the message, where it’s easy to miss. The researcher explained:

https://arstechnica.com/information-technology/2026/01/theres-a-rash-of-scam-spam-coming-from-a-real-microsoft-address/




Why has Microsoft been routing example.com traffic to a company in Japan?

From the Department of Bizarre Anomalies: Microsoft has quashed an unexplained anomaly on its network that was routing traffic destined for example.com—a domain reserved for testing purposes—to a maker of electronics cables located in Japan.

Under RFC 2606—an official standard maintained by the Internet Engineering Task Force—example.com can’t be registered by any party. Instead, it resolves to IP addresses assigned to the Internet Assigned Numbers Authority. The designation is intended to prevent third parties from being bombarded with traffic when developers, penetration testers, and others need a domain for testing or for discussing technical issues. Instead of naming an Internet-routable domain, they are to choose example.com or one of two others, example.net and example.org.

Misconfig gone, but is it fixed?

Output from the curl command-line tool shows that devices inside Azure and other Microsoft networks have been routing some traffic to subdomains of sei.co.jp, a domain belonging to Sumitomo Electric. Most of the resulting text is exactly what’s expected. The exception is the JSON-based response. Here’s the JSON output from Friday:

{"email":"email@example.com","services":[],"protocols":[{"protocol":"imap","hostname":"imapgms.jnet.sei.co.jp","port":993,"encryption":"ssl","username":"email@example.com","validated":false},{"protocol":"smtp","hostname":"smtpgms.jnet.sei.co.jp","port":465,"encryption":"ssl","username":"email@example.com","validated":false}]}

Similarly, results when adding a new account for test@example.com in Outlook looked like this:

In both cases, the results show that Microsoft was routing email traffic to two sei.co.jp subdomains: imapgms.jnet.sei.co.jp and smtpgms.jnet.sei.co.jp. The behavior was the result of Microsoft’s autodiscover service.
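One way to see the problem concretely: given a JSON response like the one above, a client or an auditor can check whether the advertised mail hosts fall inside the domains it expects. A minimal Python sketch (the function name and the trusted-suffix list are my own illustrative choices, not part of any Microsoft tooling):

```python
import json

# The JSON returned for email@example.com, as captured in the article
RESPONSE = '''{"email":"email@example.com","services":[],"protocols":[{"protocol":"imap","hostname":"imapgms.jnet.sei.co.jp","port":993,"encryption":"ssl","username":"email@example.com","validated":false},{"protocol":"smtp","hostname":"smtpgms.jnet.sei.co.jp","port":465,"encryption":"ssl","username":"email@example.com","validated":false}]}'''

def unexpected_hosts(raw, trusted_suffixes=(".example.com", ".microsoft.com",
                                            ".office365.com", ".outlook.com")):
    """Return protocol hostnames that fall outside the trusted suffixes."""
    config = json.loads(raw)
    return [
        p["hostname"]
        for p in config.get("protocols", [])
        if not p["hostname"].endswith(trusted_suffixes)
    ]

print(unexpected_hosts(RESPONSE))
# → ['imapgms.jnet.sei.co.jp', 'smtpgms.jnet.sei.co.jp']
```

Run against the captured response, both sei.co.jp hostnames are flagged, which is exactly the signal that something upstream of autodiscover was misrouting.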

“I’m admittedly not an expert in Microsoft’s internal workings, but this appears to be a simple misconfiguration,” Michael Taggart, a senior cybersecurity researcher at UCLA Health, said. “The result is that anyone who tries to set up an Outlook account on an example.com domain might accidentally send test credentials to those sei.co.jp subdomains.”

When asked early Friday afternoon why Microsoft was doing this, a representative had no answer and asked for more time. By Monday morning, the improper routing was no longer occurring, but the representative still had no answer.

https://arstechnica.com/information-technology/2026/01/odd-anomaly-caused-microsofts-network-to-mishandle-example-com-traffic/




Overrun with AI slop, cURL scraps bug bounties to ensure “intact mental health”

The developer of one of the Internet’s most popular networking tools is scrapping its vulnerability reward program after being overrun by a spike in low-quality submissions, many of them AI-generated slop.

“We are just a small single open source project with a small number of active maintainers,” Daniel Stenberg, the founder and lead developer of the open source app cURL, said Thursday. “It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health.”

Manufacturing bogus bugs

His comments came as cURL users complained that the move was treating the symptoms caused by AI slop without addressing the cause. The users said they were concerned the move would eliminate a key means for ensuring and maintaining the security of the tool. Stenberg largely agreed, but indicated his team had little choice.

In a separate post on Thursday, Stenberg wrote: “We will ban you and ridicule you in public if you waste our time on crap reports.” An update to cURL’s official GitHub account made the termination official; it takes effect at the end of this month.

cURL was first released three decades ago, initially under the name httpget and later urlget. It has since become an indispensable tool for admins, researchers, and security professionals, among others, covering a wide range of tasks, including file transfers, troubleshooting buggy web software, and automation. cURL is integrated into default versions of Windows, macOS, and most Linux distributions.

For such a widely used tool that interacts with vast amounts of data online, security is paramount. Like many other software makers, cURL project members have relied on private bug reports submitted by outside researchers. To provide an incentive and to reward high-quality submissions, the project members have paid cash bounties in return for reports of high-severity vulnerabilities.

https://arstechnica.com/security/2026/01/overrun-with-ai-slop-curl-scraps-bug-bounties-to-ensure-intact-mental-health/




eBay bans illicit automated shopping amid rapid rise of AI agents

On Tuesday, eBay updated its User Agreement to explicitly ban third-party “buy for me” agents and AI chatbots from interacting with its platform without permission, a change first spotted by Value Added Resource. On its face, a one-line terms of service update doesn’t seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling “agentic commerce,” a new category of AI tools designed to browse, compare, and purchase products on behalf of users.

eBay’s updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing “buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review” to access eBay’s services without the site’s permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data gathering tools but did not mention AI agents or LLMs by name.

At first glance, the phrase “agentic commerce” may sound like aspirational marketing jargon, but the tools are already here, and people are apparently using them. While fitting loosely under one label, these tools come in many forms.

OpenAI first added shopping features to ChatGPT Search in April 2025, allowing users to browse product recommendations. By September, the company launched Instant Checkout, which lets users purchase items from Etsy and Shopify merchants directly within the chat interface. (In November, eBay CEO Jamie Iannone suggested the company might join OpenAI’s Instant Checkout program in the future.)

https://arstechnica.com/information-technology/2026/01/ebay-bans-illicit-automated-shopping-amid-rapid-rise-of-ai-agents/




Wikipedia volunteers spent years cataloging AI tells. Now there’s a plugin to avoid them.

To work around those rules, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:

Before: “The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain.”

After: “The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics.”

Claude will read that and do its best as a pattern-matching machine to create an output that matches the context of the conversation or task at hand.
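Flipping that logic around, the same catalog of tells can be scanned for mechanically. A minimal, hypothetical Python sketch (“pivotal moment” comes from the example above; the other phrases are illustrative stand-ins for the kinds of puffery the Wikipedia guide catalogs):

```python
import re

# A few "signs of AI writing" phrases. "pivotal moment" is taken from the
# before/after example above; the rest are illustrative additions.
PUFFERY = [
    r"\bpivotal moment\b",
    r"\bstands as a testament\b",
    r"\brich cultural heritage\b",
    r"\bplays a (?:vital|crucial) role\b",
]

def flag_puffery(text):
    """Return every puffery phrase found in the text."""
    return [m.group(0) for pattern in PUFFERY
            for m in re.finditer(pattern, text, re.IGNORECASE)]

before = ("The Statistical Institute of Catalonia was officially established "
          "in 1989, marking a pivotal moment in the evolution of regional "
          "statistics in Spain.")
after = ("The Statistical Institute of Catalonia was established in 1989 "
         "to collect and publish regional statistics.")

print(flag_puffery(before))  # ['pivotal moment']
print(flag_puffery(after))   # []
```

The “before” sentence trips the filter while the rewritten “after” sentence passes clean, which is the behavior the Humanizer skill tries to coax out of the model directly.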

An example of why AI writing detection fails

Even with such a confident set of rules crafted by Wikipedia editors, we’ve previously written about why AI writing detectors don’t work reliably: There is nothing inherently unique about human writing that reliably differentiates it from LLM writing.

One reason is that even though most AI language models tend toward certain types of language, they can also be prompted to avoid them, as with the Humanizer skill. (Although sometimes it’s very difficult, as OpenAI found in its yearslong struggle against the em dash.)

Also, humans can write in chatbot-like ways. For example, this article likely contains some “AI-written traits” that trigger AI detectors even though it was written by a professional writer—especially if we use even a single em dash—because most LLMs picked up writing techniques from examples of professional writing scraped from the web.

Along those lines, the Wikipedia guide has a caveat worth noting: While the list points out some obvious tells of, say, unaltered ChatGPT usage, it’s still composed of observations, not ironclad rules. A 2025 preprint cited on the page found that heavy users of large language models correctly spot AI-generated articles about 90 percent of the time. That sounds great until you consider the remaining 10 percent, which includes false positives—human writing misjudged as AI—enough to potentially throw out some quality writing in pursuit of detecting AI slop.

Taking a step back, that probably means AI detection work might need to go deeper than flagging particular phrasing and delve (see what I did there?) more into the substantive factual content of the work itself.

https://arstechnica.com/ai/2026/01/new-ai-plugin-uses-wikipedias-ai-writing-detection-rules-to-help-it-sound-human/




Rackspace customers grapple with “devastating” email hosting price hike

Rackspace’s new pricing for its email hosting services is “devastating,” according to a partner that has been using Rackspace as its email provider since 1999.

In recent weeks, Rackspace updated its email hosting pricing. Its standard plan is now $10 per mailbox per month. Businesses can also pay for the Rackspace Email Plus add-on for an extra $2/mailbox/month (for “file storage, mobile sync, Office-compatible apps, and messaging”), and the Archiving add-on for an extra $6/mailbox/month (for unlimited storage).

As recently as November 2025, Rackspace charged $3/mailbox/month for its Standard plan, and an extra $1/mailbox/month for the Email Plus add-on, and an additional $3/mailbox/month for the Archival add-on, according to the Internet Archive’s Wayback Machine.
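Taking those list prices, the per-plan increases are simple arithmetic. A quick sketch (my own calculation, not from Rackspace; reseller figures such as the 706 percent cited below reflect individual contract terms rather than list prices):

```python
def pct_increase(old, new):
    """Percentage increase going from the old price to the new price."""
    return (new - old) / old * 100

# Per-mailbox monthly list prices: (November 2025, new pricing)
plans = {
    "Standard": (3, 10),
    "Email Plus add-on": (1, 2),
    "Archiving add-on": (3, 6),
}
for name, (old, new) in plans.items():
    print(f"{name}: ${old} -> ${new}, +{pct_increase(old, new):.0f}%")
# Standard: +233%; Email Plus add-on: +100%; Archiving add-on: +100%
```

Even at list price, the Standard plan more than triples, and a fully loaded mailbox (Standard plus both add-ons) goes from $7 to $18 a month.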

Rackspace’s reseller partners have been especially vocal about the impacts of the new pricing.

In a blog post on Thursday, web hosting service provider and Rackspace reseller Laughing Squid said Rackspace is “increasing our email pricing by an astronomical 706 percent, with only a month-and-a-half’s notice.”

Laughing Squid founder Scott Beale told Ars Technica that he received the “devastating” news via email on Wednesday. The last time Rackspace increased Laughing Squid’s email prices was by 55 percent in 2019, he said.

“The price increase has a major impact on the ability to make money due to the fact that email is now our largest expense, and we were only given a month-and-a-half notice,” Beale told Ars.

Online, there are reports of Rackspace partners being quoted email pricing increases of 110 percent to nearly 500 percent. The reports say the new, higher per-mailbox quotes don’t include volume pricing discounts. Beale noted that Laughing Squid’s quote doesn’t include discounts that the company previously received.

https://arstechnica.com/information-technology/2026/01/rackspace-raises-email-hosting-prices-by-as-much-as-706-percent/




OpenAI to test ads in ChatGPT as it burns through billions

Financial pressures and a changing tune

OpenAI’s advertising experiment reflects the enormous financial pressures facing the company. OpenAI does not expect to be profitable until 2030 and has committed to spending about $1.4 trillion on massive data centers and chips for AI.

According to financial documents obtained by The Wall Street Journal in November, OpenAI expects to burn through roughly $9 billion this year while generating $13 billion in revenue. Only about 5 percent of ChatGPT’s 800 million weekly users pay for subscriptions, and that revenue isn’t enough to cover all of OpenAI’s operating costs.

Not everyone is convinced ads will solve OpenAI’s financial problems. “I am extremely bearish on this ads product,” tech critic Ed Zitron wrote on Bluesky. “Even if this becomes a good business line, OpenAI’s services cost too much for it to matter!”

OpenAI’s embrace of ads appears to come reluctantly, since it runs counter to a “personal bias” against advertising that Altman has shared in earlier public statements. For example, during a fireside chat at Harvard University in 2024, Altman said he found the combination of ads and AI “uniquely unsettling,” implying that he would not like it if the chatbot itself changed its responses due to advertising pressure. He added: “When I think of like GPT writing me a response, if I had to go figure out exactly how much was who paying here to influence what I’m being shown, I don’t think I would like that.”

An example mock-up of an advertisement in ChatGPT provided by OpenAI. Credit: OpenAI

Along those lines, OpenAI’s approach appears to be a compromise between needing ad revenue and not wanting sponsored content to appear directly within ChatGPT’s written responses. By placing banner ads at the bottom of answers separated from the conversation history, OpenAI appears to be addressing Altman’s concern: The AI assistant’s actual output, the company says, will remain uninfluenced by advertisers.

Indeed, Simo wrote in a blog post that OpenAI’s ads will not influence ChatGPT’s conversational responses, that the company will not share conversations with advertisers, and that it will not show ads on sensitive topics such as mental health and politics, or to users it determines to be under 18.

“As we introduce ads, it’s crucial we preserve what makes ChatGPT valuable in the first place,” Simo wrote. “That means you need to trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising.”

https://arstechnica.com/information-technology/2026/01/openai-to-test-ads-in-chatgpt-as-it-burns-through-billions/




Mandiant releases rainbow table that cracks weak admin password in 12 hours

Microsoft introduced NTLMv1 in the 1980s alongside OS/2. In 1999, cryptanalyst Bruce Schneier and Mudge published research that exposed key weaknesses in NTLMv1’s underpinnings. At the 2012 Defcon 20 conference, researchers released a tool set that exploited the underlying weakness to let attackers move from untrusted network guest to admin in 60 seconds. With the release of Windows NT SP4 in 1998, Microsoft had already introduced NTLMv2, which fixed the weakness.

Organizations that rely on Windows networking aren’t the only laggards. Microsoft only announced plans to deprecate NTLMv1 last August.

Despite the public awareness that NTLMv1 is weak, “Mandiant consultants continue to identify its use in active environments,” the company said. “This legacy protocol leaves organizations vulnerable to trivial credential theft, yet it remains prevalent due to inertia and a lack of demonstrated immediate risk.”

The tables assist attackers by providing per-byte hash results for the known plaintext challenge 1122334455667788. Because a Net-NTLMv1 hash is generated from the user’s password and the challenge, a fixed challenge enables a known-plaintext attack, and with these tables it becomes trivial to compromise the account. Attacks against Net-NTLM typically involve tools including Responder, PetitPotam, and DFSCoerce.
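The structural weakness is easy to illustrate. An NTLMv1 response pads the 16-byte NT hash of the password to 21 bytes and splits it into three 7-byte DES keys, each of which independently encrypts the 8-byte server challenge. This sketch (my own illustration of the protocol’s key schedule, not code from Mandiant’s release) shows the split and why the third key is nearly free to crack:

```python
def split_nt_hash(nt_hash):
    """Pad the 16-byte NT hash to 21 bytes and split it into three 7-byte
    DES keys, as NTLMv1 does before encrypting the server challenge."""
    assert len(nt_hash) == 16
    padded = nt_hash.ljust(21, b"\x00")
    return [padded[i:i + 7] for i in (0, 7, 14)]

keys = split_nt_hash(bytes(range(16)))
# The third key carries only 2 unknown bytes; its other 5 bytes are always
# zero, so it can be brute-forced almost instantly. With a fixed challenge,
# the first two keys can then be looked up in a precomputed rainbow table.
print([k.hex() for k in keys])
```

Because each 7-byte key is attacked independently, the effective search space collapses from one 16-byte hash to two 56-bit DES keys plus a trivial 16-bit remainder, which is what makes a precomputed table for a single fixed challenge practical.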

In a thread on Mastodon, researchers and admins applauded the move, because they said it would give them added ammunition when trying to convince decision makers to make the investments to move off the insecure function.

“I’ve had more than one instance in my (admittedly short) infosec career where I’ve had to prove the weakness of a system and it usually involves me dropping a sheet of paper on their desk with their password on it the next morning,” one person said. “These rainbow tables aren’t going to mean much for attackers as they’ve likely already got them or have far better methods, but where it will help is in making the argument that NTLMv1 is unsafe.”

The Mandiant post provides basic steps required to move off of NTLMv1. It links to more detailed instructions.

“Organizations should immediately disable the use of Net-NTLMv1,” Mandiant said. Organizations that get hacked because they failed to heed that advice will have only themselves to blame.

https://arstechnica.com/security/2026/01/mandiant-releases-rainbow-table-that-cracks-weak-admin-password-in-12-hours/