YouTube denies AI was involved with odd removals of tech tutorials

Those users could become loyal to Microsoft, White said. And eventually, some users may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but perhaps recognized the risk of over-moderation and, therefore, wasn’t allowing AI to issue strikes on his account.

But that was just a “theory” that he and other creators came up with and couldn’t confirm, since YouTube’s chatbot that supports creators seemed to also be “suspiciously AI-driven,” apparently auto-responding even when a “supervisor” is connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that unexpected changes to automated content moderation could abruptly knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles, White and Britec said.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube suggests that human review is causing the removals, which likely doesn’t completely ease creators’ fears about arbitrary takedowns.

https://arstechnica.com/tech-policy/2025/10/youtube-denies-ai-was-involved-with-odd-removals-of-tech-tutorials/




FCC to rescind ruling that said ISPs are required to secure their networks

The Federal Communications Commission will vote in November to repeal a ruling that requires telecom providers to secure their networks, acting on a request from the biggest lobby groups representing Internet providers.

FCC Chairman Brendan Carr said the ruling, adopted in January just before Republicans gained majority control of the commission, “exceeded the agency’s authority and did not present an effective or agile response to the relevant cybersecurity threats.” Carr said the vote scheduled for November 20 comes after “extensive FCC engagement with carriers” who have taken “substantial steps… to strengthen their cybersecurity defenses.”

The FCC’s January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, “affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications.”

“The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will ‘illegally activate interceptions or other forms of surveillance within the carrier’s switching premises without its knowledge,’” the January order said. “With this Declaratory Ruling, we clarify that telecommunications carriers’ duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks.”

ISPs get what they want

The declaratory ruling was paired with a Notice of Proposed Rulemaking that would have led to stricter rules requiring specific steps to secure networks against unauthorized interception. Carr voted against the decision at the time.

Although the declaratory ruling didn’t yet have specific rules to go along with it, the FCC at the time said it had some teeth. “Even absent rules adopted by the Commission, such as those proposed below, we believe that telecommunications carriers would be unlikely to satisfy their statutory obligations under section 105 without adopting certain basic cybersecurity practices for their communications systems and services,” the January order said. “For example, basic cybersecurity hygiene practices such as implementing role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication are necessary for any sensitive computer system. Furthermore, a failure to patch known vulnerabilities or to employ best practices that are known to be necessary in response to identified exploits would appear to fall short of fulfilling this statutory obligation.”

https://arstechnica.com/tech-policy/2025/10/fcc-dumps-plan-for-telecom-security-rules-that-internet-providers-dont-like/




AT&T sues ad industry watchdog instead of pulling ads that slam T-Mobile

AT&T claims rule no longer applies

AT&T’s claim that it didn’t violate an NAD rule hinges partly on when its press release was issued. The carrier claims the rule against referencing NAD decisions only applies for a short period of time after each NAD ruling.

“NAD now takes the remarkable position that any former participant in an NAD proceeding is forever barred from truthfully referencing NAD’s own public findings about a competitor’s deceptive advertising,” AT&T said. The lawsuit argued that “if NAD’s procedures were ever binding on AT&T, their binding effect ceased at the conclusion of the proceeding or a reasonable time thereafter.”

AT&T also slammed the NAD for failing to rein in T-Mobile’s deceptive ads. The group’s slow process let T-Mobile air deceptive advertisements without meaningful consequences, and the “NAD has repeatedly failed to refer continued violations to the FTC,” AT&T said.

“Over the past several years, NAD has repeatedly deemed T-Mobile’s ads to be misleading, false, or unsubstantiated,” AT&T said. “But over and over, T-Mobile has gamed the system to avoid timely redressing its behavior. NAD’s process is often slow, and T-Mobile knows it can make that process even slower by asking for extensions and delaying fixes.”

We’ve reported extensively on both carriers’ history of misleading advertisements over the years. That includes T-Mobile promising never to raise prices on certain plans and then raising them anyway. AT&T used to advertise 4G LTE service as “5GE,” and was rebuked for an ad that falsely claimed the carrier was already offering cellular coverage from space. AT&T and T-Mobile have both gotten in trouble for misleading promises of unlimited data.

AT&T says vague ad didn’t violate rule

AT&T’s lawsuit alleged that the NAD press release “intentionally impl[ied] that AT&T mischaracterized NAD’s prior decisions about T-Mobile’s deceptive advertising.” However, the NAD’s public stance is that AT&T violated the rule by using NAD decisions for promotional purposes, not by mischaracterizing the decisions.

https://arstechnica.com/tech-policy/2025/10/att-sues-ad-industry-watchdog-instead-of-pulling-ads-that-slam-t-mobile/




Trump admin demands states exempt ISPs from net neutrality and price laws

The NTIA decision not to give funds to states that enforce such rules “is essential to ensure that BEAD funds go where Congress intended—to build and operate networks in hard-to-serve areas—not to prop up regulatory experiments that drive investment away,” she said.

States are complying, Roth says

Roth indicated that at least some states are complying with the NTIA’s demands. These demands also include cutting red tape related to permits and access to utility poles and increasing the amount of matching dollars that ISPs themselves put into the projects. “In the coming weeks we will announce the approval of several state plans that incorporate these commitments,” she said. “We remain on track to approve the majority of state plans and get money out the door this year.”

Before Trump won the election, the Biden administration developed rules for BEAD and approved initial funding plans submitted by every state and territory. The Trump administration’s overhaul of the program rules has delayed the funding.

While the Biden NTIA pushed states to require specific prices for low-income plans, the Trump administration prohibits states “from explicitly or implicitly setting the LCSO [low-cost service option] rate” that ISPs must offer. Instead, ISPs get to choose what counts as “low-cost.”

The Trump administration also removed a preference for fiber projects, resulting in more money going to satellite providers—though not as much as SpaceX CEO Elon Musk has demanded. The changes imposed by the Trump NTIA have caused states to allocate less funding overall, leading to an ongoing dispute over what will happen to the $42 billion program’s leftover money.

Roth said the NTIA is “considering how states can use some of the BEAD savings—what has commonly been referred to as nondeployment money—on key outcomes like permitting reform,” but added that “no final decisions have been made.”

https://arstechnica.com/tech-policy/2025/10/trump-admin-demands-states-exempt-isps-from-net-neutrality-and-price-laws/




If things in America weren’t stupid enough, Texas is suing Tylenol maker

While the underlying cause or causes of autism spectrum disorder remain elusive and appear likely to be a complex interplay of genetic and environmental factors, President Trump and his anti-vaccine health secretary Robert F. Kennedy Jr.—neither of whom have any scientific or medical background whatsoever—have decided to pin the blame on Tylenol, a common pain reliever and fever reducer that has no proven link to autism.

And now, Texas Attorney General Ken Paxton is suing Kenvue, the maker of Tylenol, and Johnson & Johnson, which previously sold Tylenol, claiming that they have been “deceptively marketing Tylenol” knowing that it “leads to a significantly increased risk of autism and other disorders.”

To back that claim, Paxton relies on the “considerable body of evidence… recently highlighted by the Trump Administration.”

Of course, there is no “considerable” evidence for this claim, only tenuous associations and conflicting studies. Trump and Kennedy’s justification for blaming Tylenol was revealed in a rambling, incoherent press conference last month, in which Trump spoke of a “rumor” about Tylenol and his “opinion” on the matter. Still, he firmly warned against its use, saying well over a dozen times: “don’t take Tylenol.”

“Don’t take Tylenol. There’s no downside. Don’t take it. You’ll be uncomfortable. It won’t be as easy maybe, but don’t take it if you’re pregnant. Don’t take Tylenol and don’t give it to the baby after the baby is born,” he said.

“Scientifically unfounded”

As Ars has reported previously, some studies have found an association between use of Tylenol (aka acetaminophen or paracetamol) and a higher risk of autism. But many of the studies finding such an association have significant flaws, and other studies have found no link. That includes a highly regarded Swedish study that compared autism risk among siblings with different acetaminophen exposures during pregnancy but otherwise similar genetic and environmental risks. Acetaminophen didn’t make a difference, suggesting other genetic and/or environmental factors might explain any associations. Further, even if there is a real association (aka a correlation) between acetaminophen use and autism risk, that does not mean the pain reliever is the cause of autism.

https://arstechnica.com/health/2025/10/if-things-in-america-werent-stupid-enough-texas-is-suing-tylenol-maker/




Senators move to keep Big Tech’s creepy companion bots away from kids

Big Tech says bans aren’t the answer

As the bill advances, it could change, senators and parents acknowledged at the press conference. It will likely face backlash from privacy advocates who have raised concerns that widely collecting personal data for age verification puts sensitive information at risk of a data breach or other misuse.

The tech industry has already voiced opposition. On Tuesday, Chamber of Progress, a Big Tech trade group, criticized the law as taking a “heavy-handed approach” to child safety. The group’s vice president of US policy and government relations, K.J. Bagchi, said that “we all want to keep kids safe, but the answer is balance, not bans.

“It’s better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise,” Bagchi said.

However, several organizations dedicated to child safety online, including the Young People’s Alliance, the Tech Justice Law Project, and the Institute for Families and Technology, cheered senators’ announcement Tuesday. The GUARD Act, these groups told Time, is just “one part of a national movement to protect children and teens from the dangers of companion chatbots.”

Mourning parents are rallying behind that movement. Earlier this month, Garcia praised California for “finally” passing the first state law requiring companies to protect their users who express suicidal ideations to chatbots.

“American families, like mine, are in a battle for the online safety of our children,” Garcia said at that time.

During Tuesday’s press conference, Blumenthal noted that the chatbot ban bill was just one initiative of many that he and Hawley intend to raise to heighten scrutiny on AI firms.

https://arstechnica.com/tech-policy/2025/10/senators-move-to-keep-big-techs-creepy-companion-bots-away-from-kids/




Python plan to boost software security foiled by Trump admin’s anti-DEI rules

“Given the value of the grant to the community and the PSF, we did our utmost to get clarity on the terms and to find a way to move forward in concert with our values. We consulted our NSF contacts and reviewed decisions made by other organizations in similar circumstances, particularly The Carpentries,” the Python Software Foundation said.

Board voted unanimously to withdraw application

The Carpentries, which teaches computational and data science skills to researchers, said in June that it withdrew its grant proposal after “we were notified that our proposal was flagged for DEI content, namely, for ‘the retention of underrepresented students, which has a limitation or preference in outreach, recruitment, participation that is not aligned to NSF priorities.’” The Carpentries was also concerned about the National Science Foundation rule against grant recipients advancing or promoting DEI in “any” program, a change that took effect in May.

“These new requirements mean that, in order to accept NSF funds, we would need to agree to discontinue all DEI focused programming, even if those activities are not carried out with NSF funds,” The Carpentries’ announcement in June said, explaining the decision to rescind the proposal.

The Python Software Foundation similarly decided that it “can’t agree to a statement that we won’t operate any programs that ‘advance or promote’ diversity, equity, and inclusion, as it would be a betrayal of our mission and our community,” it said yesterday. The foundation board “voted unanimously to withdraw” the application.

The Python foundation said it is disappointed because the project would have offered “invaluable advances to the Python and greater open source community, protecting millions of PyPI users from attempted supply-chain attacks.” The plan was to “create new tools for automated proactive review of all packages uploaded to PyPI, rather than the current process of reactive-only review. These novel tools would rely on capability analysis, designed based on a dataset of known malware. Beyond just protecting PyPI users, the outputs of this work could be transferable for all open source software package registries, such as NPM and Crates.io, improving security across multiple open source ecosystems.”
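The PSF has not published the design of those tools, but the general idea of capability analysis can be illustrated with a minimal sketch: statically scan a package’s source for signals that it can do dangerous things (dynamic code execution, spawning processes, opening network connections), which a reviewer or classifier could then weigh against a dataset of known malware. The capability lists and function below are hypothetical illustrations, not the PSF’s actual tooling.

```python
import ast

# Hypothetical capability signals for this sketch; real malware-detection
# tooling would use a far richer model derived from a labeled malware corpus.
SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}
SUSPICIOUS_MODULES = {"socket", "subprocess", "ctypes", "urllib.request"}

def extract_capabilities(source: str) -> set[str]:
    """Statically collect capability signals from Python source code."""
    tree = ast.parse(source)
    found: set[str] = set()
    for node in ast.walk(tree):
        # Imports of modules that grant network/process/FFI capabilities.
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name in SUSPICIOUS_MODULES:
                    found.add(f"import:{alias.name}")
        elif isinstance(node, ast.ImportFrom) and node.module in SUSPICIOUS_MODULES:
            found.add(f"import:{node.module}")
        # Direct calls to dynamic-execution builtins.
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                found.add(f"call:{node.func.id}")
    return found

sample = "import socket\nexec('print(1)')\n"
print(sorted(extract_capabilities(sample)))  # ['call:exec', 'import:socket']
```

A proactive review pipeline along these lines could run such analysis on every upload and flag packages whose declared purpose doesn’t plausibly need the capabilities they exhibit.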

The foundation is still hoping to do that work and ended its blog post with a call for donations from individuals and companies that use Python.

https://arstechnica.com/tech-policy/2025/10/python-foundation-rejects-1-5-million-grant-over-trump-admins-anti-dei-rules/




Australia’s social media ban is “problematic,” but platforms will comply anyway

Social media platforms have agreed to comply with Australia’s social media ban for users under 16 years old, begrudgingly embracing the world’s most restrictive online child safety law.

On Tuesday, Meta, Snap, and TikTok confirmed to Australia’s parliament that they’ll start removing and deactivating more than a million underage accounts when the law’s enforcement begins on December 10, Reuters reported.

Firms risk fines of up to $32.5 million for failing to block underage users.

Age checks are expected to be spotty, however, and Australia is still “scrambling” to figure out “key issues around enforcement,” including detailing firms’ precise obligations, AFP reported.

An FAQ managed by Australia’s eSafety regulator noted that platforms will be expected to find the accounts of all users under 16.

Those users must be allowed to download their data easily before their account is removed.

Some platforms can otherwise allow users to simply deactivate and retain their data until they reach age 17. Meta and TikTok expect to go that route, but Australia’s regulator warned that “users should not rely on platforms to provide this option.”

Additionally, platforms must prepare to catch kids who skirt age gates, the regulator said, and must block anyone under 16 from opening a new account. Beyond that, they’re expected to prevent “workarounds” to “bypass restrictions,” such as kids using AI to fake IDs, deepfakes to trick face scans, or the use of virtual private networks (VPNs) to alter their location to basically anywhere else in the world with less restrictive child safety policies.

Kids discovered inappropriately accessing social media should be easy to report, too, Australia’s regulator said.

https://arstechnica.com/tech-policy/2025/10/social-media-firms-abandon-fight-against-australia-law-banning-under-16-users/




AT&T ad congratulating itself for its ethics violated an ad-industry rule

NAD: Our rulings can’t be used in ads

Violating a National Advertising Division rule isn’t the same as violating a US law. But advertisers rely extensively on the self-regulatory system to handle disputes and determine whether specific ads are misleading and should be pulled.

Companies generally abide by the self-regulatory body’s rulings. While they try to massage the truth in ways that favor their own brands, they want to have some credibility left over to bring complaints against misleading ads launched by their competitors. The self-regulatory system also may help minimize government regulation of false and misleading claims, although the NAD does sometimes refer particularly egregious cases to the Federal Trade Commission.

While the NAD routinely issues decisions that a particular ad is misleading and should be changed or removed, the public rebuke of AT&T was unusual. AT&T’s action, it said, threatens the integrity of the entire self-regulatory system.

NAD procedures state that companies participating in the system agree “not to mischaracterize any decision, abstract, or press release issued or use and/or disseminate such decision, abstract or press release for advertising and/or promotional purposes.”

The NAD said:

In direct violation of this, AT&T has run an ad and issued a press release making representations regarding the alleged results of a competitor’s participation in BBB National Program’s advertising industry self-regulatory process.

The integrity and success of the self-regulatory forum hinges on the voluntary agreement of participants in an NAD proceeding to abide by the rules set forth in the BBB National Programs’ Procedures. As a voluntary process, fair dealing on the part of the parties is essential and requires adherence to both the letter and the spirit of the process.

AT&T’s violation of its agreement under the Procedures and its misuse of NAD’s decisions for promotional purposes undermines NAD’s mission to promote truth and accuracy of advertising claims and foster consumer trust in the marketplace.

AT&T omits its own history of misleading ads

The NAD did not say how it will move forward if AT&T refuses to pull the ads. We contacted the NAD and will update this article if it provides a response.

https://arstechnica.com/tech-policy/2025/10/att-ad-congratulating-itself-for-its-ethics-violated-an-ad-industry-rule/




10M people watched a YouTuber shim a lock; the lock company sued him. Bad idea.

McNally’s lawyer laid into this seal request, pointing out that the company had shown no concern over these issues until it lost its bid for a preliminary injunction. Indeed, “Proven boasted to its social media followers about how it sued McNally and about how confident it was that it would prevail. Proven even encouraged people to search for the lawsuit.” Now, however, the company “suddenly discover[ed] a need for secrecy.”

The judge has not yet ruled on the request to seal.

Another way

The strange thing about the whole situation is that Proven actually knew how to respond constructively to the first McNally video. Its own response video opened with a bit of humor (the presenter drinks a can of Liquid Death), acknowledged the issue (“we’ve had a little bit of controversy in the last couple days”), and made clear that Proven could handle criticism (“we aren’t afraid of a little bit of feedback”).

The video went on to show how its locks work and provided some context on shimming attacks and their likelihood of real-world use. It ended by showing how users concerned about shimming could choose more expensive but more secure lock cores that should resist the technique.

Quick, professional, non-defensive—a great way to handle controversy.

But it was all blown apart by the company’s angry social media statements, which were unprofessional and defensive, and the litigation, which was spectacularly ill-conceived as a matter of both law and policy. In the end, the case became a classic example of the Streisand Effect, in which the attempt to censor information can instead call attention to it.

Judging from the number of times the lawsuit talks about 1) ridicule and 2) harassment, it seems like the case quickly became a personal one for Proven’s owner and employees, who felt either mocked or threatened. That’s understandable, but being mocked is not illegal and should never have led to a lawsuit or a copyright claim. As for online harassment, it remains a serious and unresolved issue, but launching a personal vendetta—and on pretty flimsy legal grounds—against McNally himself was patently unwise. (Doubly so given that McNally had a huge following and had already responded to DMCA takedowns by creating further videos on the subject; this wasn’t someone who would simply be intimidated by a lawsuit.)

In the end, Proven’s lawsuit likely cost the company serious time and cash—and generated little but bad publicity.

https://arstechnica.com/tech-policy/2025/10/suing-a-popular-youtuber-who-shimmed-a-130-lock-what-could-possibly-go-wrong/