Google’s Robots.txt Docs Expand, Deep Links Get Rules, EU Steps In – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect how deep links appear in your snippets, how your robots.txt gets parsed, how agentic features work in Search, and how the EU’s data-sharing rules apply to AI chatbots.

Here’s what matters for you and your work.

Google Lists Best Practices For Read More Deep Links

Google updated its snippet documentation with a new section on “Read more” deep links in Search results. The documentation lists three best practices that can increase the likelihood of these links appearing.

Key facts: Content must be immediately visible to a human on page load; content hidden behind expandable sections or tabbed interfaces can reduce the likelihood of these links appearing. Sections should use H2 or H3 headings. The snippet text needs to match the content that appears on the page, and pages that load content only after scrolling or interaction may see the likelihood drop further.

Why This Matters

The three practices are the first specific guidance Google has published on this feature. Sites using expandable FAQ sections, tabbed product detail areas, or scroll-triggered content for core information may see fewer deep links in their snippets compared with sites that render the same content on page load.

The guidance matches a pattern Google has applied to other Search features. Content that renders without user interaction is more likely to be used in enhanced result displays.

Slobodan Manić, founder of No Hacks, made a related observation on LinkedIn:

“The documentation is framed around one snippet behavior (read more deep links in search results), but the language Google chose reads as a general preference. ‘Content immediately visible to a human’ is the structural instruction, not a read-more-specific tip.”

Manić’s point extends his April 16 IMHO interview with Managing Editor Shelley Walsh, where he argued that most websites are structurally broken for AI agents. He argues that search crawlers and AI agents now face the same structural problem, and the audit is the same for both.

For existing pages, the audit question is whether key information is contained within a click-to-expand element. If a page already has a “Read more” deep link for one section, that section’s structure serves as a guide to what works. For other sections on the same page, replicating that structure may also improve their chances.
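To run that audit at scale, a small script can flag text sitting inside collapsed containers in a page’s static HTML. The sketch below is a rough heuristic rather than Google’s criteria: the requests and BeautifulSoup packages, the selectors, the example URL, and the 40-word threshold are all illustrative assumptions.

    # Rough audit heuristic: flag substantial text that is not visible on page load.
    # Assumptions: <details>, [hidden], and aria-expanded="false" mark click-to-expand
    # containers; the 40-word threshold is arbitrary. Not Google's actual criteria.
    import requests
    from bs4 import BeautifulSoup

    COLLAPSED_SELECTORS = ["details", "[hidden]", '[aria-expanded="false"]']

    def find_collapsed_content(url: str, min_words: int = 40) -> list[str]:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        findings = []
        for selector in COLLAPSED_SELECTORS:
            for element in soup.select(selector):
                text = " ".join(element.get_text(" ", strip=True).split())
                if len(text.split()) >= min_words:
                    findings.append(f"{selector}: {text[:80]}...")
        return findings

    if __name__ == "__main__":
        # Hypothetical URL; point this at the page you want to audit.
        for finding in find_collapsed_content("https://example.com/faq"):
            print(finding)

A static check like this only sees server-rendered HTML; tabbed or scroll-triggered content injected by JavaScript needs a headless browser to catch, which is part of the structural problem Manić describes.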

Google describes the guidance as best practices that can “increase the likelihood” of deep links appearing. That hedging matters because this is not a list of requirements, and following all three may not guarantee the links appear.

Read our full coverage: Google Lists Best Practices For Read More Deep Links

Google May Expand Its Robots.txt Unsupported Rules List

Google may add rules to its robots.txt documentation based on analysis of real-world data collected through HTTP Archive. Gary Illyes and Martin Splitt described the project on the latest Search Off the Record podcast.

Key facts: Google’s team analyzed the most frequently used unsupported rules in robots.txt files across millions of URLs indexed by the HTTP Archive. Illyes said the team plans to document the top 10 to 15 most-used unsupported rules beyond user-agent, allow, disallow, and sitemap. He also said the parser may expand the typos it accepts for disallow, though he did not commit to a timeline or name specific typos.

Why This Matters

If Google documents more unsupported directives, sites using custom or third-party rules will have clearer guidance on what Google ignores.

Anyone maintaining a robots.txt file with rules beyond user-agent, allow, disallow, and sitemap should audit for directives that have never worked for Google. The HTTP Archive data is publicly queryable on BigQuery, so the same distribution Google used is available to anyone who wants to examine it.

The typo tolerance is the more speculative part. Illyes’ phrasing implies that the parser already accepts some misspellings of “disallow,” and more may be honored over time. Audit any spelling variants now and correct them, rather than relying on the parser to catch them.
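If you want to run that audit yourself, a short script can separate Google’s four documented rules from everything else and flag likely misspellings of “disallow.” This is a minimal sketch, assuming the supported set below matches Google’s current documentation; the fuzzy-match cutoff is an arbitrary choice, not something Google has committed to.

    # Rough robots.txt audit: list directives Google does not document as supported
    # and flag likely misspellings of "disallow". The supported set reflects Google's
    # documented rules; the 0.8 fuzzy-match cutoff is an arbitrary assumption.
    import difflib
    import urllib.request

    SUPPORTED = {"user-agent", "allow", "disallow", "sitemap"}

    def audit_robots_txt(url: str) -> None:
        body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        for line_no, raw in enumerate(body.splitlines(), start=1):
            line = raw.split("#", 1)[0].strip()  # drop comments
            if not line or ":" not in line:
                continue
            directive = line.split(":", 1)[0].strip().lower()
            if directive in SUPPORTED:
                continue
            typo = difflib.get_close_matches(directive, ["disallow"], n=1, cutoff=0.8)
            label = "possible typo of 'disallow'" if typo else "not supported by Google"
            print(f"line {line_no}: {directive!r} ({label})")

    if __name__ == "__main__":
        # Hypothetical URL; point this at your own robots.txt.
        audit_robots_txt("https://example.com/robots.txt")

Check anything the script flags against the crawlers that actually need it before deleting it; some other crawlers honor rules Google ignores, crawl-delay being the usual example.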

Read our full coverage: Google May Expand Unsupported Robots.txt Rules List

EU Proposes Google Share Search Data With Rivals And AI Chatbots

The European Commission sent preliminary findings proposing that Google share search data with rival search engines across the EU and EEA, including AI chatbots that qualify as online search engines under the DMA. The measures are not yet binding, with a public consultation open until May 1 and a final decision due by July 27.

Key facts: The proposal covers four data categories shared on fair, reasonable, and non-discriminatory terms. The categories are ranking, query, click, and view data. Eligibility extends to AI chatbot providers that meet the DMA’s definition of online search engines. If the Commission maintains eligibility through the final decision, qualifying providers could gain access to anonymized Google Search data under the Commission’s proposed terms.

Why This Matters

This proposal explicitly extends search-engine data-sharing eligibility to AI chatbots under the DMA. If that eligibility survives the consultation, the regulatory category of “search engine” will include products that most search marketing work has treated as a separate category.

The consequences vary depending on where you operate. For sites optimizing for EU/EEA visibility, the change could broaden the scope of where anonymized search signals flow. AI products competing with Google in that market could use the data to improve their retrieval and ranking systems, which could, in turn, affect which content they cite.

Outside the EU, the direct regulatory effect is zero. The category definition is a different matter. How the Commission draws the line between “AI chatbot” and “AI chatbot that qualifies as a search engine” is likely to be cited in future proceedings.

The eligibility question is the story to watch through May 1. If the Commission narrows the AI chatbot criteria in response to consultation feedback, the implications stay regulatory. If it holds the line, that would set a material precedent for how AI search is classified.

Read our full coverage: Google May Have To Share Search Data With Rivals

Google Adds New Task-Based Search Features

Google introduced new Search features that continue its evolution toward task completion. Users can now track individual hotel price drops via a new toggle in Search, and Google is adding the ability to launch AI agents directly from AI Mode.

Key facts: Hotel price tracking is available globally through a toggle in the search bar. When prices drop for a tracked hotel, Google sends an email alert. The AI agent launched from AI Mode allows users to initiate tasks handled by AI within the search interface. Rose Yao, a Google Search product leader, posted about the features on X.

Why This Matters

Each task-based feature moves a process that previously started on another site into Google’s own surface. Hotel price tracking has existed at the city level for months. Expansion to individual hotels adds a new signal that users can set inside Google rather than on hotel or aggregator sites.

Direct-booking visibility depends on being inside Google’s ecosystem. Sites relying on price-drop alerts as a return-trigger for users may see some of that engagement reallocated to Google’s tracking UI. For hotel brands, this raises the stakes for ensuring individual hotel pages are fully populated in Google Business Profile and hotel feeds.

On LinkedIn, Daniel Foley Carter connected the feature to a broader pattern:

“Google’s AI overviews, AI mode and now in-frame functionality for SERP + SITE is just Google eating more and more into traffic opportunities. Everything Google told US not to do its doing itself. SPAM / LOW VALUE CONTENT – don’t resummarise other peoples content – Google does it.”

The AI agent launch is more speculative. Google has not published detailed documentation explaining what kinds of tasks users can delegate or how sources get cited. The feature confirms that agentic search, described by Sundar Pichai as “search as an agent manager,” is appearing incrementally in Search rather than as a single launch.

Read Roger Montti’s full coverage: Google Adds New Task-Based Search Features

Theme Of The Week: The Rules Are Getting Written

Each story this week spells out something that was previously implicit or underway.

Google signaled plans to expand what its robots.txt documentation covers. The company listed specific practices that can increase the likelihood of “Read more” deep links appearing. The European Commission proposed measures that extend search-engine data-sharing eligibility to AI chatbots under the DMA. And task-based features that Sundar Pichai described in interviews are rolling out as toggles in the search bar.

For your day-to-day, the ground gets firmer. Fewer questions are judgment calls. What does and doesn’t qualify, what Google supports, and what counts as a search engine to a regulator are all getting written down. That works to your advantage when it means clearer audit criteria, and against you when “we weren’t sure” is no longer a defensible answer.



Featured Image: [Photographer]/Shutterstock

https://www.searchenginejournal.com/seo-pulse-googles-robots-txt-docs-expand-deep-links-get-rules-eu-steps-in/572877/




Rocket Report: Artemis III rocket getting ready; SpaceX is now an AI company

But where does that money come from? … SpaceX expects more than 90 percent of that market, or $26.5 trillion, to stem from the AI sector. The vast majority of that, $22.7 trillion, could come from AI for businesses. The company is moving ahead with an IPO expected this summer targeting a valuation of roughly $1.75 trillion and seeking to raise about $75 billion, which would make it the largest initial public offering in history. “We believe we have identified the largest actionable total addressable market in human history,” according to the filing.

Falcon boosters have now landed 600 times. SpaceX completed its 600th Falcon booster landing during a Starlink mission Sunday, Spaceflight Now reports. The Starlink 17-22 mission added another 25 broadband Internet satellites into the company’s low Earth orbit constellation that consists of more than 10,200 spacecraft.

Don’t forget the hard-working ships … SpaceX used Falcon 9 first stage booster B1097, which was flying for the seventh time. It previously launched Sentinel-6B, Twilight, and five previous batches of Starlink satellites. A little more than eight minutes after liftoff, B1097 landed on the SpaceX drone ship, “Of Course I Still Love You.” It was the 191st landing on this vessel. Another droneship, “Just Read the Instructions,” will now be dedicated to supporting Starship operations.

Two steps forward, one step back for New Glenn. The third flight of Blue Origin’s heavy-lift New Glenn launcher began Sunday with the company’s first successful reflight of an orbital-class booster, but ended with a setback for Jeff Bezos’ flagship rocket, Ars reports. After the launch, the booster settled onto its landing ship for a smoky but on-target touchdown less than 10 minutes after liftoff. The landing marked the end of the second flight for this booster, a stunning success for the company.

Second-stage issues … But Blue Origin could not celebrate the achievement for long. Within a couple of hours, it became clear that something went wrong with the mission’s remaining milestones. Blue Origin confirmed New Glenn’s upper stage missed its aim and released its payload, a cellular broadband communications satellite for AST SpaceMobile, into an inaccurate orbit. The satellite later reentered Earth’s atmosphere. The second stage issue will force Blue Origin to stand down New Glenn at a time when NASA needs the vehicle to ramp up operations to support the Artemis Program.

https://arstechnica.com/space/2026/04/rocket-report-some-canadians-dont-want-a-spaceport-falcon-hits-600-landings/




Google Won’t Act On Spam Reports If They Contain Personal Information via @sejournal, @martinibuster

Google updated their spam reporting documentation to make it clearer that spam reports are not wholly confidential and that it’s possible for personally identifiable information to be shared with the sites receiving a manual action.

Change In Response To Feedback

Google’s changelog noted that they were updating the spam reporting form based on feedback they’d received about personal information in spam reports being shared with the spammy sites that receive a manual action (formerly known as a penalty).

The update contains a new notice that spam reports containing personal information will not be processed.

The changelog noted:

“Clarifying when and why we may take manual action based on spam reports
What: Further clarified when and why we may take manual action based on spam reports.
Why: To address feedback we received about the change on using spam reports to take manual action.”

Google removed the following from their documentation:

“If we issue a manual action, we send whatever you write in the submission report verbatim to the site owner to help them understand the context of the manual action. We don’t include any other identifying information when we notify the site owner; as long as you avoid including personal information in the open text field, the report remains anonymous.”

The above wording was replaced with the following:

“Don’t include any personally identifying information in your submission. To comply with regulations, we must send the submission text to the site owner to help them understand the context of a manual action, if one is issued.

Because of this, we won’t process your submission if we determine it contains personally identifying information to protect privacy. Not including such information fully ensures your information is safe and prevents your submission from being discarded.”

Action Moving Forward

It’s good that Google won’t process a report that contains personal information, but it also means those reports get discarded. So if you’re submitting spam reports to Google, don’t name your site, business name, personal name, or anything else that you don’t want the affected spammer to know.

Read the updated documentation here:

Report spam, phishing, or malware

Learn more about Google’s spam reporting tool: Google Just Made It Easy For SEOs To Kick Out Spammy Sites

Featured Image by Shutterstock/andre_dechapelle

https://www.searchenginejournal.com/google-wont-act-on-spam-reports-if-they-contain-personal-information/572929/




Visitors to this private space station won’t be wearing shorts and T-shirts

Specifically suited

The Vast Astronaut Flight Suit was developed with the company’s clients in mind, from its fit to its features.

Worn as either a one- or two-piece garment by zipping (or unzipping) the jacket from the pants, the flight suit will be tailored to each crew member while also offering increased comfort and mobility through back vents and shoulder gussets. The suit also has pockets and hook-and-loop fasteners (Velcro) so tools can be easily stowed and retrieved.

With utility in mind, Vast sought to create a highly functional flight suit optimized for both training on Earth and daily use aboard Haven-1 in orbit. Credit: Vast

“In microgravity, you need your hands free and your tools always within reach,” said former NASA astronaut Megan McArthur, who is also advising Vast. “You’re constantly moving through small spaces and positioning your body in ways we don’t experience on Earth.”

Despite its clean white color and uniform design, the suit also provides for points of personal customization. Each crew’s suits will sport their own mission patch, and each suit has a place for the crew member’s flight badge, “wings” that they will individually earn from Vast “by launching, living on orbit and performing mission operations in space,” according to the company.

Separate from the flight suit but along the same lines, each Vast crew member will also wear the Pilot’s Venturer Vertical Drive, a timepiece designed by the Swiss luxury watchmaker IWC Schaffhausen and tested in partnership with Vast. IWC engineered the watch to meet the challenges presented during human spaceflight, including replacing the crown with a more glove-friendly rotating bezel. Vast ensured the watch could withstand vibrations and pressure changes and be compatible with the Haven-1 on-board environment.

IWC Schaffhausen partnered with Vast to certify its Pilot’s Venturer Vertical Drive, a wristwatch designed for space. Credit: IWC Schaffhausen

(IWC Schaffhausen is offering the Pilot’s Venturer Vertical Drive to anyone for $28,200.)

“It’s something astronauts can actually use,” said Feustel. “This is the flight suit for the commercial, crewed spaceflight era, and it’s really just the beginning.”

https://arstechnica.com/space/2026/04/vast-reveals-flight-suit-tests-timepiece-for-commercial-space-station/




US accuses China of “industrial-scale” AI theft. China says it’s “slander.”

Specifically, the committee recommended that the State Department assess whether the distillation attacks violate laws like the Economic Espionage Act and the Computer Fraud and Abuse Act. They also want “adversarial distillation” clearly defined and officially categorized as a controlled technology transfer, which would make it easier to restrict fraudulent Chinese access to models.

If such steps were taken, the US could prosecute bad actors and impose heavy financial penalties that might dissuade Chinese firms from treating “serious violations as a tolerable cost of doing business,” the committee’s report said.

China slams accusations as “pure slander”

Kratsios’ memo threatening a crackdown comes ahead of Donald Trump’s highly anticipated meeting with China’s president Xi Jinping next month.

Trump has claimed that the meeting will be “special” and “much will be accomplished.” However, at least one analyst told the South China Morning Post that the war in Iran means that Trump has “lost almost all his bargaining chips” at a time when the US and China are seeking to stabilize a trade relationship that has been tense since Trump took office.

China seems unlikely to tolerate Kratsios’ allegations. Liu Pengyu, a spokesperson for the Chinese embassy in Washington, DC, told FT that the White House accusations were “pure slander.”

“China has always been committed to promoting scientific and technological progress through cooperation and healthy competition,” Liu said. “China attaches great importance to the protection of intellectual property rights.”

Whether Trump will side with AI firms that want to see China cut off from their models and sanctioned for distillation attacks has yet to be seen. Trump has, in the past, been accused of making big concessions to China on export control matters that experts have claimed threaten US national security and the economy, as US firms claim the distillation attacks do.

Some of Trump’s concessions may need to be reversed to fight the alleged “industrial espionage.”

Chris McGuire, a technology security expert at the Council on Foreign Relations, told FT that “Chinese AI firms are relying on distillation attacks to offset deficits in AI computing power and illicitly reproduce the core capabilities of US models.” To stop them, the US may need to tighten export controls that Trump loosened, such as allowing Nvidia chip sales to China so long as the US gets a 25 percent cut. That bizarre deal made “no sense” to experts who warned that Trump’s odd move could have opened the door for China to demand access to America’s most advanced AI chips.

https://arstechnica.com/tech-policy/2026/04/us-accuses-china-of-industrial-scale-ai-theft-china-says-its-slander/




Carbon nanotube wiring gets closer to competing with copper

Shortly after their discovery, carbon nanotubes seemed to be a material wonder. There were metallic and semiconducting forms; they were tiny and incredibly light; and they could only be broken by tearing apart chemical bonds. The ideas for using them seemed endless.

But then the reality of working with them set in. It was hard to get a pure population of metallic or semiconducting forms. Synthesis techniques tended to produce a tangle of mostly short nanotubes; those that extend for more than a couple of centimeters remain rare. And while the metallic version offered little resistance to carrying electric current, it was hard to send many electrons down the nanotube.

Materials scientists, however, are a stubborn bunch, and they’re still trying to get them to work. Today’s issue of Science includes a paper describing the addition of a chemical to carbon nanotube bundles to boost their ability to carry current to levels closer to those of copper. While the more conductive nanotubes weren’t stable, the discovery may point the way toward something with a longer shelf life.

Doped nanotubes

Carbon nanotubes come in various forms. In the case of single-walled nanotubes, you can think of them as taking a sheet of graphene, rolling it up into a circle, and linking together the two opposite ends you just brought together. These can also come in different diameters. There are also multi-walled carbon nanotubes, where a second nanotube (and maybe a third, and maybe more beyond that) is wrapped around the first.

When metallic, these offer little resistance to electron flow along the nanotube. But, because most of their electrons are tied up in the chemical bonding needed to form the nanotube, there’s not a lot of them available to carry current. So, a lot of people have tried developing dopants—chemicals that can be mixed in small quantities that change the behavior of the bulk material. In this case, the goal was to find chemicals that would act as electron donors, adding to the amount of current that could potentially be sent down the nanotube.

https://arstechnica.com/science/2026/04/researchers-get-carbon-nanotube-wiring-to-conduct-more-like-copper/




We still don’t have a more precise value for “Big G”


The gravitational constant, affectionately known as “Big G,” is one of the most fundamental constants of our universe. Its value describes the strength of the gravitational force acting on two masses separated by a given distance—or if you want to be relativistic about it, the amount a given mass curves space-time. Physicists have a solid ballpark figure for the value of Big G, but they’ve been trying to measure it ever more precisely for more than two centuries, each effort yielding slightly different values. And we do mean slight: The values vary by roughly one part in 10,000.

Still, other fundamental constants are known much more precisely. So Big G is the black sheep of the family and a point of frustration for physicists keen on precision metrology. The problem is that gravity, by far the weakest of the four fundamental forces, is so weak that there is significant background noise from the gravitational field of the Earth (aka “little g”). That weakness is even more pronounced in a laboratory.
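For reference, the relationship the article describes is Newton’s law of universal gravitation, and little g follows from Big G together with Earth’s mass and radius (values rounded here; the official recommendation carries more digits):

    F = G\,\frac{m_1 m_2}{r^2}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}

    g = \frac{G M_\oplus}{R_\oplus^{2}} \approx \frac{(6.674 \times 10^{-11})(5.97 \times 10^{24}\ \mathrm{kg})}{(6.37 \times 10^{6}\ \mathrm{m})^{2}} \approx 9.8\ \mathrm{m/s^{2}}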

In the latest effort to resolve the issue, scientists at the National Institute of Standards and Technology (NIST) spent the last decade replicating one of the most divergent recent experimental results. The group just announced their results in a paper published in the journal Metrologia. It does not resolve the discrepancy, but it gives physicists one more data point in their ongoing quest to nail down a more precise value for Big G.

Isaac Newton introduced the concept of a gravitational constant when he published his law of universal gravitation in the late 17th century, although it didn’t get its Big G notation until the 1890s. Newton thought it might be possible to measure the strength of gravity by swinging a pendulum near a large hill and measuring the deflection, but he never attempted the experiment, reasoning that the effect would be too small to measure. By 1774, the Royal Society had established a committee to determine the density of the Earth as an indirect measurement of Big G, using a variation of Newton’s pendulum concept.

https://arstechnica.com/science/2026/04/we-still-dont-have-a-more-precise-value-for-big-g/




In a first, a ransomware family is confirmed to be quantum-safe

There is no practical benefit for Kyber developers to have chosen a PQC key-exchange algorithm. The Kyber ransom note gives victims one week to respond. Quantum computers capable of running Shor’s algorithm—the quantum algorithm that can break RSA and ECC (elliptic curve cryptography)—are, at a minimum, three years away and likely much further off.

A Kyber variant that targets systems running VMware, meanwhile, claims to use ML-KEM as well. Rapid7 said its look under the hood revealed that, in fact, it uses RSA with 4096-bit keys, a key size that would take Shor’s algorithm even longer to break. Anna Širokova, a Rapid7 senior security researcher and the author of Tuesday’s post, said the use or claimed use of ML-KEM is likely just a branding gimmick and that implementing it required relatively little work by Kyber developers.

In an email, Širokova wrote:

First, it’s marketing to the victim. “Post-quantum encryption” sounds a lot scarier than “we used AES,” especially to non-technical decision-makers who might be evaluating whether to pay. It’s a psychological trick. They’re not worried about someone breaking the encryption a decade from now. They want payment within 72 hours.

Second, implementation cost is low. Kyber1024 libraries (renamed to ML-KEM) are available and well-documented. Ransomware doesn’t encrypt your files directly with Kyber1024. That would be slow. Instead, it:

  1. Generates a random AES key
  2. Encrypts your files with that AES key (fast)
  3. Encrypts that AES key with Kyber1024 (so only the attacker can decrypt it)

In Rust, there are already libraries that do Kyber1024. The developer just adds it to their dependencies and calls a function to wrap the key.
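The three steps she lists are standard envelope (hybrid) encryption. Below is a minimal sketch of the pattern in Python rather than Rust, assuming the widely used cryptography package for AES-GCM and a mocked kem_encapsulate() standing in for a real ML-KEM/Kyber1024 library call.

    # Sketch of the hybrid ("envelope") encryption pattern described above.
    # Assumptions: AES-GCM comes from the third-party `cryptography` package;
    # kem_encapsulate() is a mock, NOT a real ML-KEM/Kyber1024 implementation.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def kem_encapsulate(recipient_public_key: bytes) -> tuple[bytes, bytes]:
        # Mock: a real KEM derives (kem_ciphertext, shared_secret) from the public key.
        return os.urandom(1568), os.urandom(32)

    def encrypt_blob(data: bytes, recipient_public_key: bytes) -> dict:
        # 1. Generate a random AES key for the bulk data.
        file_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        # 2. Encrypt the data with that AES key (fast, symmetric).
        ciphertext = AESGCM(file_key).encrypt(nonce, data, None)
        # 3. Wrap the AES key: the KEM shared secret, derived against the
        #    recipient's public key, encrypts the file key so only the holder
        #    of the matching private key can recover it.
        kem_ciphertext, shared_secret = kem_encapsulate(recipient_public_key)
        key_nonce = os.urandom(12)
        wrapped_key = AESGCM(shared_secret).encrypt(key_nonce, file_key, None)
        return {
            "ciphertext": ciphertext,
            "nonce": nonce,
            "wrapped_key": wrapped_key,
            "key_nonce": key_nonce,
            "kem_ciphertext": kem_ciphertext,
        }

The symmetric step does all the heavy lifting; the KEM only changes which hard problem protects the wrapped key, which is why swapping RSA out for ML-KEM costs the developer so little.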

Despite the hype, Kyber suggests that PQC is attracting the attention of less technically inclined attorneys and executives deciding how to respond to ransom demands. Kyber developers are hoping the impression that the encryption has overwhelming strength will sway people to pay.

https://arstechnica.com/security/2026/04/now-even-ransomware-is-using-post-quantum-cryptography/




RFK Jr.’s rejection of germ theory debunked in Senate hearing

In a Congressional hearing on Wednesday, Sen. Bernie Sanders (I-Vt.) directly confronted anti-vaccine Health Secretary Robert F. Kennedy Jr. on his rejection of germ theory—the unquestionable scientific idea that specific pathogenic microbes cause specific diseases. After Kennedy defended his fringe view, Senator Bill Cassidy fact-checked and debunked Kennedy’s denialist arguments in real time.

The exchanges mark a rare instance in which Kennedy’s dismissal of germ theory has been raised in such a high-profile public setting, in this case, a hearing of the Senate Committee on Health, Education, Labor, and Pensions. Kennedy, who has no background in science, medicine, or public health, is well known as an ardent anti-vaccine activist and peddler of conspiracy theories. But his startling rejection of a cornerstone theory in biomedical science has mostly been underreported.

As Ars Technica reported last year, Kennedy wrote about his germ theory denialism explicitly in his 2021 book The Real Anthony Fauci. In it, Kennedy maligns germ theory as a tool of pharmaceutical companies, scientists, and doctors to promote the use of modern medicines. Instead of accepting germ theory, Kennedy promotes a concept akin to the discarded terrain theory, in which diseases stem not from germs, but from imbalances in the body’s inner “terrain.” Those imbalances are claimed to be caused by poor nutrition and exposure to environmental toxins and stressors. (In his book, Kennedy erroneously labels this as “miasma theory,” but that is a different theory that suggests diseases derive from breathing bad air, vapors, or mists from decaying or corrupting matter. The idea was supplanted by germ theory, while terrain theory was never widely accepted.)

Kennedy’s embrace of terrain theory over germ theory is foundational to the priorities of his Make America Healthy Again (MAHA) movement, which promotes notions of healthy diets and lifestyles and clean living. As health secretary, Kennedy has focused on revamping federal dietary guidance, focusing on whole foods (and concerning amounts of saturated fat) while vilifying artificial ingredients and additives. He has regularly posted videos of himself working out on social media. And, with his previous career as an environmental lawyer, he has a long history of fighting against environmental contamination. Kennedy and his MAHA movement have strongly lobbied against chemical pollutants and pesticides, particularly glyphosate (though Kennedy notably shifted on this issue recently, and now supports increasing production of the weed killer, in line with Trump’s policies).

https://arstechnica.com/health/2026/04/rfk-jr-s-rejection-of-germ-theory-debunked-in-senate-hearing/




Why are the Mac mini and Mac Studio gradually becoming impossible to buy?

It’s a good time to be in the market for a MacBook, between the affordability of the MacBook Neo, the power of the M5 Pro and M5 Max MacBook Pros, and the all-around appeal of the M5 MacBook Air. But Apple’s desktop computers are another story, and not just because they’re all about due for their own M5 upgrades.

Over the last few months, the Mac mini and the Mac Studio have gradually become harder to buy. The 512GB M3 Ultra Mac Studio was removed from Apple’s website, and other models of both desktops have seen their ship times slip from days to weeks to months. In the last couple of weeks, several other configurations of Mac mini and Studio have begun showing up as “currently unavailable” on Apple’s website, which virtually never happens even when Apple is planning an imminent hardware refresh.

This week (as spotted by MacRumors), the baseline $599 M4 Mac mini, which offers 16GB of RAM and 256GB of storage, earned the “currently unavailable” label for the first time.

You can still place orders for most Mac mini models. An M4 Mac mini with 512GB or more of storage and either 16 or 24GB of RAM will take between 5 and 12 weeks to arrive, depending on the specific configuration you buy. M4 Pro Mac minis with any storage configuration and either 24GB or 48GB of RAM will take a similar amount of time to arrive, with most models showing availability within 10 to 12 weeks.

All M4 Mac minis with 256GB of storage, all M4 minis with 32GB of RAM, and all M4 Pro Mac minis with 64GB of RAM are listed as “currently unavailable.” Mac Studio models with 128GB or 256GB of RAM are also listed as “currently unavailable.” Other Studio configurations list the same five- to 12-week wait times as the minis.

This does not seem to be a problem with the M4 chip generation as a whole; most M4 iMac configurations, including those with 32GB of RAM, will arrive at your door within a week or two of being ordered. It’s also not being caused exclusively by the ongoing RAM and storage shortages: new MacBook Pros with 128GB of RAM and large SSDs will arrive within two or three weeks of being ordered.

https://arstechnica.com/gadgets/2026/04/apples-m4-mac-mini-including-the-599-one-is-gradually-becoming-impossible-to-buy/