Earlier this week, Bloomberg Japan’s report on a rumored Nintendo Switch “Pro” model landed with a heavy-duty allegation: all those rumors about a “4K” Switch might be true after all. Tuesday’s report teased a vague bump in specs like clock speed and memory, which could make the Switch run better… but jumping all the way to 4K resolution would require a massive leap from the 2016 system’s current specs.
What made the report so interesting was that it had a technical answer to that seemingly impossible rendering challenge. Nvidia, Nintendo’s exclusive SoC provider for existing Switch models, will remain on board for the refresh, Bloomberg said, and its contribution will include the tantalizing, Nvidia-exclusive “upscaling” technology known as Deep Learning Super Sampling (DLSS).
Since that report went live, I’ve done some thinking, and I can’t shake a certain feeling: Nvidia has a much bigger plan for the future of average users’ computing than it has publicly let on.
Making gaming smoother: arguably easier than Tom Cruise’s face
DLSS, for the uninitiated, is a long-in-development system that relies on thousands of hours of existing video footage, largely taken from 3D games, run through a machine-learning model. When you hear about “AI-trained” systems that, say, play video games or generate wacky names, they’re usually working the same way to reliably detect successful patterns. They study and duplicate existing data (dictionaries, Twitter conversations, Tom Cruise’s face, videos of StarCraft competitions), then go through double-, triple-, and sextuple-checking runs to confirm how well a computer can organically recreate that data before loosing their systems on real-world tests.
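To make that study-then-check loop concrete, here’s a minimal toy sketch of the pattern. Everything in it (the data, the model, the train/validation split) is invented for illustration and has nothing to do with Nvidia’s actual training pipeline:

```python
# Toy sketch of the train-then-validate pattern described above.
# The data and model are illustrative assumptions, not Nvidia's pipeline.
import numpy as np

rng = np.random.default_rng(0)

# "Existing data": noisy samples of a known relationship (y = 3x + 1).
x = rng.uniform(-1, 1, size=1000)
y = 3 * x + 1 + rng.normal(0, 0.1, size=1000)

# Hold some data back so we can check how well the model "recreates" it.
x_train, y_train = x[:800], y[:800]
x_val, y_val = x[800:], y[800:]

# Study/duplicate phase: fit a line to the training data.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# Checking phase: measure reconstruction error on unseen data before
# trusting the model in the "real world."
val_error = np.mean((slope * x_val + intercept - y_val) ** 2)
print(f"fit: y = {slope:.2f}x + {intercept:.2f}, validation MSE = {val_error:.4f}")
```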
But those tests are usually run with a supercomputer managing those troves of data. This is where DLSS pivots sharply. As a consumer-grade offering, DLSS requires one of Nvidia’s “RTX” graphics cards, which (theoretically) cost as little as $329, and nothing else: no extra CPU, RAM, or other peripherals required. The idea is that RTX cards come with a slab of “tensor cores” baked onto the silicon, dedicated to the mathematical grunt work of tasks like image interpretation and translation.
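For a rough sense of the work tensor cores are built for, consider the kind of dense, low-precision matrix math below. This PyTorch snippet is only illustrative; the matrix sizes are arbitrary, and on non-RTX hardware it simply falls back to ordinary full-precision math on the CPU:

```python
# Illustrative sketch: the dense, low-precision matrix math that tensor
# cores accelerate. On an RTX card, PyTorch routes half-precision matmuls
# like this one through tensor cores; the sizes here are arbitrary.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Two big matrices standing in for image-interpretation workloads.
a = torch.randn(4096, 4096, device=device, dtype=dtype)
b = torch.randn(4096, 4096, device=device, dtype=dtype)

c = a @ b  # dispatched to tensor cores on RTX-class hardware
print(c.shape, c.dtype, device)
```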
This works in part because DLSS is focused on a relatively nonvolatile prediction scenario: what a lower-resolution image would look like if it had more pixels. The shape of diagonal lines, the leaves on swaying tree branches, and even the letters and words on a street sign are all arguably easier to predict—at least when fueled by a lower-resolution base—than a StarCraft player’s reaction to a Zerg rush. While DLSS’s initial launch left a few things to be desired, the system has matured to a point where moving 3D images processed with DLSS generally look better than their higher-resolution, temporal anti-aliasing (TAA) counterparts.
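A generic learned upscaler, sketched below in PyTorch, shows the shape of that prediction problem. To be clear, DLSS’s real network, its inputs (motion vectors, depth buffers, and so on), and its training regimen are proprietary; this toy model only illustrates the low-res-in, high-res-out idea, and every layer size is an assumption:

```python
# A generic learned-upscaler sketch, NOT DLSS itself. It only illustrates
# predicting a higher-resolution frame from a lower-resolution one.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            # Emit (scale x scale) sub-pixels per input pixel, per channel.
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
        )
        # PixelShuffle rearranges those channels into a larger image.
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, low_res):
        return self.shuffle(self.features(low_res))

# One 720p frame in, one 1440p frame out.
frame = torch.rand(1, 3, 720, 1280)
up = ToyUpscaler(scale=2)(frame)
print(frame.shape, "->", up.shape)  # -> torch.Size([1, 3, 1440, 2560])
```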
Side-by-side comparisons of lower-res DLSS and higher-res TAA show strengths and weaknesses on both sides, but as of 2021, the comparison has become a wash. That wash comes with substantially higher frame rates and lower processing requirements on the DLSS side, since traditional GPU grunt work is not only pixel-bound but also texture-bound. DLSS’s upscaling savvy doesn’t just reduce the need for clock speed and processing cores; it also translates lower-resolution textures with more fidelity, so you can arguably hop, skip, and jump past high VRAM requirements.
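Some back-of-the-envelope math shows why that matters: per-pixel shading cost scales roughly with resolution, so rendering internally at 1080p and upscaling to 4K shades roughly a quarter of the pixels each frame. The resolutions below are common examples, not leaked Switch Pro specs:

```python
# Rough pixel-count arithmetic behind the claim above: rendering at a
# lower internal resolution and upscaling skips most per-pixel work.
resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name, count in pixels.items():
    print(f"{name}: {count:,} pixels")

# Rendering at 1080p and upscaling to 4K shades 4x fewer pixels per frame.
print(f"4K / 1080p pixel ratio: {pixels['4K'] / pixels['1080p']:.0f}x")
```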
Calling BS on another BS Satellaview
The Nintendo Switch doesn’t have an exclusive license on modest VRAM, low clock speeds, and rendering scenarios with limited die sizes and thermal envelopes to work with. Where else might all of those be useful? Hmm.
As a starting point, the Nintendo Switch is a genius platform, especially since we’ve said the same thing time and time again: “%GAMENAME% is amazing to play in Switch’s portable mode, where it gets close to 720p resolution, but blowing the same action up on your 1080p TV looks like trash.” You, the savvy gamer, shouldn’t seriously consider using your Switch for TV play, despite stats showing tons of people craving a good TV-gaming experience (and an ever-expanding market in which more people than ever are rushing to buy Xboxes, PlayStations, and Switches for their TVs).
A DLSS-equipped “Switch Pro” (not a final name), as described by Bloomberg, would reach resolutions of up to 4K when docked and connected to a television. Assuming that DLSS in this scenario requires dedicated tensor cores, we’re left wondering where Nvidia will slap said cores. Are we looking at a “split-motherboard” scenario, as we’ve seen on both Xbox Series X and PlayStation 5, where the TV-plugged dock takes on certain heat-generating elements like a block of tensor cores? Or will the Switch Pro’s base hardware include tensor cores on its own SoC?
My completely speculative guess is the latter, for a couple of reasons. First, the rumor of higher CPU clocks and more RAM on the base unit suggests that Nintendo plans to roll out an entirely new system, as opposed to add-on hardware that would boost existing units. Nintendo has experience shipping add-on hardware for its home consoles in Japan—like the Famicom Disk System (which added more power and extra sound channels), the BS Satellaview (which added RAM), and the 64DD (which added a co-processor).
But we’re 22 years past Nintendo’s last stab at that concept, and after the mass confusion caused by the Wii U, I don’t see modern-day Nintendo wading into questions like, “Can I put an older Switch into a Switch Pro dock for more power?”