Nvidia's current goal seems to be to eliminate rasterization/classic rendering, replacing it with some combination of path/ray tracing and neural rendering. You can completely replace classic rendering with ray tracing, drawing every pixel on screen entirely with RT. But it's prohibitively expensive, especially if you want to avoid highly noisy images.

Along comes DLSS, which is really good at taking noisy or insufficient data and turning it into complete data. DLSS 2 did that with pixels between pixels. DLSS 3 did it with frames between frames. DLSS 3.5 does it with points of light between points of light.

Node shrinks are getting harder; you can't get more and more transistors per dollar anymore. If you want to keep seeing visual leaps, you need to find ways to get more power out of the same number of transistors, but the existing raster pipeline is already really, really efficient. You only get there by rethinking rendering. If Nvidia can find a way to let tensor cores "eat" the old CUDA cores in a way that is more space-efficient than having them separate, it can grow tensor and RT performance faster than node shrinks can grow the number of transistors. Or it could come from a new kind of core that combines tensor and non-tensor functionality; after all, you're still going to want to run games that aren't fully neural rendered.

The endgame: all new games are fully path traced. RT can draw enough pixels on even complex games for DLSS to create good images, and DLSS is fast enough to fill in every blank at 60 fps and frame-gen to 120. Legacy games run on a legacy pipeline that isn't getting faster, but it's locked at something like a 4090's worth of performance, which is hardly a problem.

RT and AI are still new enough tech that, unlike the old raster pipeline, there are plenty of efficiencies and innovations left to be discovered as more and more games adopt them. That drives innovation which improves performance without needing a generational node shrink every time, and it buys the GPU industry another decade of growth, just as unified shaders did from the Xbox 360 in 2005 up to the first RTX cards in 2018. At least, that's the idea.

As for mid-generation hardware refreshes: on Switch the processor revision was only used for battery life, but Nintendo didn't HAVE to limit it to that. The DS, 3DS, and Switch all had processor revisions with moderately improved capabilities, so honestly I don't think a "New Nintendo Switch 2" type mid-gen refresh is off the table, or particularly unlikely.
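Why is full path tracing "prohibitively expensive" and noisy? Each pixel is a Monte Carlo estimate, and error only shrinks with the square root of the sample count, so quadrupling quality costs 16x the rays. A minimal sketch of that scaling, using a made-up one-dimensional `shade` function standing in for real ray samples (not any actual renderer):

```python
import math
import random

def shade(u: float) -> float:
    """Toy 'true' brightness along one pixel's sample domain.
    Chosen so the exact integral over [0, 1] is known: 0.5."""
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * u)

def render_pixel(samples: int, rng: random.Random) -> float:
    """Monte Carlo estimate of the pixel: average of random samples."""
    return sum(shade(rng.random()) for _ in range(samples)) / samples

truth = 0.5  # exact value of the integral the estimator targets

rng = random.Random(0)
for spp in (1, 16, 256):
    # Mean absolute error over many independent "pixels" at this sample count.
    err = sum(abs(render_pixel(spp, rng) - truth) for _ in range(2000)) / 2000
    print(f"{spp:4d} samples/pixel -> mean error {err:.3f}")
```

Error drops roughly as 1/sqrt(samples): 16x the rays buys only 4x less noise. That cost wall is exactly the gap a denoiser that tolerates very few samples per pixel (Ray Reconstruction, in the post's framing) is meant to close.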
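The "frames between frames" idea can be shown with the crudest possible stand-in: a straight linear blend of two rendered frames. Real frame generation (DLSS 3) uses motion vectors and an optical-flow network rather than anything this naive; the sketch below, with invented `Frame`/`interpolate` names, only illustrates synthesizing data that was never rendered:

```python
# Grayscale frame as nested lists, values 0.0-1.0 (illustrative type alias).
Frame = list[list[float]]

def interpolate(prev: Frame, nxt: Frame, t: float = 0.5) -> Frame:
    """Fake the frame at time t between two rendered frames by blending.
    NOT how DLSS works -- just the simplest 'frame between frames'."""
    return [
        [(1 - t) * p + t * n for p, n in zip(prow, nrow)]
        for prow, nrow in zip(prev, nxt)
    ]

# A bright dot moving one pixel to the right between rendered frames:
f0 = [[1.0, 0.0], [0.0, 0.0]]
f1 = [[0.0, 1.0], [0.0, 0.0]]
mid = interpolate(f0, f1)
print(mid[0])  # [0.5, 0.5] -- the dot is ghosted across both pixels
```

The ghosting in the output is the point: a plain blend smears moving objects, which is why real frame generation needs motion vectors to shift pixels along their actual paths instead of averaging them in place.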