We’ve been following the story of the Fermi architecture for the better part of a year now, since Nvidia first tipped its hand about plans for a new generation of DirectX 11-class GPUs. Fermi’s story has been one of the more intriguing developments over that span of time, because it involves great ambitions and the strains that go with attempting to achieve them. Nvidia wanted its new top-of-the-line GPU to serve multiple markets, both traditional high-end graphics cards and the nascent market for GPUs as parallel computing engines. Not only that, but Fermi was to be unprecedentedly capable in both domains, with a novel and robust programming model for GPU computing and a first-of-its-kind parallel architecture for geometry processing in graphics.

Naturally, that rich feature set made for a large and complex GPU, and such things can be deadly in the chip business, especially when a transition to a new architecture is mated with an immature chip fabrication process, as was the case here. Time passed, and the first Fermi-based chip, the GF100, became bogged down with delays. Rumors flew about a classic set of problems: manufacturing issues, silicon re-spins, and difficult trade-offs between power consumption and performance. Eventually, as you know, the GF100 arrived in the GeForce GTX 470 and 480 graphics cards, which turned out to be reasonably solid but not much faster than the then-six-month-old Radeon HD 5870, a card based on a much smaller, cheaper-to-produce chip.

The GF100, though, has a lot of extra fat in it that’s unnecessary for, well, video cards. We wondered at that time, several months ago, whether a leaner version of the Fermi architecture might not be a tougher competitor. If you’ll indulge me, I’ll quote myself here:

> We’re curious to see how good a graphics chip this generation of Nvidia’s technology could make when it’s stripped of all the extra fat needed to serve other markets: the extensive double-precision support, ECC, fairly large caches, and perhaps two or three of its raster units. You don’t need any of those things to play games, or even to transcode video on a GPU.