There are three things that most enthusiasts have come to expect from NVIDIA's next-generation gaming GPU based on the Fermi architecture. There was no doubt that it would sport a large die, run hot, and be expensive. The big question was whether it would be more powerful than anything the graphics division of AMD could muster. NVIDIA has been promising since its GPU Technology Conference in October that it would "blow ATI away", but we've been waiting on the hardware while ATI's GPUs dominated the holiday shopping season.
NVIDIA says that GF100 chips are in production, but we don't have details on yields or how many wafers are being run at the Taiwan Semiconductor Manufacturing Company (TSMC). GF100 chips are being produced on TSMC's 40nm process, which has given ATI trouble ever since it began the transition in March of last year. The GF100 has over 3 billion transistors, 50% more than the Cypress GPU, which is itself fairly large at 334mm^2. Initial reports are that the GF100 will exceed 500mm^2, which means that many chips won't be able to run at full capability. We can probably expect defective chips to be salvaged for cut-down GF100 variants.
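A quick back-of-envelope check, using only the figures quoted above and assuming GF100's transistor density roughly matches Cypress's on the same 40nm process (an assumption, since NVIDIA has not published the die size):

```python
# Rough die-size estimate from the article's own numbers.
cypress_area_mm2 = 334   # Cypress die area per the article
transistor_ratio = 1.5   # GF100 has ~50% more transistors than Cypress

# Assuming comparable transistor density on the shared 40nm process:
gf100_area_mm2 = cypress_area_mm2 * transistor_ratio
print(gf100_area_mm2)  # 501.0
```

That naive scaling lands right around the reported ">500mm^2" figure, which is why the initial reports are plausible.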
The first boards will be launched at the end of February, with initial availability in March. However, volume will be a problem, and we have been hearing concerns from some board partners that there won't be enough chips to meet demand until April, at best. The initial flagship card will launch with 512 stream processor cores (which NVIDIA calls CUDA cores), 48 ROPs, and a 384-bit memory bus running GDDR5.
There are sixteen Streaming Multiprocessors (SMs), each consisting of 32 CUDA cores. Each SM also has 64KB of on-chip memory, configurable as a 16KB/48KB split between L1 cache and shared memory, along with four texture units and a PolyMorph Engine. The PolyMorph Engine handles geometry on the GPU and is responsible for Vertex Fetch, Tessellation, Viewport Transform, Attribute Setup, and Stream Output functions.
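The per-SM figures multiply out to the flagship spec. A minimal sketch of that arithmetic, using only the counts stated in the article:

```python
# Chip-wide totals implied by the per-SM GF100 figures above.
sms = 16                  # Streaming Multiprocessors on a full GF100
cuda_cores_per_sm = 32    # CUDA cores per SM
texture_units_per_sm = 4  # texture units per SM

total_cuda_cores = sms * cuda_cores_per_sm        # matches the 512-core flagship
total_texture_units = sms * texture_units_per_sm  # 64 texture units chip-wide
print(total_cuda_cores, total_texture_units)  # 512 64
```

This is also why salvaged dies with one or more SMs disabled naturally yield the cut-down variants mentioned earlier, losing 32 cores and four texture units per disabled SM.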
One of the biggest features of DX11 is hardware tessellation, and NVIDIA is looking to beat ATI at its own game. Tessellation is one of the few DX11 features that is readily visible on screen while gaming, and it can have a very large visual impact.
NVIDIA has been talking about 3D gaming for a couple of years, even if no one wants to make the hardware. The company was showing off its "3D Vision Surround" concept at CES as a marked improvement over ATI's Eyefinity multiple-display technology. However, while ATI's Radeon HD 5000 series cards all support three monitors on a single card, NVIDIA's version can only drive two displays per card. Additional graphics cards must be added in order to support three or more monitors.
There is one question that no one seems to be asking NVIDIA: Where are the next generation mainstream DirectX 11 graphics cards? Over 90% of graphics cards sold are priced at less than $200, and NVIDIA will have to come up with something soon if they want to stop losing mainstream market share.
quote: Power consumption and heat could also limit how many mainstream users choose Fermi as an upgrade path.
quote: Which by the way the 4870 sucks up ~150W under load
quote: Well with AMD's 5870 and 5970 hard to find for the past couple of months and the lack of dx11 games (well a lot of them anyway) there hasn't been a huge rush to AMD's side of the fence. A lot of people (like me for instance) are waiting to see what nvidia puts out and how it compares (both performance and price wise) before they plunk down their coin.
quote: Too big, too slow, too hot, two months+ away. Nvidia are screwed this round. They better be developing a strong counter to Northern Islands or they are looking at a proper capitulation in the discrete graphics market.
quote: NVIDIA under-ordered silicon wafers, which is why their chips are in short supply, not because of demand. AMD did the same, speculating Fermi was going to be released around the same time ATI offered their DX11 chip, and didn't want to get caught with a number of chips not selling. Luckily for them, NVIDIA can't get the chip out the door.
quote: NVIDIA doesn't appear to have a Low and Mid level DX11 card with FERMI and this is where the money is. NVIDIA requires 2 graphics cards to output 3 monitors. Something the ATI cards can do out of the box. If you want to drive 3 1680x1050 monitors you can buy a single $160.00 Radeon 5770 to do this well. To drive 3 1920x1080/1920x1200 monitors you could buy a single Radeon 5870. How much will 2 Fermi's cost?
quote: quote: NVIDIA doesn't appear to have a Low and Mid level DX11 card with FERMI and this is where the money is.
quote: There is a reason they took so long to release the cards and as anand says they are clearly looking to outperform 5970.
quote: NVIDIA is clearly aiming to be faster than AMD’s Radeon HD 5870, so form your expectations accordingly.
quote: There is one question that EVERYONE seems to be asking NVIDIA.
quote: Nvidia has the most powerful GPU, but ATI has the fastest single card solution.