Early AMD ATI "R600" Specs, Benchmarks Leaked
Anh Tuan Huynh
December 30, 2006 5:27 PM
Details of AMD's next-generation Radeon hit the web
A website has leaked benchmarks and specifications of AMD's upcoming ATI R600 graphics processor. The graphics processor is expected to launch in January 2007, with a revised edition arriving in March 2007. These early specifications and launch dates line up with what DailyTech has already published and with ATI internal roadmaps as of workweek 49.
Preliminary specifications of the ATI R600, as published by the site, are as follows:
64 4-Way SIMD Unified Shaders, 128 Shader Operations/Cycle
32 TMUs, 16 ROPs
512 bit Memory Controller, full 32 bit per chip connection
GDDR3 at 900 MHz clock speed (January)
GDDR4 at 1.1 GHz clock speed (March, revised edition)
Total bandwidth 115 GB/s on GDDR3
Total bandwidth 140 GB/s on GDDR4
Consumer memory support 1024 MB
DX10 full compatibility with draft DX10.1 vendor-specific cap removal (unified programming)
32FP [sic] internal processing
Hardware support for GPU clustering (any x^2 [sic] number, not limited to Dual or Quad-GPU)
Hardware DVI-HDCP support (High-bandwidth Digital Content Protection)
Hardware Quad-DVI output support (Limited to workstation editions)
230W TDP PCI-SIG compliant
This time around, it appears AMD is taking a different approach by equipping the ATI R600 with fewer unified shaders than NVIDIA's recently launched GeForce 8800 GTX. However, the unified shaders found on the ATI R600 can complete more shader operations per clock cycle.
ATI's internal guidance states the R600 will have 320 stream processors at launch; 64 4-way unified shaders account for only 256 of these stream processors.
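As a rough sanity check of that arithmetic (a minimal sketch of the assumed shaders-times-SIMD-width math, not taken from the leak):

# Assumed interpretation: each 4-way SIMD unified shader counts as 4 stream processors.
unified_shaders = 64
simd_width = 4
stream_processors = unified_shaders * simd_width
print(stream_processors)         # 256
print(320 - stream_processors)   # 64 stream processors unaccounted for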
The site claims AMD is expected to equip the ATI R600 with GDDR3 and GDDR4 memory, with the GDDR3-equipped model launching in January. Memory clocks have been set at 900 MHz for GDDR3 models and 1.1 GHz for GDDR4 models. As recently as two weeks ago, ATI roadmaps indicated this GDDR3 launch had been canceled. Those same roadmaps put the production date for the R600 at February 2007, which would be after a January 22nd launch.
Memory bandwidth of the ATI R600 is significantly higher than that of NVIDIA's GeForce 8800 series. Total memory bandwidth ranges from 115 GB/s on GDDR3-equipped models to 140 GB/s on GDDR4-equipped models.
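Those figures are consistent with a 512-bit bus and double-data-rate memory; a minimal sketch of the presumed calculation (our assumption, not spelled out by the source):

# Peak bandwidth = (bus width in bytes) x (effective transfer rate).
BUS_WIDTH_BITS = 512

def bandwidth_gb_s(memory_clock_mhz, data_rate_multiplier=2):
    effective_mt_s = memory_clock_mhz * data_rate_multiplier  # million transfers per second
    return (BUS_WIDTH_BITS / 8) * effective_mt_s / 1000       # GB/s

print(bandwidth_gb_s(900))    # GDDR3 at 900 MHz  -> 115.2 GB/s
print(bandwidth_gb_s(1100))   # GDDR4 at 1.1 GHz  -> 140.8 GB/s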
Other notable hardware features include support for quad DVI outputs, but utilizing all four outputs is limited to FireGL workstation edition cards.
There is also integrated support for multi-GPU clustering technologies such as CrossFire. The implementation on the ATI R600 allows any number of GPUs to operate together in powers of two. Expect multi-GPU configurations with more than two GPUs to be available only for workstation markets, though.
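To illustrate what "powers of two" scaling would mean in practice (a hypothetical example; the "any x^2 [sic]" wording in the spec list presumably means 2^x):

valid_gpu_counts = [2 ** n for n in range(1, 5)]
print(valid_gpu_counts)   # [2, 4, 8, 16] GPUs could operate together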
The published results are very promising, with AMD's ATI R600 beating NVIDIA's GeForce 8800 GTX in most benchmarks. The performance delta ranges from 8% up to 42%, depending on the game benchmark.
DailyTech contacted the site owner to verify the benchmarks. The owner replied that the benchmark screenshots could not be published due to origin-specific markers that would trace the card back to its source; the author mentioned the card is part of the Microsoft Vista driver certification program.
If the site's comments seem a little too pro-ATI, don't be too surprised. When asked whether the site was affiliated in any way with ATI or AMD, the owner replied with the statement that "two staff members of ours are directly affiliated with AMD's business [development] division."
RE: THE GUY IS LYING...
1/6/2007 10:50:32 AM
The TNT2 core is almost identical to its predecessor, the RIVA TNT; however, updates included AGP 4X support, up to 32 MB of VRAM, and a process shrink from 0.35 µm to 0.25 µm.
So yes, I would consider it a full generation.
The same goes for the GeForce 256 and GeForce 2.
The GeForce 256 was severely memory-bandwidth constrained.
And the same for the GeForce 3 and 4 (except for the MX cards, which were based on the GeForce 2 core).
The GeForce 2 Ultra could even outperform the early GeForce 3 models that were released, that is, until the Ti 500 came about.
Oh, and the GeForce 2 had pixel shaders :) But they were fixed function, and thus rarely used outside of tech demos.
The GeForce 3, however, added programmable pixel shaders and vertex shaders (1.1, I believe?) and LMA, and the last change was the anti-aliasing, which went from supersampling to multisampling.
Nvidia never released a low-cost version of the GeForce 3; GeForce 2s were still selling like hotcakes.
The GeForce 4 mainly just brought higher clock speeds, improved vertex and pixel shaders, and some more reworking of the anti-aliasing.
The GeForce 5/FX was the first time Nvidia combined the 3dfx team and their own to design a chip.
The only way Nvidia was able to remain competitive with the Radeon series was to optimize the drivers to the extreme. They used these "optimizations" to reduce trilinear filtering quality, apply shader "hacks," and keep the card from rendering parts of a world. The drivers looked for specific software and would apply the aggressive optimizations, and people noticed that the FX's picture quality was considerably inferior to the Radeon's.
The FX ended up a stand-alone series, and I think everyone wishes for it to be forgotten as a mistake...
The GeForce 6 and 7 are also similar, with mainly just adjustments to the number of pixel pipelines and clock speeds.
The GeForce 8, however... Well, I can't say much; the entire line-up hasn't been released, and I am sure everyone has been watching and reading about it like a hawk :)
All in all, I still like Nvidia's drivers better; they have support right from the TNT to the GeForce 8. That's 10 generations of 3D accelerator support right there!
And the drivers don't feel so... clunky. Mind you, they have come leaps and bounds from the Radeon 8500 days of hell.
All in all, Nvidia has used parts of all their graphics card line-ups. The GeForce 6 was based on the GeForce FX but changed a lot of stuff internally. When it was first released, it still used the same manufacturing process as the FX. Nvidia had a lot of work to do to turn the FX into what it was meant to be: the GeForce 6.
"I want people to see my movies in the best formats possible. For [Paramount] to deny people who have Blu-ray sucks!" -- Movie Director Michael Bay
"Prepare to be Punished": Microsoft is Killing OneDrive With Cuts, Blames Users
November 3, 2015, 8:23 PM
Apple's New "Magic" Peripheral Line Packs High Tech, High Prices
October 13, 2015, 9:39 PM
Samsung Adds 2 TB 850 EVO, PRO SSDs for $800, $1000
July 7, 2015, 4:23 PM
Seagate Senior Researcher: Heat Can Kill Data on Stored SSDs
May 13, 2015, 2:49 PM
How to Recover Most Apps After Your NVIDIA Driver Crashes in Windows 10
March 30, 2015, 12:54 PM
Tinkerer Gets Old School Mac Plus Running on the Modern Web
March 24, 2015, 6:41 PM
Latest Blog Posts
Sceptre Airs 27", 120 Hz. 1080p Monitor/HDTV w/ 5 ms Response Time for $220
Dec 3, 2014, 10:32 PM
Costco Gives Employees Thanksgiving Off; Wal-Mart Leads "Black Thursday" Charge
Oct 29, 2014, 9:57 PM
"Bear Selfies" Fad Could Turn Deadly, Warn Nevada Wildlife Officials
Oct 28, 2014, 12:00 PM
The Surface Mini That Was Never Released Gets "Hands On" Treatment
Sep 26, 2014, 8:22 AM
ISIS Imposes Ban on Teaching Evolution in Iraq
Sep 17, 2014, 5:22 PM
More Blog Posts
Copyright 2016 DailyTech LLC. -
Terms, Conditions & Privacy Information