Details of AMD's next generation Radeon hit the web

Newly created site Level 505 has leaked benchmarks and specifications of AMD's upcoming ATI R600 graphics processor. The graphics processor is expected to launch in January 2007, with a revised edition arriving in March 2007. These early specifications and launch dates line up with what DailyTech has already published and with what appears on ATI internal roadmaps as of workweek 49.

Preliminary specifications from Level 505 of the ATI R600 are as follows:
  • 64 4-Way SIMD Unified Shaders, 128 Shader Operations/Cycle
  • 32 TMUs, 16 ROPs
  • 512 bit Memory Controller, full 32 bit per chip connection
  • GDDR3 at 900 MHz clock speed (January)
  • GDDR4 at 1.1 GHz clock speed (March, revised edition)
  • Total bandwidth 115 GB/s on GDDR3
  • Total bandwidth 140 GB/s on GDDR4
  • Consumer memory support 1024 MB
  • DX10 full compatibility with draft DX10.1 vendor-specific cap removal (unified programming)
  • 32FP [sic] internal processing
  • Hardware support for GPU clustering (any x^2 [sic] number, not limited to Dual or Quad-GPU)
  • Hardware DVI-HDCP support (High-bandwidth Digital Content Protection)
  • Hardware Quad-DVI output support (Limited to workstation editions)
  • 230W TDP PCI-SIG compliant
This time around it appears AMD is taking a different approach by equipping the ATI R600 with fewer unified shaders than NVIDIA's recently launched GeForce 8800 GTX. However, the unified shaders found on the ATI R600 can complete more shader operations per clock cycle.

ATI's internal guidance states the R600 will have 320 stream processors at launch; 64 4-way unified shaders account for only 256 of those stream processors.
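
A quick back-of-the-envelope calculation shows the gap; this is only a sketch based on the leaked figures above, not on confirmed silicon details:

    # Sketch: reconciling the leaked shader count with ATI's internal guidance.
    # All figures are from the Level 505 leak / ATI roadmaps quoted above.
    simd_shaders = 64          # 4-way SIMD unified shaders (leaked spec)
    ways_per_shader = 4        # operations per shader per clock

    stream_processors = simd_shaders * ways_per_shader
    print(stream_processors)             # 256
    print(320 - stream_processors)       # 64 stream processors unaccounted for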

Level505 claims AMD is expected to equip the ATI R600 with GDDR3 and GDDR4 memory, with the GDDR3-equipped model launching in January. Memory clocks have been set at 900 MHz for GDDR3 models and 1.1 GHz for GDDR4 models. As recently as two weeks ago, ATI roadmaps stated this GDDR3 launch had been canceled. Those same roadmaps put the production date for the R600 at February 2007, which would fall after the claimed January 22nd launch.

Memory bandwidth of the ATI R600 is significantly higher than NVIDIA's GeForce 8800 series. Total memory bandwidth varies from 115 GB/s on GDDR3-equipped models to 140 GB/s on GDDR4-equipped models.
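
For reference, those figures follow directly from the 512-bit bus and the double-data-rate memory clocks; a minimal sketch of the arithmetic, assuming the leaked clock speeds above:

    # Peak memory bandwidth = bus width in bytes x effective (double data rate) clock.
    bus_bytes = 512 // 8                      # 512-bit memory controller = 64 bytes per transfer

    gddr3_bandwidth = bus_bytes * 0.900 * 2   # ~115.2 GB/s at 900 MHz GDDR3
    gddr4_bandwidth = bus_bytes * 1.100 * 2   # ~140.8 GB/s at 1.1 GHz GDDR4
    print(gddr3_bandwidth, gddr4_bandwidth)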

Other notable hardware features include hardware support for quad DVI outputs, though utilizing all four outputs is limited to FireGL workstation edition cards.

There is also integrated support for multi-GPU clustering technologies such as CrossFire. The implementation on the ATI R600 allows any number of ATI R600 GPUs to operate together in powers of two. Expect multi-GPU configurations with more than two GPUs to be available only for workstation markets, though.
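
Such a power-of-two restriction is trivial to express; here is a hypothetical sketch of the kind of check a driver might perform (the constraint comes from the leaked specs, not from any documented CrossFire API):

    # Hypothetical check: the leak says GPU clusters must be sized in powers of two.
    def valid_cluster_size(gpu_count: int) -> bool:
        return gpu_count > 0 and (gpu_count & (gpu_count - 1)) == 0

    print([n for n in range(1, 9) if valid_cluster_size(n)])   # [1, 2, 4, 8]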

The published results are very promising with AMD’s ATI R600 beating out NVIDIA’s GeForce 8800 GTX in most benchmarks. The performance delta varies from 8% up to 42% depending on the game benchmark.

When DailyTech contacted the site owner to get verification of the benchmarks, the owner replied that the benchmark screenshots could not be published due to origin-specific markers that would trace the card back to its source -- the author mentioned the card is part of the Microsoft Vista driver certification program.

If Level505's comments seem a little too pro-ATI, don't be too surprised.  When asked if the site was affiliated in any way to ATI or AMD, the owner replied to DailyTech with the statement that "two staff members of ours are directly affiliated with AMD's business [development] division."


Comments

RE: THE GUY IS LYING...
By masher2 (blog) on 12/31/2006 12:40:18 PM , Rating: 1
> "This is the email I just sent him, I am visibly irritated..."

If you're getting emotional over video card reports, you might want to relax and get out a bit more :) So what if you buy the 8800? A faster card will come out and beat it...if not in one month, then in 4-5 for sure.

> "No single graphics company has ever introduced 2 graphics generations in one year..."

Actually, this used to be quite common, back in the early days of NVidia. As technology matured, the cycles slowed down. Now, I'm not saying ATI is likely to do so this year. In fact, I find it rather unlikely. But the fact remains, it's been done before.


RE: THE GUY IS LYING...
By arturnowp on 12/31/2006 2:49:58 PM , Rating: 2
A new card and a new architecture are different things. It's possible that there will be a G81 or G85 by the end of 2007, but not a whole new GPU like a G90, not to mention an R700 or something. That happens every two years or so.


By KristopherKubicki (blog) on 12/31/2006 2:58:07 PM , Rating: 3
G80 was Nov 8, 2006
G70 was June 22, 2005
NV40 was April 15, 2004
NV30 was Jan 27, 2003


RE: THE GUY IS LYING...
By masher2 (blog) on 12/31/2006 6:39:37 PM , Rating: 2
> "New card and new architecture is diferent thing"

I know that. However, in the early days of GPUs, new architectures came much faster. The Riva TNT (NV4) came out in late 1998. The TNT2 (NV5) was early 1999. The GeForce 256 (NV10) was late 1999.

The OP's comment that "no one has ever" released two new architectures in less than a year is just plain wrong.


RE: THE GUY IS LYING...
By coldpower27 on 1/1/2007 2:26:24 PM , Rating: 2
The TNT and TNT2 would be considered one full generation.

The Geforce 256, with the Geforce 2 GTS and Ultra being its refreshes, would be considered the new architecture in relation to that.

I would lump Geforce 3 and Geforce 4 Ti technology together, then Geforce FX on its own, then the Geforce 6, and then the Geforce 7, though the Geforce 7 series didn't add that much functionality-wise compared to the Geforce 6; it was mainly performance improvements and cost benefits.


RE: THE GUY IS LYING...
By StevoLincolnite on 1/6/2007 10:50:32 AM , Rating: 1
The TNT2 core is almost identical to its predecessor, the RIVA TNT; however, updates included AGP 4X support, up to 32MB of VRAM, and a process shrink from 0.35 µm to 0.25 µm.

So yes, I would consider it a full generation.

The same would go for the Geforce 256 and Geforce 2.
The Geforce 256 was severely memory bandwidth constrained.

And the same for the Geforce 3 and 4 (except for the MX cards, which were based on the Geforce 2 core).


The Geforce 2 Ultra could even outperform the early Geforce 3 models that were released, at least until the Ti 500 came about.

Oh, and the Geforce 2 had pixel shaders :) But they were fixed function, thus rarely used outside of tech demos.

The Geforce 3, however, added pixel shaders and vertex shaders (1.1, I believe?) and LMA, and the last change was the anti-aliasing, which went from supersampling to multi-sampling.

Nvidia never released a low-cost version of the Geforce 3; Geforce 2s were still selling like hotcakes.

The Geforce 4 mainly just brought higher clock speeds, improved vertex and pixel shaders, and some more remodeling of the anti-aliasing.

The Geforce 5/FX was the first time Nvidia combined the 3dfx team and their own to design the chip.
The only way Nvidia was able to remain competitive with the Radeon series was to optimize the drivers to the extreme. They used these "optimizations" to reduce trilinear quality, apply shader "hacks," and make the card not render parts of a world. The drivers looked for the software and would apply the aggressive optimizations, and people noticed that the FX picture quality was considerably inferior to the Radeon's.

The FX would have been a stand-alone series, and I think everyone wishes for it to be forgotten as a mistake...

The Geforce 6 and 7 are also similar, mainly just adjustments to the number of pixel pipelines and clock speeds.

The Geforce 8, however... Well, I can't say much since the entire line-up hasn't been released, and I am sure everyone has been watching and reading about it like a hawk :)

All in all, I still like Nvidia's drivers better; they have support right from the TNT to the Geforce 8. That's 10 generations of 3D accelerator support right there!
And the drivers don't feel so... clunky. Mind you, they have come leaps and bounds from the Radeon 8500 days of hell.

All in all, Nvidia have used parts of all their graphics card line-ups. The Geforce 6 was based on the Geforce FX, but changed a lot of stuff internally. When it was first released, it still used the same manufacturing process as the FX. Nvidia had a lot of work to do to turn the FX into what it was meant to be: the Geforce 6.


"Nowadays you can buy a CPU cheaper than the CPU fan." -- Unnamed AMD executive










