



"R600" OEM image courtesy of PCinlife
320 stream processors, named ATI Radeon HD 2900

AMD has named the rest of its upcoming ATI Radeon DirectX 10 product lineup. The new DirectX 10 product family receives the ATI Radeon HD 2000-series moniker. For the new generation, AMD has appended HD to the product name to designate the entire lineup’s Avivo HD technology. AMD has also dropped the X prefix from its model names.

At the top of the DirectX 10 chain is the ATI Radeon HD 2900 XT. The AMD ATI Radeon HD 2900-series features 320 stream processors, over twice as many as NVIDIA’s GeForce 8800 GTX. AMD couples the 320 stream processors with a 512-bit, eight-channel memory interface. CrossFire is now supported natively by the AMD ATI Radeon HD 2900-series; the external CrossFire dongle is a thing of the past.

The R600-based ATI Radeon HD 2900-series products also support 128-bit HDR rendering. AMD has also upped the ante on anti-aliasing support: the ATI Radeon HD 2900-series supports up to 24x anti-aliasing, while NVIDIA’s GeForce 8800-series only supports up to 16x. AMD’s ATI Radeon HD 2900-series also supports physics processing.

New to the ATI Radeon HD 2900-series are integrated HDMI output capabilities with 5.1 surround sound. However, early images of AMD’s OEM R600 board show dual dual-link DVI outputs, which would leave the audio capability unused.

AMD’s RV630-based products will carry the ATI Radeon HD 2600 moniker with Pro and XT models. The value-targeted RV610-based products will carry the ATI Radeon HD 2400 name with Pro and XT models as well.  

The entire AMD ATI Radeon HD 2000-family features the latest Avivo HD technology. AMD has upgraded Avivo with a new Universal Video Decoder, also known as UVD, and a new Advanced Video Processor, or AVP. UVD previously made its debut in the OEM-exclusive RV550 GPU core. UVD provides hardware acceleration of the H.264 and VC-1 high definition video formats used by Blu-ray and HD DVD. The AVP allows the GPU to apply hardware acceleration and video processing functions while keeping power consumption low.

Expect AMD to launch the ATI Radeon HD 2000-family in the upcoming weeks, if AMD doesn’t push back the launch dates further.





Awesome
By tuteja1986 on 4/12/2007 10:33:13 PM , Rating: 2
I so can't wait. Looks like a total monster, but I do hope it consumes less than 250W.




RE: Awesome
By Lakku on 4/12/2007 10:41:16 PM , Rating: 1
All indications point otherwise, unless they were able to get the process down to 65nm like they were trying. At least, no leaked information suggests anything less than 240W, with some suggesting up to 280W if used with the 8-pin connector. Who knows, but unless AMD decides to take a big price hit, as in not get as much profit per card, I can't see this not being one helluva expensive card, due to the complex PCB for the 512-bit memory interface and the fact that the chip will have horrible yields.


RE: Awesome
By AntDX316 on 4/12/2007 11:49:16 PM , Rating: 1
I was reading this and I was like yeah, whatever, yeah, whatever... then I saw 320 stream processors? WTF! That should have insane performance.


RE: Awesome
By mezman on 4/13/2007 3:51:00 PM , Rating: 2
I don't know. How can ATI possibly keep all those stream processors fed all the time? Is that memory bus wide enough or the mem fast enough? I guess we'll have to wait for some benchies to be sure.


RE: Awesome
By InsaneScientist on 4/13/2007 4:28:10 PM , Rating: 4
Between the 512-bit bus and the rather high memory clocks, it's rumored that the high end version will have roughly 135GB/s of memory bandwidth.

Keep in mind, that's just a rumor.

It does make sense, though... 1050MHz DDR (probably GDDR4), or an effective 2100MHz memory clock (not unreasonable at all, considering the X1950's 1GHz, 2GHz effective memclock), times a 512-bit bus works out to approximately 134.4GB/s.
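
(A minimal sketch of that arithmetic in Python, assuming the rumored 1050MHz GDDR4 base clock and 512-bit bus from the post above; only the formula is general, the clock is speculation and the variable names are just for illustration:)

    # Peak memory bandwidth = effective clock * bus width in bytes
    base_clock_mhz = 1050
    effective_clock_mhz = base_clock_mhz * 2          # DDR: two transfers per clock
    bus_width_bits = 512
    bandwidth_gb_s = effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9
    print(round(bandwidth_gb_s, 1))                   # 134.4 (GB/s)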


RE: Awesome
By aftlizard01 on 4/13/2007 12:40:33 AM , Rating: 5
I can't remember where I read it, but part of the delay is that AMD wanted to move it all to 65nm and just have a small offering of 80nm ones. Also, supposedly the power savings from 80nm to 65nm are something like 30%, dropping the wattage requirements below 200 watts. Of course it is all conjecture on my part, with only hearsay to go by.


RE: Awesome
By Dactyl on 4/13/2007 3:47:09 AM , Rating: 2
The Inq is pushing that story hard and I believe their reporting.

AMD of course doesn't want to admit it because then people won't want to buy the 80nm cards.

AMD should simply be honest and open about it: mark the 80nm cards differently, let people know they're hotter/slower, and charge less money for 'em.


RE: Awesome
By kalak on 4/13/07, Rating: -1
RE: Awesome
By Spoelie on 4/15/2007 8:44:52 AM , Rating: 2
3*75, or 225, is the absolute maximum power draw the boards have, assuming 100% draw from every power connector; more is physically impossible. Your numbers are way off.

The actual power draw was more along the lines of 200W. The cooling device was designed for a maximum of 250W. And these numbers are from the early boards; one would hope that AMD has lowered them over time.


RE: Awesome
By Lakku on 4/25/2007 11:08:51 PM , Rating: 2
Except the R600 uses an 8-pin connector and a 6-pin, allowing for a maximum of 300 watts. 8-pin PCIe 2.0 offers 125+ watts, I believe. I know you can probably use two six-pin connectors, but who's to say that to get maximum benefit (i.e. for the XTX version) you don't need the 8-pin connector?
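
(For reference, a minimal sketch of the connector math behind both posts, assuming the usual PCIe budget of 75W from the slot, 75W per 6-pin and 150W per 8-pin connector; the helper function is just for illustration:)

    # Maximum board power = slot power + power from each auxiliary connector
    PCIE_SLOT_W = 75     # delivered through the x16 slot itself
    SIX_PIN_W   = 75     # per 6-pin PCIe connector
    EIGHT_PIN_W = 150    # per 8-pin PCIe 2.0 connector

    def max_board_power(six_pin=0, eight_pin=0):
        return PCIE_SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

    print(max_board_power(six_pin=2))                 # 225 W -- two 6-pin connectors
    print(max_board_power(six_pin=1, eight_pin=1))    # 300 W -- 6-pin plus 8-pin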


RE: Awesome
By hrah20 on 4/12/2007 10:55:54 PM , Rating: 2
Me too, can't wait for this card, just hope they really come out this time.


RE: Awesome
By aurareturn on 4/12/2007 10:59:12 PM , Rating: 1
Don't get too excited. It all sounds good but I doubt it will live up to its hype.

They always say "ooh, our new card can do 1 teraflop, our new card has 200GB/s of bandwidth, 5x the shading power, blah blah blah". In the end, the card will probably end up around 1.8x the speed of the X1950XTX.


RE: Awesome
By WayneG on 4/12/2007 11:23:31 PM , Rating: 2
quote:
by aurareturn on April 12, 2007 at 10:59 PM
Don't get too excited. It all sounds good but I doubt it will live up to its hype.

Erm, what are you on about... 320 stream processors are 320 stream processors. I'll be honest, I saw this and thought it was a late April fools. These specs look way too good to be true. Fact is, though, that we won't see the performance of this card really shine until a DX10 title comes along. The CPU that would be needed to feed this beast would be ridiculous in DX9. I am so buying this when the XTX is released. Moving from an oldie X1900XTX!


RE: Awesome
By Lakku on 4/12/2007 11:59:36 PM , Rating: 2
The CPU needed will matter in DX10 titles as well. With my current 8800GTX setup, I find situations more often than not where my quad core at 3GHz holds back the GPU, though this is less common in games like Oblivion and STALKER. With DX10 titles, many of them are going to take advantage of multi-core CPUs. Alan Wake, Bioshock, UT 3, SupCom (DX10 patch coming supposedly, but in its current form a quad core really helps), Crysis... they ALL take advantage of dual and quad core CPUs. Hell, Alan Wake requires at least a dual core, with quad core preferred. The system is a bottleneck nowadays, so 320 stream processors may not mean much, especially if it uses 240 watts and puts out 150+ degrees of heat. As a side note, we don't know what kind of stream processors it will be using; Beyond3D speculates about it if you are technically minded. It could be that one of ATi's shader processors isn't as efficient or powerful as one of nVidia's, but time will tell.


RE: Awesome
By DingieM on 4/13/2007 4:52:27 AM , Rating: 2
Err, AMD (or ATI) is at the second generation of its unified shader architecture, so it has the experience and the technology to be more efficient than nVidia.
Multiple sources within AMD have confirmed that the price tag of the 65nm parts, which are all of them (!), will drop considerably, meaning the consumer will have to pay $100 less for the same performance. Just an example.
I suspect that the 320 stream processors each do calculations with only 1 parameter, instead of the 4 or 5 parameters handled within the 48 (3 groups of 16) shader array of the Xenos inside the Xbox 360. It may be true that the R600 has 64 physical shader arrays that can each calculate 4 or 5 parameters at the same time, if I can describe it that way. Well, if you multiply 64 by 5 you get 320! The nVidia 8800GTX has 128 physical shader arrays, albeit each can only calculate 1 parameter at a time. So one can say 320 stream processors is marketing talk to compare it to nVidia, but it still has only 64 physical shader arrays (I think).
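
(A tiny sketch of that counting argument; the 64-unit, 5-wide layout for R600 is pure speculation from the post above, while the 128 scalar units for the 8800 GTX are known. The function name is just for illustration:)

    # A "stream processor" count depends on how wide each physical unit is counted as.
    def marketing_sp_count(physical_units, width):
        return physical_units * width

    print(marketing_sp_count(64, 5))     # 320 -- speculated R600: 64 vec5 units
    print(marketing_sp_count(128, 1))    # 128 -- GeForce 8800 GTX: 128 scalar units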

So the mass-production cards shall be 65nm: low power, low price and very high performance. Next to this, built-in CrossFire, HDMI and other goodies make this a true winner.

The major disadvantage is that the main CPU can't keep up.
That's why game consoles are so much more efficient with main memory; hence the consoles can push their graphics accelerators much closer to the limit (which especially holds for the Xenos inside the Xbox 360).


RE: Awesome
By Lakku on 4/13/07, Rating: 0
RE: Awesome
By Pirks on 4/13/2007 2:41:49 PM , Rating: 1
quote:
AMD can't make a 200 to 500 million transistor CPU at 65nm
Bullshit, AMD has been manufacturing 65nm Brisbanes for several months now :P OTOH I agree AMD won't make the R600 65nm for a while, just because the capacity is not there yet - all AMD 65nm fabs are busy cranking out Brisbanes, I heard (correct me if I'm wrong), so any new 65nm production lines AMD gets will immediately go the Brisbane way, not the R600 way. So forget about a 65nm R600 for a while; my guess is it's a year from now.


RE: Awesome
By Goty on 4/13/2007 4:50:37 PM , Rating: 4
AMD isn't going to handle the manufacturing of R600 right now, it's still being outsourced to TSMC.


RE: Awesome
By SilverMirage on 4/14/2007 3:07:37 PM , Rating: 3
Correct me if I'm wrong, but the manufacturing processes for Brisbane (SOI) and R600 (something else... I'm pretty sure) are substantially different, to the point that even if they were both 65nm, the Brisbane fabs would have to completely change the equipment they have.

This is the reason it's taking so long to make AMD Fusion. It is hard to fuse AMD CPUs with ATI GPUs because the companies' manufacturing processes are so different.


RE: Awesome
By Hydrofirex on 4/14/2007 3:08:32 PM , Rating: 3
Keep in mind the two companies aren't using unified manufacturing! ATi presumably still only has the manufacturing capability it had before AMD ever came along. Now, as that situation changes, we just might see that AMD and ATi knew exactly what they were doing. As AMD moves out ahead in processors and retires fab space for new generations, the ATi segment can come in behind and utilize the advances much more cheaply than having to outsource manufacturing like ATi has been doing.

HfX


RE: Awesome
By Hawkido on 4/14/2007 1:02:20 AM , Rating: 2
quote:
If AMD can't make a 200 to 500 million transistor CPU at 65nm, why would I believe ATi can make a 600 to 700 million GPU on such a process


Please keep in mind that the incredibly parallel, in-order processing nature of GPUs is much easier to shrink than the general-purpose, out-of-order processing that occurs in a CPU. Look how many transistors are in the Cell processor, look at the incredible clock rates possible, and also look at the limits of such processors. The limits in one area allow for easy shrinks and higher clock rates. This has been well documented on DailyTech and elsewhere.

To say it is true IS hoping, you are right. To say it will never happen (or only in late '08) is ignorant (I know you did not make that claim, but some are thinking it). TSMC is very capable of running the GPU production. They have the tech to do parallel, in-order processing fabbing at 65nm. However, I don't believe that AMD would trust TSMC to fab the newest AMD CPUs. I believe AMD will handle that in their own fabs till they stabilize the shrink and get in the first couple rounds of major revisions.

ATI has been on the inside track with Microsoft on DirectX 10. I think they didn't rush to DX10 because they knew a secret Nvidia didn't. Perhaps before DX10 actually gets deployed, DX10.1 will be released right on its heels, and ATI (now AMD) will be first to get the full, correct implementation out. Plus, ATI/AMD already knows how software programmed with roughly 80% DX10-style code works on its silicon, thanks to Xbox 360 games (which sit somewhere in the middle between DX9.x and DX10).


RE: Awesome
By coldpower27 on 4/13/2007 11:21:12 AM , Rating: 2
We don't know how these 320 stream processors stack up to Nvidia's at the current time. Like I have said before, it's an impressive tech specification, but performance is unknown.

You also buy a card for today's performance, and even though the 8800 GTX came out with no DX10 games, it was and still is a good DX9 performer, just like the Radeon 9700 Pro in its day.

I will wait to see if 320 Stream Processors are as impressive as they seem.


RE: Awesome
By someguy123 on 4/12/2007 11:26:34 PM , Rating: 2
Uh... 1.8x the speed of an X1950XTX is a pretty decent leap. The 8800 isn't even that fast (except in certain situations).

Plus, this isn't Sony we're talking about here :P aka that whole teraflop hype.

This card looks to be quite a bit faster on paper than the 8800 and, with the 512-bit bus, much more future-proof than the 8800 (although nothing is very future-proof in technology).


RE: Awesome
By aurareturn on 4/12/07, Rating: -1
RE: Awesome
By yxalitis on 4/13/2007 12:02:25 AM , Rating: 4
No need to rush a DX10 card to market... as there are STILL no DX10 games to utilise it... ATI were right to delay... I bought an 8800GTS 320, and will ditch it in favour of a card with decent Vista drivers... nVidia... Where are your freakin' WHQL Vista drivers???


RE: Awesome
By GlassHouse69 on 4/13/07, Rating: -1
RE: Awesome
By FITCamaro on 4/13/2007 6:54:30 AM , Rating: 1
Funny. People worldwide seem to be saying it's extremely stable and most don't see any difference when gaming vs. XP. A lot of the previous slowdowns were from incomplete drivers.

But hey, what do people who've actually used it know.


RE: Awesome
By therealnickdanger on 4/13/2007 10:11:18 AM , Rating: 4
For the games I play and the apps I use, Vista has been superb! I'm extremely happy with my decision to load Ultimate. XP is wonderful, but after using Vista for a while, XP feels dull and sluggish. Perhaps it is the magical SuperFetch of Vista or the slick interface, but I just enjoy using Vista and honest to God (hand on the Bible), it is faster. Games are smoother, maps load quicker, everything is easier.

I'm quickly becoming a fan of Vista.


RE: Awesome
By JimFear on 4/13/2007 10:28:47 AM , Rating: 3
The good thing about Vista is that DX is integral to the OS this time. I've been using it since November, and aside from a few oddities it's been fantastic.


RE: Awesome
By eman 7613 on 4/14/2007 3:59:55 PM , Rating: 1
I've got all 3/4 OSes in my house (XP, Mac, and Ubuntu (Linux)) and used to have Vista. Generally, the reason to get XP was that you can play games on it. XP and Mac tend to run much faster and cleaner IMO, and using Wine I can run DX9 games just fine on each. In fact, I can even play my favorite old game Lemmings (which farts up strange colors on XP) just fine on them. So that's really moot. I expect, once DX10 proves itself viable, Wine will get to work on running DX10 on the penguin and its cousins too.

The other reason is Aero (which personally I don't like), but you can't exactly customize it. If you want to see how a 3D desktop should be implemented, watch Beryl on YouTube!


RE: Awesome
By Pirks on 4/13/07, Rating: -1
RE: Awesome
By aurareturn on 4/13/2007 2:06:08 AM , Rating: 2
The 8800GTX release wasn't much about DX10. Everyone knew that there wouldn't be any DX10 games until towards the end of 2007.

The point is, ATi is at least 8 months late; they'd better deliver some crazy performance.


RE: Awesome
By someguy123 on 4/13/2007 12:56:13 AM , Rating: 2
The 1.8x leap didn't take place in October with nvidia either... it was more of a 1.3~1.5x "leap" in certain situations. Yes, it was quite a bit faster than the fastest solution out, but it was in no way a GIGANTIC leap forward. The truth is no one knows how this thing will perform, but if the specs really are as good as they say, then the only thing that can hold this card back from being an amazing performer is the drivers.

Like someone said before, stream processors are stream processors. Unless they are ridiculously underclocked to meet some heat requirement, they will perform better than the G80 line.


RE: Awesome
By aurareturn on 4/13/07, Rating: -1
RE: Awesome
By someguy123 on 4/13/2007 2:21:40 AM , Rating: 3
I thought you were trying to bring up a real argument, but I guess you're just a fanboy...


RE: Awesome
By tdream on 4/13/07, Rating: -1
RE: Awesome
By DingieM on 4/13/2007 7:16:35 AM , Rating: 2
Well, don't complain when the R600 kicks your 8800GTX's butt, and at what is highly expected to be a much lower price, even more so in DX10.
What do you call a nice game?


RE: Awesome
By AndreasM on 4/13/2007 8:43:01 AM , Rating: 3
He won't, as he will have had a new card for 6 months already, which is a long time in computer terms. It would be bad if the R600 wouldn't be able to beat such an old GPU.


RE: Awesome
By kalak on 4/13/2007 11:26:46 AM , Rating: 2
quote:
I only want the best that technology has to offer at that moment in time.


Do you want THE best? Then it's certainly worth the wait....


RE: Awesome
By leexgx on 4/15/2007 6:24:00 PM , Rating: 2
I agree with some of his comments.

Drivers were the only thing that stopped me from ever buying an ATI card, and they still hold me back a little (I never had any problems up to Vista, as the drivers for that OS are bad, which is why I am dual-booting, ready for DX10 games).

ATI's drivers, I believe, are a little more CPU-friendly than Nvidia's.

It will be very interesting when ATI/AMD bring their new card out.

A single 8800GTX is faster than the X19xx cards by a lot; its limiting factor is the CPU, and this is due to DX9.

When the ATI 2000 cards come it will be an interesting card; 320 stream processors seems a little high, but it makes for interesting numbers.

When the 8800 came out I knew it was going to wipe the floor with all of their older cards. When they bring out their lower-end ones they might as well just stop making the 7xxx cards and lower; it will be interesting to see an 8100 or 8200 card, as they should be fairly powerful compared to the older cards like the 7200.


RE: Awesome
By coldpower27 on 4/13/2007 11:17:27 AM , Rating: 2
You can't compare across different architectures without more data, 320 stream processors sounds impressive, but 48 pixel shaders on Nvidia side wasn't 2x as fast as Nvidia's 24 last generation.

Without more data it still is hard to know how this card will perform. 320 Stream Processors is an impressive tech specification for now but until we see how it performs, that is all it is.


RE: Awesome
By coldpower27 on 4/13/2007 11:22:01 AM , Rating: 2
"48 Pixel Shaders on the ATI side"


RE: Awesome
By InsaneScientist on 4/13/2007 4:38:22 PM , Rating: 2
Depends on the application...

Those 48 shaders on the ATI side will utterly erase nVidia's 24 in something like Oblivion, and keep pace with them pretty well in just about everything else.

You're right, though... we won't be able to tell without actual benchies... this might be another case of fewer shaders that are more complex, versus more, simpler ones. One will likely do well in one application, whilst the other will do well in others.


RE: Awesome
By coldpower27 on 4/13/2007 10:06:22 PM , Rating: 2
Not on a per-shader basis, no; each one is less efficient, even in Oblivion, compared to Nvidia's. Outside of foliage, where ATI has a significant lead on the order of 40%, they are basically dead even in Oblivion.

I agree that different architectures have different strengths.


RE: Awesome
By InsaneScientist on 4/14/2007 6:04:40 AM , Rating: 2
Huh?

I think you just said the same thing I did, just in a different way...

The point I was making is this: ATI has more shaders, but they're simpler, can do less work per cycle, less efficient - as you put it.
nVidia's got fewer shaders, but they're more complex and can handle processing more in a single cycle.

Overall, it tends to balance out, though there are a few cases (such as Oblivion, like I mentioned) where one architecture handles the code better than the other.

Are we agreeing? Or are you saying something totally different? :-S


RE: Awesome
By coldpower27 on 4/13/2007 11:38:04 AM , Rating: 2
The leap was 1.4x-2.0x at 16x12 and above, with AA and AF applied, and this is compared to the X1950 XTX. It simply depends on what you're looking at; TechReport shows different results than Anandtech on this. If you compare to the 7900 GTX, then the performance improvement would be slightly better.

There is no use comparing the X1950 XTX to the 8800 GTX at 10x7 or 12x10, as both cards will provide playable performance at such low resolutions. Unless you're talking about something new like G.R.A.W, which the X1950 XTX gets an average of 50 FPS on at 12x10.


RE: Awesome
By Lakku on 4/13/2007 1:10:41 PM , Rating: 2
So 50fps isn't playable enough for you?


RE: Awesome
By coldpower27 on 4/13/2007 10:42:33 PM , Rating: 2
Well, since the 8800 GTX gets 91.6 FPS at the same settings, suffice it to say, no, 50 FPS won't do, as now I can crank up the resolution and play at 19x12 at more or less the same FPS. So the 8800 GTX represents a considerable improvement.


RE: Awesome
By InsaneScientist on 4/13/2007 6:18:33 PM , Rating: 2
quote:
Unless you're talking about something new like G.R.A.W, which the X1950 XTX gets an average of 50 FPS on at 12x10.

Oblivion with all the eye candy turned up. :D


RE: Awesome
By coldpower27 on 4/13/2007 10:38:49 PM , Rating: 2
http://www.techreport.com/reviews/2006q4/geforce-8...

Hmm, at 20x15 with 4xAA, HDR, TrAA and 16xAF, the 8800 GTX is about 94% faster than the X1950 XTX. That's nearly 2x to me, nowhere near the 1.3-1.5x some people are quoting.


RE: Awesome
By InsaneScientist on 4/14/2007 6:13:14 AM , Rating: 2
Huh?

As I understood it, you were saying that there wasn't much point in benching anything at 1024*768 or 1280*1024 since most games are CPU bound at that resolution, unless it's something like G.R.A.W.

I was just pointing out that Oblivion with all the eye candy on is something else that can tax a modern GFX card to its limits and allow the 8800 to flex its muscles, even at 1280*1024 (though probably not 1024*768)


RE: Awesome
By justjc on 4/13/2007 11:08:30 AM , Rating: 2
Actually there can't be much doubt that the stream processors on ATI's new card will be able to calculate more gigaflops than the 8800GTX, as their current X1900 family is already quite a bit ahead. (X1900 series = 554 GFLOPS vs 8800GTX = 330 GFLOPS, according to Wikipedia.)

The big question is how they will use that power and whether it will give a substantial advantage over the 8800 GTX when it comes to graphics processing.
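
(The GFLOPS figures people quote generally come out of a simple peak-throughput formula, and most of the disagreement is over how many FLOPs each ALU is credited with per clock. A hedged sketch using the 8800 GTX's known 128 scalar ALUs at a 1.35GHz shader clock and counting a multiply-add as 2 FLOPs; crediting an extra MUL per clock is what produces the higher figures some sites list. The function name is just for illustration:)

    # Peak GFLOPS = ALUs * FLOPs per ALU per clock * shader clock in GHz
    def peak_gflops(alus, flops_per_alu_per_clock, clock_ghz):
        return alus * flops_per_alu_per_clock * clock_ghz

    print(peak_gflops(128, 2, 1.35))    # 345.6 -- GeForce 8800 GTX, MADD only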


RE: Awesome
By Lakku on 4/13/2007 1:15:12 PM , Rating: 2
I don't believe that value for the X1900, though the 8800 figure is about correct. If that value is correct for the X1900 series, I hope it's for two cards in CrossFire, because if not, that means a single X1900 does more gigaflops, or about the same, as an R600. The R600 gets around 1 teraflop in XFIRE, meaning TWO cards, if those reports around the time of CeBIT are to be believed.


RE: Awesome
By DingieM on 4/16/2007 6:08:07 AM , Rating: 2
The ATI stream thingie based on the R580 core, running at 650MHz with 1GB of memory, reaches 360 GFLOPS for the Folding@Home program - which will set you back $2600...
A fast Intel quad core running at 2.66GHz can barely do 85, and an IBM Cell about 200.

I think the R600 can be many times faster than the R580, but I'm guessing something like 2 times faster.


RE: Awesome
By goatfajitas on 4/21/2007 12:59:17 PM , Rating: 2
True... we need to wait for the final release and independent benchmarks. Until then, it's all hype.


RE: Awesome
By Alpha4 on 4/12/2007 11:43:15 PM , Rating: 1
quote:
I so can't wait. Looks like a total monster, but I do hope it consumes less than 250W.
HAHA! I laughed out loud when I read this post but then I realized... He's probably right. And then I laughed even harder.


RE: Awesome
By DingieM on 4/13/2007 4:55:18 AM , Rating: 2
I really think it does use less than 250W because it's on 65nm.
There are still some 80nm parts left, but I wonder how they are going to drop these "obsolete" parts...


RE: Awesome
By Targon on 4/13/2007 11:44:07 AM , Rating: 2
That's simple, they will sell them to OEMs since most people who buy OEM computers don't swap out parts.


RE: Awesome
By Mithan on 4/13/2007 1:50:52 AM , Rating: 2
Same. I will consider buying one, but not if my OCZ 620 can't handle it.


RE: Awesome
By muzu x2 on 4/13/2007 2:17:14 AM , Rating: 2
looks good, monster is coming...


RE: Awesome
By otispunkmeyer on 4/13/2007 3:52:14 AM , Rating: 2
Gibbo from overclockers.co.uk has gotten an R600 XT card for testing. He is under NDA, but from the looks of it he's had to use 2x 6-pin connectors for the card.

I dunno where that puts the power consumption, but I'm guessing it's over 200W.

However, he did say that the card is very quiet in operation, and doesn't burn your fingers to the touch.

He also mentioned that OpenGL performance was pretty damn good, and that DX performance is a little patchy due to immature drivers. He thinks the drivers in Vista aren't too bad either.

I can't wait for these to arrive! HURRY UP MAN!

I'll be looking at the 8600GTS vs HD 2600XT probably, or perhaps the 8800GTS 320/640 vs whatever AMD have lined up to combat that.


So I'm guessing...
By DeathBUA on 4/12/2007 10:50:21 PM , Rating: 2
These Stream Processors are simpler than the Nvidia ones?? As in nvidia used scalar stream processors and perhaps ATI/AMD is using vector shader units....???

Either way finally getting some competition is a good thing!




RE: So I'm guessing...
By Pitbulll0669 on 4/12/07, Rating: -1
RE: So I'm guessing...
By kiwik on 4/12/2007 11:16:35 PM , Rating: 4
NEWS FLASH! There are no DirectX 10 games ANYWAY!

BTW, I own an X1900XT so don't call me a fanboy.


RE: So I'm guessing...
By aftlizard01 on 4/13/2007 12:37:24 AM , Rating: 2
I thought the last Flight Simulator was DX10? Either way, it doesn't matter; you will still see a marked improvement in DX9 games too.


RE: So I'm guessing...
By Doormat on 4/13/2007 1:28:37 AM , Rating: 2
No, supposedly it will be at some point, but it'll probably be an expansion pack. After seeing MS price the GH song packs at $2/song, I really don't think they'd give away that upgrade from DX9 to 10 for free.


RE: So I'm guessing...
By BikeDude on 4/13/2007 1:51:44 AM , Rating: 2
They will release a DX-10 patch later this year, but at the moment FSX is not utilizing DX-10.

Before that happens, they'll release a service pack that will help FSX take advantage of multiple cores. SP1 is in beta, and the DX-10 patch will not appear before autumn, I think.


RE: So I'm guessing...
By FITCamaro on 4/13/2007 6:56:56 AM , Rating: 1
I don't believe DX10 is even complete yet. Much less any games using it.


RE: So I'm guessing...
By DingieM on 4/13/2007 10:37:24 AM , Rating: 1
The Inquirer tested FSX but they were hugely disappointed in the piece of bloatware M$ called Flight Simulator X.
They found it terribly slow. But maybe that's just because it wasn't finished and/or optimized?


RE: So I'm guessing...
By InsaneScientist on 4/13/2007 6:21:08 PM , Rating: 2
Uhhh... DX10 is definitely complete... it has been for a while - like, since Vista was released. ;) :D


RE: So I'm guessing...
By eman 7613 on 4/15/2007 4:12:53 PM , Rating: 2
Then why can't anyone seem to get anything working correctly under it? The only thing we have seen work is DX9L, the Vista version of DX9. As I said earlier, DX10 has yet to be proven viable; it may look good on paper, but so did ME ;p


RE: So I'm guessing...
By AnotherGuy on 4/13/2007 1:21:54 AM , Rating: 5
lol this guy sounds like 13 years old... 14 tops :)


RE: So I'm guessing...
By nerdye on 4/12/2007 11:13:42 PM , Rating: 4
Yes, competition is a good thing; this is what all of us enthusiasts have needed, competition on the high end of the GPU spectrum. Remember the X1900XTX that came out on top of an already dominant 7900GTX? Remember the X1950XTX that came out on top of the already dominant 7950GX2 (well, at least as far as one piece of PCB goes)? Now are we seeing the same thing, ATI winning for half a generation? This is pure speculation but hell, I don't care, competition is good for our wallets, fellow hardcore hardware peoples. This calls for celebration, finally!


RE: So I'm guessing...
By Spoelie on 4/15/2007 9:02:12 AM , Rating: 2
Ahem, what you're saying is not really possible.

Scalar stream processors work on one piece of data at a time; you can't really make them simpler (a processor that works on 0 pieces at a time is less complex, but not really all that useful :p), only make them work at a slower clock speed or have an add or multiply take more cycles.

The definition of vector stream processors is just that: they work on vectors of data, so multiple pieces of data. Meaning that a vector processor array would be even faster yet less flexible. 320 vec4 stream processors would be out of this world.


Shuffling the (paper) deck again....?
By kilkennycat on 4/13/2007 2:35:00 AM , Rating: 3
Interesting timing for this 'leaked information', 3 days before the mass-market nVidia DX10 cards are due to hard-launch. Note the usual 'paper pre-launch' characteristics... no ship dates whatsoever.

Also, please note the following points:

(a) a higher numerical count of a named entity ("stream processor" or "memory bus width") does not automatically mean better... the devil is truly in the implementation, both in the hardware design details and the efficiency of the associated driver software. Wait for the test-reviews from "trusted sources" such as Anandtech.

(b) nVidia is currently rolling out the second-generation design of the 8800-family with high-precision math to truly satisfy its intended dual function as a GPU and a general-purpose parallel-processing number-cruncher. Likely on 65nm and enhanced all around. Release expected before end-2007. Maybe earlier if the R600 becomes a threat?

(c) If the current R600 is not implemented on 65nm but on 80nm, the production run will be very limited - very low yield, huge power consumption. There is a rumor that it is indeed on 80nm and that, due to poor yield, only 20,000 will be available in the current form. Maybe the board pictured will only be made available to developers and never shipped. Or if shipped, those who purchase them may look very sad very quickly. Remember the nVidia FX5800 and the ATi X1800 sagas...

(d) Competition at the high-end is truly excellent news for the budget-conscious 'enthusiast'. I personally would drool over an 8800GTX/768MB (or two!) being available for ~$300 each if the ATi product really is overwhelmingly competitive at the high end and arrives near the current GTX price. I personally do not need to drive a 2560x1920 display at hundreds of FPS, with umpteen dimensions of AA, assuming that the CPU (multicore or not) can hack the data delivery to the GPU. It seems as if every "enthusiast" computer user with these requirements has super-human visual acuity, or a massively large super-high-res screen, or is ultra-myopic and, being 2 inches from the screen, can see every pixel blemish.




RE: Shuffling the (paper) deck again....?
By Kougar on 4/13/2007 4:07:47 AM , Rating: 4
A) That is true. However, comparing all the details already known makes a very compelling case for the 2900 being a vastly superior chip. The core clock is higher, there is twice the RAM, the memory bus is twice as wide, and finally there is more than twice the shader count, just to name some quick examples. Also, if past history is any indication, ATI has usually developed "more powerful" double-precision cores, which let them run HDR+AA where nVidia's G79 can't. F@H results using nVidia's best G79 hardware are another example. The 16x AA vs 24x AA will soon be another.

B) What second generation is this... G89? I'd be truly amazed if AMD launches K10 before R600, which means "end of 2007" will be over half a year away. Perhaps nVidia can manage to catch up with current, mostly bug-free drivers for ALL of their cards in this timeframe. Things may not be that pretty when MS finally releases DX10 and Nvidia must respond with new drivers for every series of cards all over again! F@H can't even use the G80, not due to hardware issues but due to too many bugs in Nvidia's code and driver set; this was stated by their GPU team in their forums.

C) While the production yield may be higher with 65nm, not to mention better power characteristics... you need to remember your favorite 8800GTX card is a 90nm-based product, which means ATI, even on an 80nm process, still has the edge here. Also, there are two boards of R600, one of them being 9.5" with a less spectacular thermal/power envelope. That is not a large card, considering it will be the same size as an 8800GTS and actually smaller than an 8800GTX.

D)Just because you don't have any use for the hardware doesn't make it worthless. Some people have the money for 3007WFP-HC's, or are willing to make the money to get them. Another thing is anyone can do F@H work with x1600, x1800, and x1900/1950 class cards, so rendering games is not the only incentive to own an R600. Again if you see no reason to run distributed computing projects then that does not instantly make the hardware useless. If you have no reason to be driving a 30" display with maximum effects, then owning two (more expensive) G80 cards would be a bit silly anyway.


RE: Shuffling the (paper) deck again....?
By Ard on 4/13/2007 10:52:57 AM , Rating: 3
A. The only advantage you can take away from all these "numbers" is that ATI will have significantly more bandwidth. That's it. Having twice the stream processors means nothing if they're simpler or clocked slower. 16xAA vs. 24xAA is a moot point.

B. NVIDIA won't have to respond with new drivers once DX10 titles are released. What do you think they've been working on all this time? Damn sure isn't DX9 that resulted in them rewriting 20 million lines of code.

C. 80nm doesn't offer any advantages over 90nm. In fact, it's actually worse from a power consumption point of view in most instances. Much like 110nm vs. 130nm, 80nm is a cost-saving measure, nothing more. And the 9.5" board you speak of still pulls down significantly more power than either the GTS or the GTX. Why they even need a 12" board is beyond me, but it doesn't bode well IMO.

D. If you're spending $500+ on a GPU to fold, there are other issues going on.


By Kougar on 4/13/2007 5:20:36 PM , Rating: 2
A) I don't think ATI's announcement of 24x AA is a moot point. They would not create a new, higher level of AA unless their hardware could give it a good showing; otherwise they would be shooting themselves in the proverbial foot.

B)Honestly, I'm not really sure anymore. Their G79 drivers date back to November 2nd of last year, and it took until April 5th for them to release a 2nd G80 driver for XP. Are you certain when Vista changes from DX9L to a full DX10 their Vista drivers will still work and not need a refresh? I'm not so sure they won't, but I'd be happy to be wrong here.

C) 80nm is still an advantage, however insignificant. I merely pointed it out to refute the other poster's point here. ATI has always had a history of producing more power-hungry GPUs that outperform nVidia's... I still feel R600 will take the performance/watt lead, and even if those are very weak 320 shaders then the 65nm R650 is a given, probably in time for G89. The huge board seems to be a pretty smart way to sell lower-quality cores that require higher voltage to run at the same speeds, and sell them to OEMs who generally won't care, since they don't tend to sell their PCs in windowed cases.

D) You missed the point, or simply ignored it. I'm spending $500 for the best GPU, higher power consumption included, to power a 24" monitor and also run F@H. My point here was that the R600 has multiple uses; so far the G80 still does not.

I also want to try a non-nVidia card in this system, since CoH, which is "Best Played on nVidia Hardware" and loves to advertise that fact, artifacts galore on any and all nVidia hardware. It's pretty poor when games designed to run on nVidia hardware can't even do that right, and I'm tired of the artifacts. Reformats, 4 different nVidia cards, and three driver sets didn't make a bean of difference. Switching to ATI will answer whether it is a poorly coded game, or some issue with nVidia hardware/drivers.


RE: Shuffling the (paper) deck again....?
By yacoub on 4/13/2007 8:10:46 AM , Rating: 5
NVidia lost my interest with the last two news items:

-The 8600GTS barely keeping par with a 7900GT in most circumstances.

-The 8900GTX (or whatever it's being called) hinting at $999 MSRP.

Way to completely lose the initiative in the battle, NVidia. Weaker-than-expected performance for your mainstream-level cards, and continuing the trend of over-charging for your top-end flagship card.

Meanwhile ATi is hinting at better performance at an equal or lower price point than the currently available and soon-to-be-released NVidia products.

That gives us every reason to be interested and excited about these cards.


By coldpower27 on 4/13/2007 11:44:32 AM , Rating: 2
If ATI's cards are as competitive as some hope them to be, then they will bring Nvidia's prices down, even the rumored 8800 Ultra, assuming ATI beats it of course.


By smut on 4/13/2007 1:14:50 PM , Rating: 2
8600GTS cards are out and available to buy in retail??


damnit
By medavid16 on 4/12/2007 11:28:18 PM , Rating: 2
Just when I got an 8800GTX.

The antialiasing I could live without; 16x vs 24x... that's just a pissing contest on paper. Where the real money lies is the 320 stream processors. If the early nvidia offerings show what this new approach has to offer, this just may deliver on the hype, or at least come close to it. Only time will tell.




RE: damnit
By jlanders646 on 4/12/2007 11:42:47 PM , Rating: 2
I've been an ATI fanboy from the beginning; ah, the good old days of my 9200SE PCI edition :) Somehow I ended up with a 6600GT as of now, but I've been waiting for this card. ATI has ALWAYS done me right. I will be buying the first one I see!!


RE: damnit
By Alpha4 on 4/12/2007 11:58:24 PM , Rating: 2
Heh. The performance leap in your case will be incomprehensible. Are you sure the rest of your hardware is up to the task?


RE: damnit
By tuteja1986 on 4/13/07, Rating: -1
RE: damnit
By JoKeRr on 4/13/2007 12:54:17 AM , Rating: 2
Isn't the G80's stream processor clocked at like 1.3+ GHz??? Wonder what the clock speed of the HD2900's stream processors will be.


RE: damnit
By jlanders646 on 4/13/2007 2:13:24 AM , Rating: 2
I've been saving for two years waiting on this video card, so yes, it will be an insane difference going from a 2800+ Athlon with a 6600GT to this card. I also want the 3.0GHz dual-core AMD processor! AMD and ATI fanboy, sorry!


RE: damnit
By Munkles on 4/13/07, Rating: -1
RE: damnit
By mm2587 on 4/13/2007 10:22:55 AM , Rating: 3
While I don't doubt your issues, I find it amusing you had issues with your X2 machines, but no problems at all with the 680i, boards notorious for their amazing performance and their headache-inducing BIOS issues, and the 8800GTX, which has caused the biggest driver fuss I've seen since ATI introduced CCC.

Oh, and AMD has not been underperforming for the last two years. The year and a half or so since Conroe, yes. Claiming the six months prior is simply laughable.


RE: damnit
By Munkles on 4/13/2007 11:09:30 AM , Rating: 2
mm2587,

A few points I would perhaps like to clarify.

1) Yes, people have had a lot of issues with the 680i boards, but the ONLY thing I had to do to get my computer running (from the BIOS anyway) was enable my SATA. Honestly, no other configuring, no changing RAM voltages, memory timings, etc. It all worked right out of the box. That said, I haven't yet overclocked it, because at the moment it's only being used for WoW, CC3, and STALKER, and since it plays all those games without any framerate hiccups I see no reason to tinker around with my clock.

2) When I said underperforming I was encompassing the laissez-faire attitude they have had toward the processor market since the initial release of the X2. There have been no major revisions, and they have NOT been upfront with consumers about their future plans other than to take the "just you wait and see!" approach. Intel, on the other hand, has been telling us every move that they plan to make, and the expected performance gains therein, and delivering said products to market in great quantity, allowing system builders to make good, informed decisions.

On the ATI side, they have done the exact same thing. You see many promises but nothing delivered. I've had nothing but headaches with my ATI drivers. They have at some point conflicted with almost every component in my computer, and it's not from a lack of buying quality components. So when I said underperforming I wasn't strictly speaking of instructions per cycle and overall clock. I was referring primarily to their failure to treat me, a LOYAL customer and consumer, properly. They did not respect my dollar and now they've lost it.

I have no ill will against the two; it's just that at this point they need as big an overhaul as Intel did back in the P4 days.


RE: damnit
By mm2587 on 4/13/2007 2:24:02 PM , Rating: 2
Fair enough, you make valid points. I don't think AMD was prepared for Conroe, or more importantly the redefined Intel we've been seeing these last 18 months.

On the nvidia-ATI side of things I'd have to say my personal experience has been the opposite of yours. I've had 2 bad motherboards (I haven't gotten the chance to play with a 680 yet though), a 7900 that wasn't stable out of the box (failed Deep Freeze), and my replacement 7900 has been nothing but headaches. (I still can't get STALKER running smoothly, though I'm sure that's more of an issue with the game itself, which is an entirely different topic.)


RE: damnit
By Spoelie on 4/15/2007 9:10:16 AM , Rating: 2
Conroe has been available since August, less than a year ago. Not a year and a half ago.


RE: damnit
By InsaneScientist on 4/13/2007 2:32:53 AM , Rating: 2
Yep.

1.35GHz for the 8800GTX and 1.2GHz for the 8800GTS


Antialiasing is just getting stupid
By Madzombie on 4/12/2007 10:46:03 PM , Rating: 4
So they've now pushed antialiasing up to 24x? I struggle to see the difference when AA is increased above 6x (or 8x in the case of nVidia), so this just seems like a pointless waste of processing power. As this is targeted at the enthusiast market, most users will already have very high-resolution screens, so improvements in AA will be even harder to detect due to the small size of the pixels.




RE: Antialiasing is just getting stupid
By tekzor on 4/13/2007 12:06:37 AM , Rating: 2
I wish we would get improvements like that in the laptop graphics department. Something along the lines of a mobile GPU that can give at least 4x AA and 6x AF without being too large.


RE: Antialiasing is just getting stupid
By GlassHouse69 on 4/13/07, Rating: -1
By James Holden on 4/13/2007 12:29:22 AM , Rating: 5
RE: Antialiasing is just getting stupid
By therealnickdanger on 4/13/2007 8:33:32 AM , Rating: 2
My 17" 1920x1200 LCD on my laptop can see AA improvements quite well, thank you! I erased XP MCE and loaded Vista Ultimate along with some hacked 100-series NVIDIA drivers (since my GeForce GO 7800 is a very strange duck and isn't supported by conventional drivers). My games run measurably faster under Vista and with the new drivers.

What is really cool is that the new drivers have a feature that specifically allows me to set the native resolution of my display and then choose whether I want the LCD's scaler to interpolate lower resolutions or allow the GPU to scale lower resolutions to it. It is not AA, since it doesn't render more pixels in 3D, but it results in a jaggy-free image, minus the performance hit. They have essentially built a very powerful video scaler into the driver aimed at LCD displays. I only mention it because of how sharp it is; I can barely tell the difference between it and actual AA, except I don't drop any frames. Very cool... and over a year old...


RE: Antialiasing is just getting stupid
By dice1111 on 4/13/2007 10:08:28 AM , Rating: 2
What are these drivers called so I can have a look for them?


By therealnickdanger on 4/13/2007 3:42:17 PM , Rating: 2
It's really odd, the GF GO 7800 is an exceptional card, but it was really OEM specific, so much so that NVIDIA doesn't offer a driver on their website. You basically can choose between Dell's driver (slow clocks, no OC, not the best performance) or a hacked driver that gives you all the modern benefits (OC, deeper 3D settings, PureVideo tweaks, way better performance).

The driver install I went with was the only 100-series WHQL driver present for Vista. It's a three part method.

1. Download and extract the 100.65 driver:
http://www.laptopvideo2go.com/nvidia/100series/100...

2. Download the modified INF file that contains all the cards that are not officially supported by the release and replace the nv_disp.inf in the extracted folder:
http://www.laptopvideo2go.com/infs/100series/10065...

3. Execute the setup file and proceed as normal.

Another set that I have used in the past are the XG drivers which are modified by Tweaks'R'Us. These drivers have a TON of features and if you don't know what you're doing, you could vaporize your GPU... LOL! I was able to OC my GO7800 from 250/650 to 412/890 with only a 4C increase in load temps.
http://www.tweaksrus.com/


Let me remind all of you nay-sayers about something...
By Goty on 4/13/2007 9:20:31 AM , Rating: 2
Anyone remember the 8500-9700 gap? Yeah, NVIDIA got a whole generation in there in between the 8500 and the 9700, and do you remember what happened? NVIDIA got its butt kicked all over the place by the 9700 for TWO generations (the GeForce 4 and FX series). I wouldn't be in the least bit surprised if the same thing happened again. ATI has always had good reasons for being late (well, with the exception of the X1800 series).




By Goty on 4/13/2007 5:09:39 PM , Rating: 2
I need to learn MY history?! The GeForce4 series was an entire generation ahead of the 8500; of course it ran circles around it! If I recall correctly, the GeForce4 series was still on sale and the FX series was nowhere to be found when the 9700 was released, so yes, the GF4 and FX cards both got stomped.

As for ATI's inability to execute over two generations, of which you only mentioned one, The R520 was the only late GPU. R580 was developed by a completely separate development team and was on time. The R580 also had no problems competing against any SINGLE video card. You can spout on and on about the 7950GX2 (isn't it convenient how everyone forgets the whole 7800GX2 debacle?) and I'll say, yes, it was a superior performer, but it was two GPUs on two PCBs, not one and one, and the image quality still suffered just as with every other NVIDIA GPU of the generation.

You go talk about NVIDIA having top-to-bottom DX10 coverage, but can you show me a single low-range G8x card in the retail channel? Nope, guess you missed a market segment there, buddy. Also, yes, NVIDIA's got a refresh waiting for the release of the R600, but have you heard the price point? It's expected to be around $1000. I'm sorry, but that's not a direct competitor to a card that's expected to cost half as much.


By coldpower27 on 4/13/2007 10:29:04 PM , Rating: 1
The Geforce 4 was only a refresh of the Geforce 3 and was based on the same technology. ATI was late that generation as well, as the Radeon 8500 showed up around the time of the Geforce 3 Ti500.

The Geforce 4 did indeed beat the Radeon 8500 easily, as ATI never released a refresh. And that held for quite some time, until the release of the Radeon 9700.

In order to actually beat Nvidia, ATI had to skip a generation entirely and go to the next full generation.
The Radeon 9700 was never meant to compete with the Geforce 4; of course it would beat it, it's designed for DX9. Even the Geforce FX 5800, with its own issues, beat the Geforce 4 Ti, since it is the next generation.

After the Geforce FX generation Nvidia started firing on all cylinders and executing beautifully: Geforce 6 fixed all the Shader Model 2.0 issues and brought Shader Model 3.0 to the whole top-to-bottom family, all the way to budget. Later in this generation Nvidia introduced SLI, which doubled performance once again and couldn't be touched by ATI for quite some time.

The Geforce 7800 GTX was released months earlier than the competition, enjoying the same time-to-market advantage the Geforce 8800 enjoys now. They had the lead for several months, until finally in October the X1800 XT was released, which eventually proved quicker as the drivers matured.

There was no debacle with regard to the 7900GX2; that was meant as an OEM card only and never meant for the retail channels. The 7950GX2 was the card meant for DIY. ATI had the best single-GPU card, while Nvidia had the best single-slot-on-the-motherboard solution.

ATI's card, while having a superior AF implementation at the time, also drew a lot of power compared to Nvidia's equivalents. What Nvidia had going for it this generation was cheap production costs and lower power consumption, which means more cash flow for them. So the Geforce 7 series was executed fairly well.

The R300 situation was a one-time occurrence and will likely never happen again, as Nvidia hasn't screwed up with the 8800s. ATI has to be better this time around, being so late to market, or have better pricing to compensate for being so late.


By KernD on 4/14/2007 9:49:49 AM , Rating: 2
The Geforce 4 was not a refresh of the Geforce 3.
The GF3 is PS 1.1 while the GF4 is PS 1.4.
It's not the same chip with some minor tweaks, I think.
The real refreshes that happened are GF1->GF2->GF4MX and GF6->GF7; the GF7 has some differences in the shader units, but they are only instruction optimizations made based on shader profiling.

And as far as I know the GFFX had no issues with SM2? If you mean the poor performance, then you should know that was mainly caused by the fact that it had only 4 pixel shader units, but with 2 texture samplers each, while the Radeon 9700 had 8 pixel shader units.


By Ard on 4/16/2007 2:09:02 PM , Rating: 1
The differences btw PS1.1 and PS1.4 are minor at best. The GF4 was in fact a refresh of the GF3, or are you disputing AT's article?

GF1 -> GF2
GF3 -> GF4
NV30 -> NV35
NV40 -> NV45 (on-die bridge)
G70 -> G71
G80 -> G81


By Ard on 4/16/2007 2:03:58 PM , Rating: 1
So let me get this straight, you harp on about the GF4 being an entire generation ahead of the 8500 and then go on to laud the greatness of R300, which was 2 generations ahead? Riiight...if it makes you feel any better, the GF3 blasted the 8500 as well.

Sorry, but ATI has completely failed to execute for 2 generations: the R520 generation (which includes R580 and R580+, their third bite at the apple) and the R600 generation, which is going on 8-9 months late as we speak. The 7800GX2 is not even worth discussing because it's irrelevant to the conversation and was already taken care of below.

And please, don't quibble with semantics. The 7950GX2 was the fastest single card available, end of story. You can whine all you want about how there were 2 GPUs, but at the end of the day it was a single card competing at the same price point and that's all that really needs to be said.

Finally, take the time to read my post before you comment. Nowhere did I say that NVIDIA has a top-to-bottom DX10 lineup NOW. I said they are preparing one whilst ATI continues to delay R600. As far as NVIDIA's refresh is concerned, you think I'm going to believe the Inq's FUD regarding the price? They don't even try to disguise the fact that they're blatant ATI fanboys, much like those who rated my previous post down.


By eman 7613 on 4/16/2007 8:48:01 PM , Rating: 2
Everyone is flaming the R600 for being late, but I can see a lot of logical reasons for doing so.

1. Nvidia has yet to push out decent DX10 drivers, and they're a big corporation with experienced programmers, so chances are AMD is having the same trouble. Coming out and saying "Look, here is our new DX10 card! Just don't run it on Vista, cause we don't have any working drivers!" would not be taken well at all.

2. To my dismay, cause I like 'em, ATI was famous for these damned paper launches. Regardless of lateness, a new card that is launched with cards in stock and available will make them look much better than yet another paper launch.

3. Reduce production costs and finalize the product. You can always improve on what you've got, and we all know how fast technology goes and gets better once it is here.

4. Game testing: if you had an 8800 around launch time, you got to deal with wonderfully flawed drivers that made many new games spew out funky graphics that were missing textures or skewed in some way. No one wants to pull a Bill and unveil their grand new game and stand in horror as a BSOD appears instead.


By Talin on 4/13/2007 12:02:18 PM , Rating: 2
I'll admit, I'm NOT an ATI fan. Mostly because I've never been impressed with the QUALITY of the graphics they produce. One of the biggest reasons I select a video card, besides performance, is how the images appear on the screen. Performance may be the be-all, end-all, for many of you, but personally, I could care less what my 3dMark06 score is, if the graphics look like crap.

One thing I have to reiterate, that others have tried to say, is that the number of channels, amount of memory, etc., don't necessarily equate to "better". If I run an 8-bit processor with a 1GHz core against a 16-bit processor at 500MHz, you can get the same performance; plus, give that 16-bit processor a small instruction set, and you can actually output more data on the back side.
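
(A back-of-the-envelope version of that point, with made-up example parts; raw throughput is just word width times clock rate, so a narrower, faster part and a wider, slower part can come out even. The function name is just for illustration:)

    # Raw data throughput = word width in bytes * clock rate, here in MB/s.
    def throughput_mb_s(width_bits, clock_mhz):
        return (width_bits / 8) * clock_mhz

    print(throughput_mb_s(8, 1000))    # 1000.0 MB/s -- 8-bit part at 1 GHz
    print(throughput_mb_s(16, 500))    # 1000.0 MB/s -- 16-bit part at 500 MHz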

I'm not saying this card won't smoke the Nvidia 8800GTX, but I've also not seen any actual Price quotes to justify the raves about how much cheaper it will be than the 8800GTX. And while you're at it, let me say that I've been enjoying incredible frame rates in my online gaming, as compared to anyone I play with who has anything other than an 8800. I typically get 30-50% more fps than they do, which has caused some friendly insults to be bantered about, about my lack of lag. So while you are all waiting for your R600, I've been happily playing day-in and day-out for MONTHS and months, without having to wait for vaporware, so enjoy your wait, I'll keep playing and smoking the competition till your new toy comes out. Then if it really does what it says, I may get one of those too, and put my OLD 8800GTX in my spare PC.




By eman7613 on 4/13/2007 1:32:48 PM , Rating: 2
I got an 8800GTX because I heard of the higher quality, but I have to disagree. It's a good card, but it takes much larger hits from AA than my ATI card did and the quality is far worse (I'd compare 6x on ATI to slightly better than 8x quality on my nVidia), and the AF is much worse in general.

Personally I wish I had waited for this to come out instead of getting an 8800.


By coldpower27 on 4/13/2007 10:31:37 PM , Rating: 2
Most people would disagree about the AF. Nvidia's current AF implementation on the Geforce 8800 is superior even to that implemented on the Radeon X1800/X1900 series.

But in the end image quality is a subjective issue.


By Goty on 4/13/2007 4:56:10 PM , Rating: 2
Can you sell me a little of what you've been smoking? It's only with the release of the 8800 that NVIDIA has come anywhere near ATI in terms of rendering quality.


Folding@Home
By Kougar on 4/13/2007 2:50:56 AM , Rating: 1
48 shaders vs 320 shaders, a 256-bit wide memory bus and 512MB of GDDR4 vs a 512-bit wide bus and 1GB of GDDR4, 800MHz+ core clock speeds, and last but not least an unknown stream processor clock speed.

I have to wonder if even a Core 2 Duo core will be enough to keep the ATI 2900 HD fed with F@H data, let alone the PPD this guy will put forth! I am so ready with my watercooling setup to do some serious folding with one of these... :)




RE: Folding@Home
By PlasmaBomb on 4/13/2007 6:59:36 AM , Rating: 2
That's a good question, but the reason the GPU client utilises 100% of a core is a DX9 implementation issue, rather than the core being maxed out just keeping the card fed with data.


RE: Folding@Home
By DancingWind on 4/13/2007 1:02:44 PM , Rating: 2
Em... 48 shaders vs. 320 STREAM PROCESSORS.
I assume those 320 SPs are scalar units like in the GF8800, while a 'shader' in the DX9 sense is a 4-component vector unit (think Radeon X1***/GF7*** and older), so you actually need 4 stream processors to counter 1 full vector/pixel shader. Of course that's a cycle-for-cycle comparison, and at least on the GF8800 an SP runs at 1.35GHz, more than twice the average shader clock. :D
Still, that leaves 48 vs. 80 shaders, which is an impressive amount.
About DX10 and the CPU: DX10 WILL enable a drastic reduction of CPU load for graphics processing, because geometry shaders will allow geometric data to be reused for the next frame, whereas up till now the CPU had to resubmit the geometry every frame. So from a graphics rendering point of view, along with other DX10 enhancements, GPUs will become more independent of the CPU. Of course we will see how it works out in reality, but DX10 looks very promising.
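(A rough sketch of the cycle-for-cycle comparison above. The 8800 GTX figures, 128 scalar SPs at 1.35GHz, are public; the 320 stream processors are the reported R600 figure, and the ~800MHz shader clock used below is purely an assumption, since no clock has been confirmed.)

```python
# Cycle-for-cycle comparison sketched in the comment above.
# The R600 shader clock is an assumption; only the 8800 GTX numbers are public.
R600_SCALAR_SPS = 320      # reported stream processors, assumed scalar
R600_SHADER_HZ  = 800e6    # hypothetical clock, not confirmed anywhere
GTX_SCALAR_SPS  = 128      # GeForce 8800 GTX stream processors
GTX_SHADER_HZ   = 1.35e9   # 8800 GTX shader-domain clock

# Treating four scalar units as one DX9-style vec4 shader, as argued above:
print("vec4-equivalent shaders:", R600_SCALAR_SPS // 4)   # 80, vs. the X1900's 48

# Peak scalar operations issued per second (ignores MAD/MUL co-issue details):
print("R600 peak ops/s (assumed clock): %.1e" % (R600_SCALAR_SPS * R600_SHADER_HZ))
print("GTX  peak ops/s:                 %.1e" % (GTX_SCALAR_SPS * GTX_SHADER_HZ))
```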


RE: Folding@Home
By Kougar on 4/13/2007 5:22:20 PM , Rating: 2
I can't pretend to know the hardware well enough here, so I'll have to research this and see if I can figure it out. Good point, and thanks for the correction!


Interesting.
By Ard on 4/12/2007 11:09:29 PM , Rating: 2
Looks like it'll be a beast on paper but then it all depends on how AMD is defining their stream processors and how complex they are. In either case, it may be too little too late depending on what NVIDIA is packing in the 8800 Ultra.




RE: Interesting.
By Nightmare225 on 4/12/2007 11:17:48 PM , Rating: 1
Only time will tell whether it'll be worth it selling my GTX...


RE: Interesting.
By GlassHouse69 on 4/13/07, Rating: -1
RE: Interesting.
By Munkles on 4/13/2007 11:21:51 AM , Rating: 2
Glass,

While I am not one of them, some people can afford to spend top dollar for the very best, bleeding-edge technology and not bat an eyelash.

Who's to say he shouldn't have the very best, as long as he can afford to pay the price for it?


RE: Interesting.
By eman 7613 on 4/14/2007 3:49:24 PM , Rating: 2
And some of us do 3D modeling work, where a high-end graphics card is cheaper than going out and buying one of the overpriced Quadro or FireGL cards.


Seems like an ass kicker...
By Mithan on 4/13/2007 1:48:13 AM , Rating: 2
Seems like it could kick some major ass, but as always, how well it does in benchmarks is what matters.

I own an X1900XT, but I think I could see myself picking one of these up with the .65nm refresh.

Either way, I am looking forward to games like Quake Wars and Crysis, and I suspect that to run them at a good 60+ FPS at 1920x1200, I will need a kick-ass card like this. Right now, though, I am still playing WoW, so it's a bit of overkill :)




RE: Seems like an ass kicker...
By PlasmaBomb on 4/13/2007 6:55:34 AM , Rating: 4
A 0.65nm process is technically impossible.


Long PCB?
By TheDrD on 4/13/2007 9:45:46 AM , Rating: 2
Is that PCB huge or is it just me?




RE: Long PCB?
By johnsonx on 4/14/2007 11:06:09 AM , Rating: 2
it's the PCB that's huge... if you were huge you'd be a porn star


RE: Long PCB?
By Spoelie on 4/15/2007 9:21:49 AM , Rating: 2
Now I regret replying to this thread; rating that comment up would have been worth keeping my mouth shut.


I think we can all agree on a few things...
By EarthsDM on 4/13/2007 5:42:26 AM , Rating: 3
1) None of us will know what the performance will be until we have benchmarks.
2) None of us will know how much the cards will cost, or what the availability will be.
3) All of you posters need some sleep! When was this article posted? 2:00 AM?

I'm getting some shut-eye.

ps

Can't fan boys on both sides be excited and happy? Even if ATI beats the stuffing out of NVIDIA, it just means lower cost cards from the green team.




By eman 7613 on 4/14/2007 3:51:17 PM , Rating: 2
yes, but then they would be WRONG, and that is such a crime -.^


Glad they're getting rid of some X's
By akyp on 4/13/2007 3:58:04 AM , Rating: 4
HD 2900XT is so much better than X1950XTX XXtreme Xfire Edition.

:-)




Level 505
By Bigginz on 4/13/2007 2:53:14 PM , Rating: 2
Has anyone noticed the Level505.com website is gone? It was rumored to be ATI's "secret" website. There was a 12-page article comparing the R600 to the 8800 GTX and some other high-end cards. IIRC, the R600 scored 18% better than the 8800 GTX on most benchmarks. I think the 8800 had a slightly higher score on one or two benchmarks, but they said the R600 would probably do better once the drivers mature. There weren't any pictures of the R600, but who really needs to see a 12-inch-long red PCB to be impressed?

I have been waiting for this card since I found out Vista was going to use DX10. I'm an ATI fanboy, so there is no way I'm buying an NVIDIA card. I am using an X1900XT right now and am not in any hurry to replace it. It still works great for all the games I play. I bought FSX and was disappointed with the graphics, so I hope the DX10 patch will vastly improve it.

Also, I commend ATI for waiting this long to release the R600. I think nVidia jumped the gun by releasing a DX10 card months and months before any DX10 games came out. I am looking forward to Crysis and some of the other games coming out.




RE: Level 505
By KernD on 4/14/2007 10:15:21 AM , Rating: 2
Your last comment is just bad. Should NVIDIA have waited to release the card, when it was ready, because there were no DX10 games? Of course not; the card is great at DX9, for your information.

NVIDIA didn't jump the gun; the card was ready and in production, so it came to market. It took years to develop, so what were they supposed to do, hold it back and put out another GF7 refresh? No, the reality is that ATI is lagging behind: the X1000 series came out a while after the GF7 series, and the 2000 series is half a year behind the GF8 series. That's a plus for NVIDIA, since they will have a spring refresh out to stay on top, a little like ATI got the X1950 out to try to stay competitive.


320 Stream Processors!
By AggressorPrime on 4/13/2007 3:01:05 PM , Rating: 2
That is 2.5x what nVidia has. It looks like nVidia will have to play the GX2 card to get close to AMD.




RE: 320 Stream Processors!
By KernD on 4/14/2007 10:29:02 AM , Rating: 2
That is just PR. They don't have stream processors; it's similar to the Xbox 360 graphics chip, and that doesn't have any either. It has 48 vector processing units, but that chip actually had 64, with one quad disabled to improve yields. So guess what they must have done: they added one more quad, which gives 80, and how many floats can 80 vec4 units process... 320. This is just PR; it can process at most 160 instructions per clock, 80 vec3 plus 80 scalar, but still 320 floats.
Unless the units are vec4 + 1 scalar, in which case it would be 64 units. It can crunch more data than a GF8800 GTX, but not all shader code is vec4, and the shader units of the GF8 are clocked really high.
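(A toy sketch of the "not all shader code is vec4" caveat above; the instruction mix is invented purely for illustration, and the 80-unit figure is the comment's own claim rather than a confirmed spec.)

```python
# Toy illustration of why instruction width matters for a vec4-style design.
# The mix of instruction widths below is made up for illustration only.
mix   = {"vec4": 0.4, "vec3": 0.3, "scalar": 0.3}   # fraction of instructions
width = {"vec4": 4,   "vec3": 3,   "scalar": 1}     # components actually used

# A vec4 SIMD unit retires one instruction per clock, so narrower instructions
# leave lanes idle; purely scalar units (as on the GF8) never waste lanes.
avg_components = sum(mix[k] * width[k] for k in mix)
print("average components per instruction: %.2f" % avg_components)                    # 2.80
print("vec4-unit lane utilization on this mix: %.0f%%" % (avg_components / 4 * 100))  # 70%
print("scalar stream processors stay near 100% on the same mix")
```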


YAWN
By kdog03 on 4/13/07, Rating: 0
RE: YAWN
By LaGUNaMAN on 4/13/2007 11:26:35 AM , Rating: 2
Yeah, I could definitely relate. But on the bright side, it's better than hearing nothing.


its gonna rock fo sho!
By otispunkmeyer on 4/13/2007 4:12:05 AM , Rating: 2
But let's just wait a second and think.

NVIDIA's G80 has been out now for what, 5-6 months? For that time it's had the top-of-the-range graphics market completely to itself, getting fat. At six months old it's heading for a mid-life crysis! It needs to re-invigorate itself...

It's gonna get a shave, a sharp haircut, a few applications of "Just For Men", some smart "wannabe architect" rimless spectacles, and a new, hip & happening wardrobe.

Then it's gonna go out and buy a motorcycle and an iPod.

The 8800 is due a spruce-up... unless ATI have aimed exceptionally high (and looking at the specs, I'd say that's what they're trying), they're still gonna be surpassed by NVIDIA, who have a six-month jump on them.

Unless, of course, NVIDIA have actually been sat there doing naff all for the last 5-6 months, getting enormous off the profits and staging all-night pizza/beer/gaming binges.




clock speeds?
By venny on 4/13/2007 6:15:27 AM , Rating: 2
So what are the confirmed clock speeds? The article didn't say.




What are ATI and NVidia thinking?
By Toebot on 4/13/07, Rating: 0
By Targon on 4/13/2007 11:52:45 AM , Rating: 2
Not every game runs better in a console environment compared to a PC. That's the real problem with many of these types of arguments. And now that consoles are up into the $500+ range, they start to lose their appeal.

When it comes to price, another problem that many don't see is the cost of memory. If you want a video card with 512 megs of memory, then you can expect the price to be less than a card with a full gigabyte of memory on it. You also need to look at the price and type of the memory. 1 gig of GDDR 4 will cost more than 1 gig of GDDR 3, which costs more than 1 gig of DDR-2 memory. In addition to this, just because AMD/ATI will be offering the HD 2900XT with 1 gig of GDDR 4 memory doesn't mean we won't see other video card manufacturers that offer an HD 2900XT with 512 megs of GDDR 4 memory(the lower card from AMD/ATI will have GDDR 3 I believe).

If all you play are console-type games, then yea, go for a console. But you don't want to play a game designed for a PC on a console because the controls just don't do justice to them. It works both ways.


An interesting tidbit over at the INQ...
By DEVGRU on 4/13/2007 11:06:14 AM , Rating: 1
A little something for those of you who don't read The Inquirer...

http://www.theinquirer.net/default.aspx?article=38...

"Meet AMD's 65nm R600 - the R650

Analysis Making a dynamic product mix

By Theo Valich: Wednesday 11 April 2007, 11:18

WHEN WE FIRST wrote that AMD is working on 65nm R600 chip, half the firm itself did not have an idea of what was going on.
We never wrote that 80nm GPU would be scrapped, but we did state that it would be quickly surplanted by a 65nm part.

Now it seems our old chum Fudzilla has been converted to our point of view and we can now finally go out and say what 65nm R600 is and what will happen with this chip.

In short, AMD's CPU manufacturing strategy will be introduced to the world of GPUs. This should result in more affordable and better performing parts. Even though AMD is not using its own manufacturing facilities or facilities that use AMD's own procedure called APM, some elements of it will be used to get flexible manufacturing with GPUs as well.

First, R600 is going to be more affordable than any GPU part after Radeon 9700Pro and 9800Pro. AMD wants to undercut current high-end price bracket by $100-150, so expect an 8800GTX performing part for the price of 8800GTS. This is not all.

The company is getting the 65nm part on line as soon as possible, as this will enable savings in power well within the 60-100 Watt range, depending on what part are we talking about.

As we already wrote months ago, AMD developed four completely different PCBs and the company wants to cover every possible demand from its partners. We have talked with mid- and high-ranked executives and they told us that AMD plans to bring its customer centric mantra to every aspect of business.

This also means redefining the way graphics and chipset wars are fought. And we have to say that we cannot wait to see the slaughter-fest between AMD, Intel, and Nvidia in 2009. Somehow, we feel that AMD is the best-positioned company, especially if the flexibility that we were told about actually ends up implemented across the board.

If 65nm high-end GPUs end up on boards for $300, $350, to $400 a new era will begin indeed. Some of our AMD sources claim that most price brackets are achievable, the only real limit is yield. If AMD gets great yields from the 65nm R650, company will ship not hundreds of thousands of high-end chips, but rather millions and millions of these chips, bringing prices down and redefining its GPU sales profile. µ"




By James Holden on 4/13/2007 1:33:55 PM , Rating: 1
Too bad it's fake.


well i liked the code name
By indianpunk on 4/13/2007 12:48:57 PM , Rating: 2
Well, R600 was a better name; a fresh start should have been the goal for AMD, I guess, but they stuck with tried and tested. Anyway, I am getting a new mobo with the onboard AMD 690G chipset, so I guess this will be my next card. I'll have to wait a while longer for the mid-level versions of it, though; small pockets, you see, can't afford the full beasts.




Added value
By hlper on 4/13/2007 1:17:03 PM , Rating: 2
I am surprised to see no one mention the potential for sound processing. Imagine that you could fit a top quality video card and sound card in only 2 slots. Think of that vacant PCI slot. Ah the possibilities... That takes me back.




...
By WileCoyote on 4/13/2007 1:27:00 PM , Rating: 2
Amazing how a 356-word news article can generate 7,500+ words of speculation.




By TechLuster on 4/13/2007 6:48:18 PM , Rating: 2
First the R600 was supposed to have 64 shaders and was called the X2800... then it was 128 shaders and the X2900... now it's 320 and the HD 2900. First we thought it was an 80nm part... then it was supposed to be 65nm... then AMD apparently decided to ship a few 80nm parts and quickly replace them with 65nm.

First the 8600GTS was called the 8600 Ultra...then it was the 8600GTS...then instead of 64 shaders, suddenly it was supposed to have only 32.

People, listen up: these name, spec, price, release date *RUMORS* vary so wildly and change so frequently that you just can't expect them to have much, if any, relation to what's finally going to happen. So please STOP trying to draw definitive conclusions such as "OMG the X2975XTX^2 is going to PWN the Geforce 8900GTXS Ultra!!!111"




By scrapsma54 on 4/17/2007 11:47:29 AM , Rating: 2
GDDR4 and the 512-bit memory interface should put the R600's bandwidth at around twice what the GTX pushes out. Plus, with over 2x the stream processors, the R600 should be able to get great performance out of its shaders. Bandwidth looks to be sorted; shader performance is the open question.
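(For reference, a rough peak-bandwidth calculation behind the claim above. The 8800 GTX numbers, 384-bit bus with 1.8Gbps GDDR3, are its published specs; the 512-bit R600 bus is the reported figure, and the 2.0Gbps GDDR4 data rate below is a guess, since final memory clocks aren't confirmed.)

```python
# Rough peak memory bandwidth comparison for the claim above.
# GTX figures are published; the R600 GDDR4 data rate is an assumption.
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak GB/s = bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

gtx_bw  = peak_bandwidth_gb_s(384, 1.8)   # 8800 GTX: 384-bit, 1.8 Gbps GDDR3
r600_bw = peak_bandwidth_gb_s(512, 2.0)   # R600: 512-bit, assumed 2.0 Gbps GDDR4

print("8800 GTX peak:       %.1f GB/s" % gtx_bw)    # ~86.4 GB/s
print("R600 peak (assumed): %.1f GB/s" % r600_bw)   # ~128.0 GB/s
print("ratio: %.2fx" % (r600_bw / gtx_bw))          # ~1.48x with these clocks
```

With these assumed clocks the gap works out to roughly 1.5x rather than a full 2x; whether it gets closer depends entirely on the GDDR4 data rate AMD actually ships.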




Are you kidding me....
By lompocus on 4/13/2007 5:45:05 PM , Rating: 1
Dang, why does ATI have to delay this by so much? There are driver problems, sure, but couldn't it have come to market a week or so ago? And WHY oh WHY did this news come the DAY after I put together my new system with an 8800 GTS??? WHY DO YOU HATE ME!!!!1111

On a nicer note, who knows where I can sell my 8800gts 320mb? It now is teh suckzorz.




fake
By James Holden on 4/13/07, Rating: -1
RE: fake
By elmikethemike on 4/17/2007 9:20:33 AM , Rating: 1
It's funny, the people who like to badmouth the R600, especially since it's not even out yet. The only people who do are those who will never be able to afford one and those who were dumb enough to buy the first DX10 cards that appeared on the market, i.e. the 8800 GTX.

Face it, the R600 is going to be a good card. Get over it. Sorry, it's not my fault you didn't have the patience to wait for ATI's DX10 offering before you stupidly decided to buy the first thing available, which, oh by the way, wouldn't even be needed until a year from its original release date. Anyone who bought a GTX before ATI's cards debut should be kicking themselves, not badmouthing something that's going to be better.


RE: fake
By GJM on 4/19/2007 4:24:10 AM , Rating: 2
If all the people who own an 8800 GTX should kick themselves, then so should all those who buy the R600 before NVIDIA releases their 8900 GTX. Why wait, when you can have the best all the time simply by selling and upgrading? It's a bit like renting: to upgrade you have to pay more, so if you can afford it, you do it. You can easily get left behind in the PC race if you simply wait for a better product; that happens all the time.


"We’re Apple. We don’t wear suits. We don’t even own suits." -- Apple CEO Steve Jobs
