
NVIDIA attempts to meet GTX 275 demand and compete with the Radeon 4890

Last week, NVIDIA launched its new GeForce GTX 275 video card to compete against the ATI Radeon HD 4890. The launch was originally targeted for April 14, and our sources indicated that there were only around 5,000 units available on launch day.

However, one of our sources emailed us some links last night to e-tailers Newegg and Mwave to prove North American availability of the GeForce GTX 275.

Another email DailyTech received claimed that NVIDIA has "shipped 10s of thousands of chips to our board partners. Our board partners are in mass production and they have already started to make these available through retail and e-tail channels."

"Timing of course, depends on the region. But clearly, the floodgates are open," the source elaborated.

Based on the GT200 architecture, the GeForce GTX 275 features 240 processor cores operating at 1404 MHz, 80 texture processing units, a 448-bit memory interface, and an 896 MB frame buffer. NVIDIA is positioning it between the GeForce GTX 260 and the GeForce GTX 285 in price and performance. Benchmarks have shown slightly better performance than the Radeon 4890 in most gaming applications.

Pricing, on the other hand, favors the Radeon 4890. Using mail-in rebates, AMD's graphics division has already lowered the price to under $230. Further price cuts are likely, as the RV790 chip in the Radeon 4890 is much smaller and cheaper for AMD to manufacture. The G200b chip in the GTX 275 has 1.4 billion transistors, while the RV790 has only 959 million. Both are manufactured on a 55nm process.
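The die-size cost gap can be made concrete with the standard first-order dies-per-wafer approximation. A quick sketch, using assumed die areas from period reporting (roughly 282 mm² for RV790 and 470 mm² for the 55nm GT200b; neither figure appears in this article):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """First-order dies-per-wafer estimate (ignores defect yield)."""
    r = wafer_diameter_mm / 2
    gross = math.pi * r * r / die_area_mm2            # wafer area / die area
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

print(dies_per_wafer(282))  # RV790 (assumed ~282 mm^2)
print(dies_per_wafer(470))  # GT200b (assumed ~470 mm^2)
```

The smaller die yields roughly 1.7x as many candidates per wafer before defect yield is even considered (and yield also favors smaller dies), which is the headroom behind the price cuts described above.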


[Specification comparison table. Columns: GTX 280, ATI Radeon HD 4890, GTX 275, ATI Radeon HD 4870, GTX 260 Core 216, ATI Radeon HD 4850, GTS 250. Rows: Stream Processors, Texture Address / Filtering, Core Clock, Memory Clock, Memory Bus Width, Frame Buffer, Transistor Count, Price Point. Only a few cell values survive: Texture Address / Filtering entries of 80 / 80, 80 / 80, and 64 / 64, and Memory Clock entries of 3.9GHz GDDR5, 3.6GHz GDDR5, and 1986MHz GDDR3.]


Doesn't matter...
By 7Enigma on 4/7/2009 9:54:37 AM , Rating: 5
In this round the 4890 seems to be the better buy for 19-24" display resolutions, and supposedly has very nice overclocking potential as well. I just built a gaming rig back in January with a 4870 512meg, but if I was building today would grab the 4890 over the 275 without a second thought.

RE: Doesn't matter...
By chrnochime on 4/7/2009 10:22:21 AM , Rating: 2
Got voted down by the Nv fanboys I suppose, even though your points seem to be valid.

On a separate note, since it's only been a few days since they said there were no cards, the only shipping method I can think of that gets the cards to the US that fast is air freight. Shipping by sea takes longer than a week, IIRC, for pallets traveling from China (where the boards are manufactured) to the States, where the re/e-tailer warehouses are. Are these cards profitable enough to justify the air shipping?

RE: Doesn't matter...
By chrnochime on 4/7/2009 10:23:28 AM , Rating: 2
I mean "gets the cards here from China".

RE: Doesn't matter...
By teohhanhui on 4/7/2009 11:20:21 AM , Rating: 2
I think they can't afford to lose (looking at the price, they're already on the losing end).

RE: Doesn't matter...
By The0ne on 4/7/2009 1:56:22 PM , Rating: 2
1 week is optimistic :) We usually don't get our products until a few more weeks later :D

RE: Doesn't matter...
By hohowan on 4/7/2009 3:15:40 PM , Rating: 2
Asia to US ocean routes depend on carriers and forwarders, but generally you will *NOT* get anything even close to one week transit time; 15-20 day transit times are more typical. Of course, depending on west or east coast, unlading & customs clearance will tack on time as well. Don't you just hate the logistics people. :)

RE: Doesn't matter...
By mindless1 on 4/7/2009 11:29:32 PM , Rating: 2
What if "no cards" only meant the boat hadn't made it to port yet?

RE: Doesn't matter...
By mindless1 on 4/7/2009 11:38:15 PM , Rating: 2
Rough guesstimate: 10,000 cards won't cost even a buck apiece to ship, so yes, there's probably an extra buck of profit in getting them to market quickly while they'll sell at a price premium.

RE: Doesn't matter...
By Drexial on 4/7/2009 10:34:21 AM , Rating: 3
From what I understand they beefed up the insulation and leads a bit to handle higher voltage. So the overclocking potential should be quite good considering even the standard 4870 can easily overclock above where this card is clocked.

However, some tests haven't gone as well.

The 275 has also shown huge overclocking potential, so the point seems moot.

It's for the most part neck and neck. Buy the one with the best price at the time and you'll be set.

RE: Doesn't matter...
By Dianoda on 4/7/2009 10:47:40 AM , Rating: 5
I might lean towards ATI for a different reason, or perhaps more clearly, lean away from NVIDIA (though you make a valid point regarding overclocking potential). I will say that Anandtech made it pretty clear that in terms of the FPS the cards are pretty much even. My knock against NVIDIA is the company's largely incompetent management and a PR and marketing staff that make lawyers look honest by comparison.

I might give the company more credit if it didn't invent new products that are in fact not new at all (9800GTX+ vs. GTS 250 in particular), or put out press releases claiming something is news when there is either no new information or the truth has been stretched to the breaking point. I've never seen a company that felt it had to generate so much steam about itself. Face it, the company acts more like a junior high school girl with self-confidence issues than an organization run by professionals. I wish someone could tell the company to go wait out in the hall and then politely explain to them that it's alright to tell the truth or just keep its mouth shut when there is nothing of relevance to say.

I can honestly say I feel sorry for all the NVIDIA engineers for having to put up with the constant flow of BS that spews forth from NVIDIA's marketing department.

Oh, and whether or not GTX 275 availability really is getting better, I say thanks for the info NVIDIA, but I wouldn't have believed it if I couldn't confirm it independently by way of newegg (btw: as of the time of this post, @ newegg, only the EVGA model remains in stock...). See, even NVIDIA PR staff knows that no one will just "take their word for it" anymore...

RE: Doesn't matter...
By Proteusza on 4/7/2009 11:03:21 AM , Rating: 2

Nvidia's engineers are competent enough; it's their marketers that make all of them look bad.

Not that ATI is completely innocent of marketing shenanigans, but Nvidia has lately been in the limelight for such antics.

RE: Doesn't matter...
By RamarC on 4/7/2009 12:49:04 PM , Rating: 2
yeah, nvidia's re-invention of the same product over and over is wearing thin. even their vendors are having a hard time differentiating the product line. a quick tour of newegg showed the following:

9800GT 1GB, $150
9800GT 512MB, $130
9800GTX+ 512MB, $135
GTS 250 512MB, $130
GTS 250 1GB, $170
GTX 260 896MB, $180

that's a lot of models packed into a $50 price range.

RE: Doesn't matter...
By The0ne on 4/7/2009 2:01:29 PM , Rating: 2
Agreed also. While my last few generations of video cards have been Nvidia, I'm switching for similar reasons. Personally, I think Nvidia has just gotten cocky from being in the lead for a while. The cockiness shows in what you've described. And like you, I can't stand that kind of manipulation.

RE: Doesn't matter...
By TO on 4/7/2009 1:41:05 PM , Rating: 2
So which card would you guys recommend for a beginner who is going to use the card for gaming without any overclocking, in a Vista x64 environment, with cost as the top priority?

RE: Doesn't matter...
By erple2 on 4/7/2009 6:21:58 PM , Rating: 2
As with everything, "it depends on what's important to you." Several sites frequently run "video card buying guides" and will make recommendations based on what matters to you. Ultimately, though, it boils down to price, i.e. how much money are you willing to spend? More money almost always translates to more performance. It's a question of diminishing returns: spending 20% more to get 20% better performance works up to a point (that point is about 200 USD). Past it, you're spending more than your return in performance.

I'd strongly recommend you look at actual review sites, NOT personal blogs or user reviews (which are about 99% useless - "My card is awesome!" "Your card sucks!"). Even then, you have to be a little bit careful. Some sites have a strong bias in one direction, other sites have biases in another direction. If you're lucky, you'll find a site that has no bias, or at least justifies their bias in some meaningful way ("we prefer these cards because ...").

I know one site has had a recent "buyer's guide" for video cards that seems to be fairly well done and is consistent with many other sites (their recommendations at various price points are consistent, at least).
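The diminishing-returns point can be illustrated with a toy calculation. The prices and frame rates below are made up for illustration only, not benchmarks of any real card:

```python
# Hypothetical cards: name -> (price in USD, average FPS in some game).
cards = {
    "budget":   (100, 40),
    "midrange": (200, 70),
    "high-end": (400, 85),
}

# Frames per dollar falls off sharply past the midrange tier.
for name, (price, fps) in cards.items():
    print(f"{name}: {fps / price:.2f} fps per dollar")
```

With these numbers, the budget card delivers 0.40 fps/$ and the high-end card only 0.21 fps/$, which is the "spending more than your return" effect described above.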

Interesting times we live in...
By Proteusza on 4/7/2009 10:30:14 AM , Rating: 4
It seems that ATI is really hammering away at Nvidia, pricing its products lower than Nvidia wants to because its chips are so much cheaper to produce. It's good for gamers, and will hopefully lead to better next-generation cards (which we hear are just around the corner).

One thing, though - the chart is a little misleading. The ATI cards have their memory clocks shown as effective rates - i.e. 3900MHz - while the Nvidia cards don't. This is even the case with the 4850, which uses GDDR3 memory, so its effective memory clock is simply double its actual clock - just like all the Nvidia cards, which show only the actual clock and not the effective one.
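The effective-versus-actual distinction is simple arithmetic. A sketch (the 900MHz/256-bit figures below are the commonly quoted HD 4870 numbers, used here for illustration):

```python
# GDDR3 is double-pumped (2 transfers per clock); GDDR5 is conventionally
# quoted at 4 transfers per command clock.

def effective_rate_mhz(actual_clock_mhz: float, mem_type: str) -> float:
    """Effective data rate implied by the actual memory clock."""
    transfers_per_clock = {"GDDR3": 2, "GDDR5": 4}
    return actual_clock_mhz * transfers_per_clock[mem_type]

def bandwidth_gbs(effective_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s for a given bus width."""
    return effective_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(effective_rate_mhz(900, "GDDR5"))   # 3600.0 -> the "3.6GHz" in the chart
print(effective_rate_mhz(993, "GDDR3"))   # 1986.0 -> the "1986MHz" in the chart
print(bandwidth_gbs(3600, 256))           # 115.2 GB/s on a 256-bit bus
```

Quoting one vendor's actual clocks against the other's effective rates, as the chart does, makes the numbers look two to four times further apart than they really are.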

Also, if you are going to include the number of shader processors on each card, please also include the shader clock. It makes a comparison between ATI and Nvidia graphics cards more meaningful.

I'm quite fine with my 8800GTS 640 right now, but if I had to upgrade it would probably be a 4850 or 4870, depending on price and power issues. CUDA and PhysX don't interest me - PhysX just hasn't gained enough market penetration. Hopefully OpenCL takes off.

Availabilty could only improve...
By Amiga500 on 4/7/2009 10:42:12 AM , Rating: 2
From what I gather, it would be virtually impossible for it to have got worse!!!

Improvement in availability does not equate to easily available!

By Mojo the Monkey on 4/7/2009 12:27:15 PM , Rating: 2
I see at least EVGA available for immediate purchase on the egg right now...

By FXi on 4/7/2009 10:10:23 PM , Rating: 2
What I'm looking for are the single board 295's. Those are going to be very handy for both aftermarket air coolers as well as watercooling. Current 295's are good, but a single board will be a big improvement. So I have to hope that if 275's start appearing in more quantity that the improved 295 can't be far behind.

By Hacp on 4/8/2009 3:05:17 AM , Rating: 2
That's the only thing keeping ATI alive right now. Once Nvidia makes a chip that can handle it, ATI's offerings will lag behind.

By Amiga500 on 4/7/2009 10:46:38 AM , Rating: 4
If you factor in the PCB design, CUDA support, PhysX support, and engineering prowess

PCB design - is that the same manufacturer whose PCBs kept breaking a while back?

CUDA - Ever hear of OpenCL? The free alternative to CUDA? The alternative that will be used by many more companies unwilling to pay unnecessary royalties?

PhysX - Show me 5 games it is worthwhile on.

Engineering prowess - Eh? The same engineering prowess that caused over 75% of Vista crashes?

Don't make me laugh.

By Amiga500 on 4/7/2009 11:26:56 AM , Rating: 3
1. Nvidia have had problems far more recently than any other. It would be extremely poor judgement to assume they will not have issues in the future.

2. Oh... personal opinion eh?

3. "Worthwhile on" - not what Nvidia tell you it works on.

4. Why reference crashes on the most recent MS operating system? Because a new OS is due out within the expected lifetime of any newly purchased GTX 275, so support for new software is worth considering.

Your bias blinds you.

By Amiga500 on 4/7/2009 11:51:56 AM , Rating: 5
You are right, my opinion is slightly biased toward superior engineering and superior technical merit. Your lack of understanding in those grounds only means you are throwing your own hard earned money away, well, to each his own.

That's good. Keep lecturing the guy with the doctorate in engineering about technical matters.

Good stuff, kid. Now get back to your homework... what is it today? Advanced crayon?

By luceri on 4/7/2009 12:56:35 PM , Rating: 3
Just look at the last picture from this page: If you understand hardware, you would just see that you aren't getting much from ATI. Just because the benchmarks FPS is about the same between Radeon HD4970 and the GTX275, doesn't mean that they are the same "grade". If you factor in the PCB design, CUDA support, Physix support, and engineering prowess, the Radeon RV770 isn't even in the same league.

Err. Are you trying to say because the PCB and die are larger that you're getting more from nvidia? you referenced a picture and size, and think you know what you're talking about?? wow.. Back to engineering 101 with you. Learn the basics before assuming, and I don't need to mention what happens when you assume, you make an.. Yeah. Congrats.

So yeah, NVidia is the big bad die up against the smaller but higher-frequency ATI; back to old school, it seems. However, ATI is probably in front here, sorry to say. ATI engineering is bad? Proportionality, friend: 205mm2 (rv870), 137mm2 (rv740), 68mm2 (rv810). The ratio of units must be kept in proportion to the power envelopes and frequencies of adjacent performance sectors, now and in the future: build the smallest 256-bit chip you can, let the memory controller on trickle-down 128-bit and 64-bit chips be compensated by trickled-down faster memory, and in doing so leave room for higher frequencies to compensate on the high end if necessary.

Frequencies can be cranked up on the smaller die to compete with the architectural additions the clock speed is up against. It's a modern, fundamental change and the future. Unfortunately the cost of this is your power bill, but it works. Before you pretend you know anything about engineering and call this a brute-force method a la Pentium 4 Northwood/Prescott etc., do some research and learn that the TDPs would be roughly the same per unit of performance. This isn't a mistake on ATI's part.

A GTX285 at stock would likely require a rv790 @ ~1100mhz. The original GTX280 has a 236W TDP, a 1050mhz rv790 theoretically would have a tdp of 235W. On the flip side, at GTX285's TDP, you'd end up with a rv790 @ 910mhz, which would compete well with the original 280.

GTX280 TDP ~= rv790 tdp @ GTX285 performance
GTX285 TDP ~= rv790 tdp @ GTX280 performance
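The extrapolation above looks like a simple linear scaling of TDP with clock speed. A sketch, assuming a stock RV790 baseline of 850MHz at a 190W TDP (my assumption; the thread doesn't state its baseline):

```python
def scaled_tdp(base_tdp_w: float, base_mhz: float, target_mhz: float) -> float:
    """Naive estimate: power scales linearly with clock at fixed voltage."""
    return base_tdp_w * target_mhz / base_mhz

# Scaling the assumed 850MHz/190W baseline up to 1050MHz:
print(round(scaled_tdp(190, 850, 1050)))  # 235
```

This reproduces the ~235W figure quoted above, but note it ignores the voltage bumps that high overclocks usually need (dynamic power goes roughly with V², so real TDP would climb faster than linear).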

ATI has a solid, economically sound design, with the ability to crank the voltage/frequencies if necessary to compete with the "ultra high end" competition. We'll temporarily just forget how Nvidia and its partners have been roughly breaking even or turning a loss on mainstream parts for the sake of market share. Is that because of their superior engineering prowess? Building a behemoth that's not scalable and too expensive to make mainstream? Yeah, okay. Great PCB design!

I don't need to work for ATI to know that a) you're not knowledgeable about engineering or hardware by any means beyond putting a computer together and knowing what the parts do (because you obviously don't know how they do it), and b) your opinion about ATI's engineering prowess and PCB design is uneducated and based on assumptions (remember to steer clear of these! Remember what they do!!).

I'm still dumbfounded by the fact you referenced a picture of the two cards and tried insinuating you get more from the larger Nvidia PCB+Die... Why don't we throw an old 2900xt into that picture, it'll dwarf them both. Is that visibly getting more? Come on...

Your PhysX arguments -- okay, I'll let you win this one, despite the fact I think it's worthless once OpenCL matures.

Your best argument in my opinion would have been TWIMTBP with NVidia's marketing arm paying developers to design games to play well with their platform. Works well enough that when ATI gains an edge from a patch in these games the patch gets pulled almost immediately.

You also think ATI/AMD is going to be gone?!? At least they're turning a profit on their graphics cards (due to better engineering and a scalable design, I might add). Regardless, if you're referring to their CPU division, yeah, they're not doing extraordinarily well, but Intel won't let AMD die; it's not in Intel's best interests. Worst case scenario: AMD gets bought out by IBM (unlikely), IBM expands to compete more directly with Intel in the CPU market, and keeps the profitable ATI division since it's the brightest spot AMD's got. Exactly what happens is more debatable, but I can almost guarantee you AMD/ATI is not going to disappear anytime soon.

By wuZheng on 4/7/2009 10:58:44 AM , Rating: 2
engineering prowess

Wow... that's pretty arrogant of you, presuming to say that about a company you probably don't work for.

Furthermore, if you really understood the hardware engineering process, maybe you wouldn't have overlooked why exactly the RV770 chip has been successful. Good chip design is based on many factors, I suggest you read Anandtech's articles documenting the RV770's design process before jumping to your own conclusions about "engineering prowess."

Btw, CUDA and PhysX are for the time being "just for show" technologies that bring very little value to the average end-user. Software can easily be replicated by another competing firm, or even potentially done better.

By wuZheng on 4/7/2009 11:28:55 AM , Rating: 2
If you don't work for AMD yourself, then you have no right to criticize my opinion on AMD's engineering prowess.

How does that logic work? I merely commented that you were being presumptuous to assume to be able to determine that nVidia has the definite edge in "engineering prowess." And again, you clearly don't understand the entire concept of engineering. Engineering involves design, yes, and nVidia does very good design, as does ATI. However, it also involves designing around conditions in the market. ATI positioned its chip to be able to be produced in high volume with a very high yield. Cost/benefit is very much involved in the engineering process. To be able to design logic with no constraints is something a high-level university student could do. To design logic within constraining boundaries is effective engineering. Something ATI has done very well with the RV770.

copy and paste

So the stream processors in the GT200 series are radically different than the ones in the G80 design? Thanks for playing.

By Amiga500 on 4/7/2009 11:54:32 AM , Rating: 5
It is much much harder to design, simulate, and manufacture a functional 1.4 Billion transistor chip(GT200) than a 965 Million transistor chip(RV770).

When the 965 million trans part performs roughly on par with the 1.4 billion trans part - I would say to the team that designed the smaller component - BRAVO.

A more efficient design whatever way you want to cut it.

But you, not actually being an engineer, wouldn't understand.

By Natfly on 4/7/2009 12:30:53 PM , Rating: 2
ATI cheats with DDR5 bandwidth on a technically weaker 256bit bus.

Ok, now you sound like you're just trolling. Using more expensive GDDR5 allows higher performance with less power, less heat, and a smaller bus. If nVidia products are engineered so much better than ATI's, why are they using old GDDR3 tech?

By Proteusza on 4/7/2009 12:52:20 PM , Rating: 2
The troll fails again.

Tell me you aren't silly enough to believe ATI manufactures its own memory? Everyone - and that includes Nvidia - uses chips manufactured by the likes of Samsung and Hynix. ATI and Nvidia have never manufactured their own memory.

Hence, it doesn't matter whether ATI or Nvidia wants to go to GDDR5 first - they just need Samsung and Hynix to produce it for them. And since GDDR5 has been adopted for the mass market, by a little product called the 4870, Samsung/Hynix are turning a profit on it! Isn't business wonderful?

Please, troll, do some more of that learning stuff so you don't embarrass yourself any more.

By Natfly on 4/7/2009 1:40:30 PM , Rating: 2
nVidia has about twice the market share of ATI, but neither is small by any measure. ATI managed to have tens of thousands of 4890s ready for launch.

Plus wouldn't it be nice to let ATI test the fabs and capacity first as a lab rat?

If anything nVidia should step up testing on their end, maybe they would have caught that bad solder/underfill problem that plagued so many gpus.

DDR5 hasn't hit the necessary price and volume inflection point to be adopted as mass market memory type.

Wrong. It clearly has, judging by the success, scale, and profit ATI is making off of the 4870 and 4890.

By Proteusza on 4/7/2009 12:32:41 PM , Rating: 2
Oh please, you aren't an electrical and software engineer any more than I'm an astronaut. You're just a troll.

Please - do you really think the 400 million extra transistors in GT200 are for the wider memory bus? Don't be stupid.

ATI "cheats" with GDDR5? Ha ha ha, now it's pretty obvious that you are a troll and a liar - how is it cheating? ATI spends more money on memory; Nvidia spends more money on memory interfaces. Who is cheating? If you were an electrical engineer, you would know that the reason ATI doesn't use a 512-bit interface is the complexity required on the PCB - you need traces to route all that information around the board, and more complexity means a higher chance of failure. ATI gets around that by using a narrower bus with GDDR5 memory to provide higher bandwidth.

Fact of the matter is, a single GT200 is faster than a single RV770. Even ATI won't deny that. But the point is, ATI was able to price the 4870 at a point that made the GTX 260 obsolete overnight. That's because it's an efficient architecture, and not too expensive for ATI to produce. Read Anandtech's original 4850 and 4870 review. But I guess in your selective memory, you have forgotten that ATI's good products forced Nvidia to drop their prices to compete. So when you sit using your Nvidia graphics card (if you can afford one), think of this: the reason you paid as little as you did is that ATI forced Nvidia to drop prices by releasing its cards for cheaper.

But it's quite clear that you are either A) a troll who knows all this but gets off arguing with random people on the internet, or B) a die-hard fanboy who simply won't listen. Take your pick.

By luceri on 4/7/2009 1:16:18 PM , Rating: 2
Your facts are terrible on this one. The 4870 is turning a profit. What isn't turning a profit is Nvidia's mainstream lineup, because they built a beast and can't scale it down. What you perceive as "superior engineering prowess" is costing Nvidia money; in my opinion, that's poor engineering design. The price war was initiated because ATI can afford it: they still turn a profit, and they know Nvidia's cost structure can't compete at the same prices, so Nvidia is forced to sell at a loss. Like you said, "Anyone can sell a dollar bill for 3 quarters, but that person/entity must be stupid." Thanks for validating our point that you don't know what you're talking about.

By Natfly on 4/7/2009 1:17:55 PM , Rating: 2
AMD is so desperate that they are willing to sell the Radeon HD4870 below cost.

Source? How about this, AMD is making so much money off the 4870 they tried to lower the price to gain market share, but their partners didn't want to because it is selling so well.

C)A person who is sick of illogicality of the choices some people make ...

Oh the irony.

By ClownPuncher on 4/7/2009 1:48:06 PM , Rating: 2
The fact that the GTX 275 reaches 90C under load makes me choose the 4890.

By Subzero0000 on 4/7/2009 10:21:00 PM , Rating: 1
So... tshen83 chose (B): "a die hard fanboy who simply won't listen".

You quote only the sentence that is advantage to your point - "Fact of the matter is, a single GT200 is faster than a single RV770. Even ATI wont deny that..."

But ignore completely on "Thats because its an efficient architecture, and not too expensive for ATI to produce."

Geeze... I suppose no one could ever win an argument against you, 'cause your selective memory is really impressive.

By wuZheng on 4/7/2009 12:50:58 PM , Rating: 2
The extra transistors on the Nvidia chip is used to support a much wider memory bus 512bit vs 256bit on the RV770, and custom CUDA and Physx support

Right on the fact that a wider bus = huge transistor count. Wrong on the fact that the hardware has ANY customization to complement an API. It is more likely that CUDA was designed to take advantage of the inherent programmability of nVidia's SPs, NOT the other way around.

ATI once tried to make a wide bus with an ingenious design, that ring bus... however, the die space that thing took up and the thermals were, well... we all know what happened there. This time around nVidia has tried their hand at a chip with a wide bus, and they've had more success with it than ATI did. However, they still made a large, expensive, and very hot chip.
You are confusing volume economics with engineering prowess. Yes ATI has higher yield right now because it is a simpler design. It is much much harder to design, simulate, and manufacture a functional 1.4 Billion transistor chip(GT200) than a 965 Million transistor chip(RV770). Just because ATI went "cheap" doesn't mean it is a better engineering effort. In fact the opposite is true.

I completely disagree here. First of all, efficiency of design in both performance and economic cost is encompassed in your so-called "engineering prowess." Second, you mean to tell me that a simpler design that performs the same as comparable parts, is cheaper to manufacture, and has a higher production yield is BAD engineering?

You don't know me. I am a electrical and software engineer.

No, we don't know you, and you may be either or both. However, what I can say is that your bias is pretty intense and you need to tone it down. Any engineer at any level (student, professional, doctoral, or otherwise) would appreciate simplicity in design; large, complex, and technically impressive designs are just that, a big show of technical prowess, as opposed to good engineering design.

By Lightnix on 4/7/2009 1:09:15 PM , Rating: 3
"The extra transistors on the Nvidia chip is used to support a much wider memory bus 512bit vs 256bit on the RV770, and custom CUDA and Physx support."

No, it's not. The R600 core had a 512-bit memory bus (2900 XT), it does have implications on transistor count, but not nearly as much as you imply. The main complications there come in terms of PCB traces and PCB real estate (you need more physical memory chips). Then factor in that the RV770 core supports: GDDR3, GDDR4, and GDDR5. GT200 only supports GDDR3 (i.e. its memory controller doesn't physically support GDDR5)

Again, CUDA, not really what is affecting transistor count - evidenced by the fact that RV770 also supports GPGPU programming through the ATi stream SDK, and soon OpenCL (as will Nvidia, and probably even S3 and Intel chips). There are bits here and there that help with GPGPU applications such as cache in the core clusters (or SIMD arrays on RV770), but not enough to make up an extra 500 million transistors (more or less). PhysX doesn't affect transistor count whatsoever as its acceleration on GeForce graphics cards is entirely CUDA based, there's nothing hardware specific in it - evidenced by the 8800 GTX, a card that came out in 2006, supports PhysX acceleration despite Ageia being acquired by Nvidia in 2008.

You could argue that Nvidia's GT200 architecture's transistor count was bloated by the fact that they put specialised double precision floating point units on their core. Of course that would be neglecting the fact that the Radeon 4800 series are capable of doing double precision ops with all their shaders.

Really, it's more that the DirectX 10 specification more or less requires programmable shaders than anything else. Well, I say more or less; DirectX 11 actually requires GPUs to conform with Microsoft's DX11 compute shader language (between this and OpenCL, both CUDA and Stream are essentially going to be worthless to mere mortal consumers).

Also, GDDR5 isn't as expensive as you might expect it to be. RV740 is coming out next month with GDDR5 memory (albeit on a 128-bit bus), and that's aimed at the $99 price point (to compete head to head with the 9800 GT). I think you'll also find some evidence that GDDR5 isn't so expensive by comparing the prices of 4870 512MB cards to 4870 1GB cards, the difference in price is pretty minimal at the moment.

I would also be very surprised if the GTX 200 series got any real boost from GDDR5, they have huge memory buses as you said, there's little gain to be had from adding more memory bandwidth as they're already swimming in the stuff. They're more limited by the horsepower of the core (though I'd hesitate to call either GT200 or RV790 'limited').

Also, if GDDR5 is cheating, it's of the best kind. It brings better performance with less complex PCB's, which ultimately is better for consumers. ATi already tried a 512-bit bus, it didn't go so well (2900 XT) - and this round they're all the better for not using one. The cost savings were passed onto the consumers at launch, and then even more cost savings were passed onto the consumers on the other side of the fence as Nvidia rapidly took an axe to their pricing structure. GDDR5 has done wonders for this generation as a whole.

But hey, y'know, I'm happy with my card made of dream parts which was obviously cobbled together by magical pixies in ATi's meadows. I mean it's totally ATi who are rumoured to be losing money on parts and not Nvidia with their gargantuan cores and huge memory buses, right?

Anyway, your move.

By ClownPuncher on 4/7/2009 1:35:10 PM , Rating: 2
The 512-bit bus width costs more than GDDR5; how is it "cheating" to be more efficient?

By ClownPuncher on 4/7/2009 1:33:18 PM , Rating: 2
Buying video cards is nothing like buying beef; you fail at basic engineering knowledge AND analogies. A better analogy would be that buying a video card is like buying a sports car: one manufacturer offers a cheaper, quicker car with a lower production cost, the other offers a car with nearly the same top speed that costs more to manufacture, thus costing the consumer more.

By Shadowmage on 4/7/2009 1:43:35 PM , Rating: 1
Well how's this for you:

NVIDIA has already acknowledged that AMD has a superior product this round in all areas: PCB design, as well as GPU design. They have already made drastic changes to their product lineups to account for the new RV770 threat.

Don't think I'm qualified to speak? Well consider this: I used to work for NVIDIA as an engineer.

By dubldwn on 4/7/2009 3:02:45 PM , Rating: 2
NVIDIA has already acknowledged that AMD has a superior product this round


By Shadowmage on 4/8/2009 4:02:08 AM , Rating: 2
Here's the publicly available information:

Remember also that all NVIDIA communication to the outside world is under a layer of marketing speak.

Obviously NV is trying its best to recover from the situation, but they're playing a game of catch-up right now and they know it (hence the stream of chips such as GTX260-216, GTX270, GTX285, GTX295 and rebrands).

By Reclaimer77 on 4/7/2009 3:45:49 PM , Rating: 2
Btw, CUDA and PhysX are for the time being "just for show" technologies that bring very little value to the average end-user.

So was DX10 last year.

I'm just playing devil's advocate here...

By Proteusza on 4/7/2009 10:57:57 AM , Rating: 2
Did you even read the conclusion on the review you linked to?

If you think I'm gonna pick for you when things are this tight, though, forget it. This one is too close to call.

Stop being such a fanboy and grow up a little.

By ClownPuncher on 4/7/2009 1:16:50 PM , Rating: 1
First, it's a 4890, not a 4970; second, it's RV790, not RV770.
Now, what's this about engineering prowess? You mean the GDDR5, the 1GHz overclocks, the lower temps, the improved power draw?

Nobody gives a piss about CUDA, and PhysX is only slightly useful in one whole title (Mirror's Edge). They are currently gimmicks to draw the uneducated buyer in. They may have value later on, but right now it's crap.

Even so, both the GTX 275 and the HD 4890 are nice cards, for similar prices.

By Parhel on 4/7/2009 4:50:42 PM , Rating: 3
You rather ineptly fail to mention the biggest advantage of Nvidia's 275 over the 4790, the heat and power use. Even with the more powerful Nvidia GPUs, they have lower heat and power draw than the ATI. That's a big advantage.

I think you've got that backwards.

The 275 is the hottest running card in recent memory. Scary hot. A few degrees away from boiling water hot. Bursting into flames killing everyone in the room hot. Well, you get the picture.

And power consumption is slightly in favor of Nvidia, but more or less a toss-up. 31 fewer watts at idle, 26 watts higher at load.

Performance, especially if you only look at the trustworthy sites, is largely a toss-up as well. No big wins for either card, just a few frames either way depending on the game.

The rest of my system is heavily overclocked and quiet, so the heat is a deal breaker for me. I wouldn't even consider the 275. I might buy a 4890, but for only $90 more the 285 is mighty tempting ($310 after rebate on Newegg).
