Say hello to the new Quadro FX 4600 and Quadro FX 5600

NVIDIA today released three new Quadro products – the Quadro FX 4600, Quadro FX 5600 and Quadro Plex VCS Model IV. The new Quadro FX 4600 and Quadro FX 5600 feature NVIDIA G80-derived graphics processors tweaked for CAD/CAM and visualization applications.

With the G80-derived graphics processor, the new Quadro FX 4600 and Quadro FX 5600 have 128 unified shader units. The new Quadros are also compatible with CUDA, NVIDIA's answer to AMD's Stream Computing technology, and offer DirectX 10 compliance and Shader Model 4.0 support.
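CUDA exposes those same unified shader units for general-purpose computation. As a rough illustration of the programming model (a hypothetical sketch, not taken from NVIDIA's announcement), a simple kernel such as SAXPY assigns one array element to each thread:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical SAXPY kernel: y = a*x + y, one element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]); // 2.0*1.0 + 2.0 = 4.0

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

Data-parallel workloads of this shape are what the 128 shader units run when a Quadro is used for stream computing rather than rendering.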

Differentiating the Quadro FX 4600 and Quadro FX 5600 is the amount of memory. The lower-end Quadro FX 4600 features 768MB of video memory – the same as NVIDIA's GeForce 8800 GTX. A whopping 1.5GB of video memory is available on the Quadro FX 5600, besting the 1GB found on ATI's FireGL V7350. Both Quadros have a 384-bit memory interface, though.

Although NVIDIA announced the Quadro Plex VCS Model IV at the same time as the Quadro FX 4600 and Quadro FX 5600, there are no details of the Quadro Plex VCS Model IV in the press release or on the Quadro Plex VCS product pages. However, expect the Quadro Plex VCS Model IV to feature the new Quadro FX 4600 or Quadro FX 5600 graphics processors.

NVIDIA prices the new Quadro FX 4600 at $1995 and the Quadro FX 5600 at $2999.


RE: Joke Question...
By theapparition on 3/5/2007 11:37:49 PM , Rating: 5
The Quadros are not clocked slower than the GeForce lines, as someone suggested (reference designs). The workstation cards from Nvidia differ in only three areas. First, they support hardware-based anti-aliased lines, which smooth the "jaggies". This has almost no benefit for games, since most objects are texture-wrapped. Second, the Quadros support hardware overlay clipping (for example, multiple windows overlapping). Once again, something that has almost no value for gaming. The third feature is driver support. It takes a lot of testing to certify the drivers to work with a specific workstation application, and that effort is passed on to businesses through higher prices. If Oblivion is locking up at a specific scene, you can complain all you want about your GeForce across the internet, and Nvidia will fix it "when they get to it". If a CAD model is crashing because of a hardware problem with your Quadro, Nvidia has a team of engineers working on a fix for the next day. You pay for that service.

The short version is that there is not much difference between the consumer and workstation cards. Games will run a "tiny" bit faster on a consumer card than on its workstation brethren in a similar series, and it costs a lot less. However, for large CAD assemblies, visualization, or rendering, the extra price of the Quadros can be justified, and performance can be significantly improved over the GeForce line.

ATI has basically the same model for their Radeon/FireGL series.

RE: Joke Question...
By Ecmaster76 on 3/6/2007 12:17:17 AM , Rating: 2
That, and the precision of all operations is higher. "Close enough" only works in games, not for pro stuff.

RE: Joke Question...
By Alpha4 on 3/6/2007 12:42:35 AM , Rating: 2
Thanks a million for the clarification. The explanation makes a lot of sense. In the case of GeForce cards being used for CAD modeling, though, do the wireframes remain aliased, or is anti-aliasing forced at the software level? Or does it depend on the app?

RE: Joke Question...
By theapparition on 3/6/2007 9:46:22 AM , Rating: 2
It depends on the app, but if the app implements AA lines, it would be software-based, hence the potentially HUGE reduction in performance. Some apps will be drawn (<-- pun intended) to their knees with a Radeon or GeForce. Others take only a small performance hit. You'd have to look at your application to see if there's a favorable cost/benefit ratio. While the FX 5600 may get the majority of "press", at $2999 it's hard to justify for anything but the highest-performance applications. Personally, I'm going with the FX 4600 line. The extra $1000 for another 768MB of memory gets me almost no performance increase for my applications.

Just an aside: back in the GeForce2/Quadro2 days, all you had to do was swap resistors on the board to turn a GeForce into a Quadro. In the GeForce4/Quadro4 days, the register was set in the chip package; however, a hacked driver enabled all the Quadro features. I had a few boards from PNY and compared two: one was a GeForce 4600, the other a Quadro 950. The part number on the PCBs was the same, and they had the same components on them, which shows you how related they were. I don't know of anything to turn a 5/6/7/8x-series card into a Quadro.

Most of my experience for workstations has been with Nvidia, because they simply had much better OpenGL support than ATI for my applications. But you have to look at what you use to make the right determination. It's hard to go wrong with either these days! Long live competition.

RE: Joke Question...
By leidegre on 3/6/2007 3:21:30 AM , Rating: 2
Here's an article at Wikipedia about the ATI FireGL cards.

I would guess that the differences between the Quadro/GeForce cards are roughly the same.

As for the cost, I think someone made a point about customer care, which would mean that these cards come with some serious customer support for the workstation applications, hence the asking price, as well as the obviously smaller demand.


Copyright 2016 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki