
AMD packs next-generation AVIVO high-definition video decoding features into its value and mainstream lineup

AMD’s next-generation value and mainstream products are set to bring DirectX 10 and high-definition video playback to the masses. Although AMD is late to the DirectX 10 game, the upcoming RV610 and RV630 feature second-generation unified shaders with Shader Model 4.0 support. AMD has remained quiet, though, about the number of unified shaders and the shader clock speeds of its next-generation value and mainstream products.

AMD is prepared to take on NVIDIA’s PureVideo HD with its next-generation AVIVO video processing. With the RV610 and RV630, AVIVO receives its first upgrade since its introduction alongside the Radeon X1K series. This time around, AMD is integrating its Universal Video Decoder, or UVD, for hardware decoding of H.264 and VC-1 high-definition video formats.

AMD’s UVD expands on the previous generation’s AVIVO implementation to include hardware bit stream processing and entropy decode functions. Hardware acceleration of frequency transform, pixel prediction and deblocking functions remain supported, as with the first generation AVIVO processing. AMD’s Advanced Video Processor, or AVP, has also made the cut for low power video processing.

Integrated HDMI with support for HDCP joins the next-generation AVIVO video processing for protected high-definition video playback. Unlike current HDMI implementations on PCIe graphics cards, RV610 and RV630 integrate audio functionality into the GPU. Instead of passing a PCM or Dolby Digital signal from onboard audio or a sound card, RV610 and RV630-based graphics cards can directly output audio – removing the need for a separate sound card.

RV610 and RV630 support PCIe 2.0 for increased bandwidth. Native support for CrossFire remains, as with the current ATI Radeon X1650 XT and X1950 Pro products. AMD will also debut RV610 and RV630 on a 65nm manufacturing process for lower power consumption. Expect RV610 products to consume around 25 to 35 watts; RV630 requires more power, at around 75 to 128 watts.

AMD currently has four RV610 reference designs based on two RV610 variants – Antelope FH, Antelope LP, Falcon FH and Falcon LP reference boards and RV610LE and RV610PRO GPUs. Antelope FH and Antelope LP are similar; however, Antelope LP is the low-profile variant. Both reference boards feature 128MB or 256MB of DDR2 video memory clocked at 400 MHz. Antelope boards employ the RV610LE, feature passive cooling and consume less than 25 watts of power.

AMD’s Falcon LP reference board is another low-profile model with 256MB of GDDR3 memory clocked at 700 MHz. Falcon LP takes advantage of a DMS-59 connector for dual video outputs while maintaining a low profile form factor. The Falcon LP reference board employs active cooling to cool the RV610LE or RV610PRO GPU.

AMD’s Antelope FH, Antelope LP and Falcon LP boards only support software CrossFire – all lack support for the CrossFire bridge connector. HKEPC confirmed this CrossFire configuration in a report last week.

The Falcon FH reference board is the performance variant and designed for the RV610PRO ASIC with 256MB of GDDR3 video memory. AMD estimates board power consumption at approximately 35 watts, though it is unknown if Falcon FH boards will feature active or passive cooling. Falcon FH is the only RV610 reference board to support AMD’s CrossFire bridge connector for hardware CrossFire support. Falcon FH also features VIVO capabilities.

RV630 has three reference board configurations – Kohinoor, Orloff and Sefadu. Kohinoor is the high-performance RV630 variant and features 256MB or 512MB of GDDR4 memory. It also features VIVO and dual dual-link DVI outputs. However, it consumes the most power out of the three RV630 reference boards, requiring 121 watts for 256MB models and 128 watts for 512MB models.

Orloff falls in the middle with 256MB of GDDR3 video memory. Orloff lacks the video input features of Kohinoor but supports HDMI output. AMD estimates Orloff to consume less than 93 watts of power. Kohinoor and Orloff support PCIe 2.0 and native CrossFire, though both require additional power via a PCIe power connector.

Sefadu falls at the bottom of the RV630 lineup and features 256MB or 512MB of DDR2 video memory. HDMI remains supported, as with Orloff. Power consumption is estimated at less than 75 watts, and Sefadu does not require the additional power supplied by a PCIe power connector. All RV630 boards feature 128-bit memory interfaces and occupy a single slot.



By tkSteveFOX on 3/13/2007 3:10:04 AM , Rating: 2
I was hoping for a 256-bit interface in the mainstream GPUs. This 128-bit bus is a pain in the ***. And so a mainstream card will have a 128-bit bus and a high-end one a 512-bit one. That's hardly fair at all. The performance difference will be huge if that's the case.

RE: Darn
By otispunkmeyer on 3/13/2007 3:57:25 AM , Rating: 4
I'm with you here.

NVIDIA has pushed the boat out to 384-bit on the high end, and AMD is likely to push that even further to 512-bit with its high end, yet the mid-range cards get no such increase. A 128-bit bus is just so old-school now it's a joke... we should have moved on, even if only to a 192-bit bus.

I think the reasoning behind it, though, is that it's cheaper to purchase super-fast 1GHz GDDR3 (2GHz effective) and 1GHz+ GDDR4 than it is to build a wider bus equipped with run-of-the-mill 700-800MHz GDDR3.

The 8600GTS will have about the same bandwidth as the 6800GT did when it came out, and the same as the 7800GT, so mid-range bandwidth is about that of the last two generations' high end.
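The bandwidth comparison above can be sanity-checked with quick arithmetic (the 8600GTS figures were rumored, not confirmed, at the time; peak bandwidth is bus width in bytes times effective data rate):

```python
def bandwidth_gb_s(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (effective transfer rate)."""
    return (bus_bits / 8) * effective_mhz / 1000

# Effective data rates are double the memory clock for DDR-type memory.
print(bandwidth_gb_s(128, 2000))  # 8600GTS: 128-bit, 1 GHz GDDR3 -> 32.0 GB/s
print(bandwidth_gb_s(256, 1000))  # 6800GT:  256-bit, 500 MHz GDDR3 -> 32.0 GB/s
print(bandwidth_gb_s(256, 1200))  # 7800GT:  256-bit, 600 MHz GDDR3 -> 38.4 GB/s
```

So a 128-bit card with 2GHz-effective memory does indeed land in the same bandwidth range as the previous two generations of 256-bit high-end cards.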

RE: Darn
By FITCamaro on 3/13/2007 6:20:19 AM , Rating: 2
You can get a 7600GT these days for $100. A 7900GT or X1950Pro for $200.

RE: Darn
By crimson117 on 3/13/2007 9:39:39 AM , Rating: 2
Actually you can get an x1950pro 256MB for as low as $150 today after rebates. Best bang for the buck on the market right now, imho.

RE: Darn
By TechLuster on 3/13/2007 2:54:28 PM , Rating: 2
I agree--that's a great deal. But I think you can do even better. I just picked up an EVGA 7900GS KO (500/1380) for $140 with rebates. Sure, at stock speed the x1950 pro is faster, but with the factory OC (and with Anandtech's review model reaching an ADDITIONAL 20% overclock), I think this may barely edge out the ATI card.

But in any case, the real question is how these two cards are going to compare to G84 and RV630. At $150, the later cards are only expected to be sporting 1.4GHz GDDR3 (same as the x1950pro and 7900GS) but with only a 128-bit connection. This fact is the reason I felt comfortable upgrading now, as opposed to waiting. (I'll be sticking with XP for a while, so DX10 doesn't matter to me.)

RE: Darn
By Russell on 3/13/2007 1:48:35 PM , Rating: 2
It's a card designed for HTPCs and such, not for gaming systems. 64-bit is fine for that application.

RE: Darn
By Flunk on 3/15/2007 7:10:47 AM , Rating: 2
Would you use a low-end card for gaming now? (x1300, cough). Then why would you think a new low-end card would be decent for gaming?

RE: Darn
By carage on 3/20/2007 10:12:04 AM , Rating: 2
Unfortunately I do...
I was duped into buying a slim Dell desktop.
When I first received it I was amazed by its size.
Now I only agonize for not choosing its larger cousin.
Looks like I am going to be stuck with the low-profile 1300 Pro for some time.
I have already used it to play Supreme Commander, NBA Live 07, and the C&C3 Demo.
NBA Live 07 actually looks decent. C&C3 doesn't look bad.
Though for Supreme Commander, I have to set to low details.
Last week I spent a considerable amount of time browsing Shanghai's computer malls, looking for another low profile video card to replace it.
Unfortunately, I could not find a single card higher than the one I am currently using. I know there should be low-profile 7600GS cards available – I even showed the website to the store clerks, but no luck. Probably just another paper-launch product.

RE: Darn
By saratoga on 3/13/2007 12:01:29 PM , Rating: 2
Pins are really, really expensive. Adding more pins means you need a bigger die (more transistors). If they can get away with a 128-bit bus, they're going to do it. I don't know what it's like for AMD's or NVIDIA's parts, but in general there's going to be a hard limit on how small you can make a die and still have enough room for 256 data pins. Which means that certain segments of the market are always going to be 128-bit.

RE: Darn
By TechLuster on 3/13/2007 3:21:20 PM , Rating: 2
I understand what you're saying, but consider the following:

Assuming G84 has 64 shaders as expected, and assuming it has 2/3 the ROPs of the GTX (as opposed to 1/3 if it uses a 128-bit memory interface as expected – more on this later), then it should end up around 300-400 million transistors. On an 80nm process, this will result in roughly the same die size as the 256-bit G71 (7900). Furthermore, the X1950 PRO has 330 million transistors on an 80nm process with a 256-bit interface.

So if they can sell us 7900's and X1950's with 256-bit connections for around $150 now (my $140 EVGA 7900GS KO arrived yesterday), why can't they sell us a ~$225 G84/RV630 with 256-bit memory interfaces? I think this is exactly what enthusiasts on a budget have been waiting for.

In the case of G84, a 128-bit interface with the GeForce 8 architecture implies these cards will only have 1/3 the ROPs of the 8800GTX. Hence, the first wave of midrange cards from NVIDIA will be crippled in two ways (both memory bandwidth and ROP power). Hence, I believe G84 is just a stopgap until they can roll out midrange cards on 65nm using GDDR4. This will allow them to increase both core and memory clocks, making up for the lack of ROP power and bus width. They're not giving us more hardware in the meantime so that they can maintain pin-compatibility with the 6600/7600GTs.

(Of course, we all know what they really should have given us: a 384MB 192-bit midrange card. How perfect would that have been?)

RE: Darn
By scrapsma54 on 3/13/2007 7:02:11 PM , Rating: 2
You're overlooking the fact that GDDR4 makes up for the narrow memory interface. In fact, you're overlooking how much wattage it consumes, how much it costs, and how much performance it packs per watt. As long as it shadows the performance of an 8800GTX within 5 frames at under 150 watts of consumption and has a much cheaper price (why shouldn't it, since it uses a 65nm manufacturing process), I've got my money on it. Also, 128-bit may seem low in comparison to the G80's, but realize that 128-bit has been around for six GPU generations and 256-bit was introduced much later, in the GeForce 6000 series.

RE: Darn
By InsaneScientist on 3/13/2007 9:44:05 PM , Rating: 2
If they were going to keep the memory speeds the same as the current gen stuff, I'd be right there complaining with you, however....

Consider for a second, what matters is not truly the bus width, what matters is the memory data rate. While increasing the bus width is certainly one of the fastest ways to increase data throughput and increase it quite a lot, it's also expensive. Going from 128-bit to 256-bit increases the cost of manufacturing by an incredible amount. (I don't remember the figures, but I think you're talking about a couple more layers on the PCB) The more economical solution, if possible, is to first ramp up the clock speed on the memory, and only then increase the bus width.

We know that GDDR3 is capable of going considerably faster than it's clocked on current midrange cards (IIRC they've gotten GDDR3 up to 800MHz, which would be a 1600MHz effective data rate - far beyond current midrange cards).
And then once we exhaust the potential GDDR3 has to offer, we have another more economical solution before we go to 256-bit: we simply swap out the GDDR3 chips for GDDR4, which we've already seen break 1GHz (2GHz effective data rate), and GDDR4 is still growing...

And look at the chart: one of the midrange cards does exactly that: it's equipped with GDDR4.

The more something costs them to make, the more it will cost us as consumers. It's better for us if they can increase the bandwidth without increasing the bus width, because otherwise it would cost us a lot more.
As long as they can increase the bandwidth, it doesn't matter how they do it.
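The point in the comment above – that bus width and memory clock are interchangeable knobs for bandwidth – can be illustrated with a quick calculation (the clock figures here are illustrative, not confirmed specs):

```python
def bandwidth_gb_s(bus_bits, effective_mhz):
    # Peak bandwidth = bus width in bytes x effective transfer rate
    return (bus_bits / 8) * effective_mhz / 1000

# A 128-bit bus with 1 GHz GDDR4 (2 GHz effective) matches
# a 256-bit bus with 500 MHz GDDR3 (1 GHz effective)...
narrow_fast = bandwidth_gb_s(128, 2000)  # 32.0 GB/s
wide_slow = bandwidth_gb_s(256, 1000)    # 32.0 GB/s
print(narrow_fast, wide_slow)
# ...while avoiding the extra PCB layers and pins a 256-bit bus requires.
```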

RE: Darn
By bargetee5 on 3/15/2007 10:13:19 PM , Rating: 2
GDDR4 halves the interface width needed to achieve the same performance a 512-bit interface delivers, since GDDR4 requires less wattage and carries double the bits per transmission. That equals an effective 92.34GB/s, much higher than the data rate of the GeForce 8800GTX, which ironically uses a 384-bit interface.

RE: Darn
By InsaneScientist on 3/15/2007 11:03:39 PM , Rating: 2
What are you talking about?

The only way you can halve the width of the bus and keep the performance the same is if the memory on the narrower bus is running at double the clock speed of the other.
While GDDR4 does allow for higher clock speeds, there is nothing inherent in the technology that doubles the data transferred per clock.

It's like the transition from DDR to DDR2 on the desktop. Assuming that they are both running in dual channel, DDR running at 400MHz will have the exact same bandwidth (6.4GB/s) as DDR2 running at 400MHz.
Now, the latencies on the DDR2 will be higher, but that's a different category.
Granted, DDR2 can hit 800MHz and therefore achieve that 6.4GB/s with half the bus width, but the tradeoff is that the clock speed must be doubled to do that.
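The DDR/DDR2 comparison above works out as follows (assuming standard 64-bit DIMMs, with "400MHz" meaning 400 megatransfers per second):

```python
def dram_bandwidth_gb_s(mt_per_s, bus_bits=64, channels=1):
    # megatransfers/s x bytes per transfer x channels, in GB/s
    return mt_per_s * (bus_bits / 8) * channels / 1000

ddr_400_dual = dram_bandwidth_gb_s(400, channels=2)   # DDR-400, dual channel: 6.4 GB/s
ddr2_400_dual = dram_bandwidth_gb_s(400, channels=2)  # DDR2-400, dual channel: identical 6.4 GB/s
ddr2_800_single = dram_bandwidth_gb_s(800, channels=1)  # DDR2-800, half the bus width: still 6.4 GB/s
print(ddr_400_dual, ddr2_400_dual, ddr2_800_single)
```

At equal transfer rates the two technologies deliver identical bandwidth; DDR2 only pulls ahead by clocking higher, exactly as the comment says.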

RE: Darn
By Zoomer on 3/17/2007 9:16:20 AM , Rating: 2
He's probably thinking GDDR4 = QDR. Sorry, I'd like that too, but this isn't it.
