
AMD packs next-generation AVIVO high-definition video decoding features into its value and mainstream lineup

AMD’s next-generation value and mainstream products are set to bring DirectX 10 and high-definition video playback to the masses. Although AMD is late to the DirectX 10 game, the upcoming RV610 and RV630 feature second-generation unified shaders with Shader Model 4.0 support. AMD has remained quiet, though, about the number of unified shaders and the shader clock speeds of its next-generation value and mainstream products.

AMD is prepared to take on NVIDIA’s PureVideo HD with its next-generation AVIVO video processing. AVIVO is receiving its first upgrade since its introduction with the Radeon X1k-series with the RV610 and RV630. This time around, AMD is integrating its Universal Video Decoder, or UVD, for hardware decoding of H.264 and VC-1 high-definition video formats.

AMD’s UVD expands on the previous generation’s AVIVO implementation to include hardware bit stream processing and entropy decode functions. Hardware acceleration of frequency transform, pixel prediction and deblocking functions remain supported, as with the first generation AVIVO processing. AMD’s Advanced Video Processor, or AVP, has also made the cut for low power video processing.

Integrated HDMI with support for HDCP joins the next-generation AVIVO video processing for protected high-definition video playback. Unlike current HDMI implementations on PCIe graphics cards, RV610 and RV630 integrate audio functionality into the GPU. Instead of passing a PCM or Dolby Digital signal from onboard audio or a sound card, RV610 and RV630-based graphics cards can output audio directly, removing the need for a separate sound card.

RV610 and RV630 support PCIe 2.0 for increased bandwidth. Native support for CrossFire remains, as with current ATI Radeon X1650 XT and X1950 Pro products. AMD will also debut RV610 and RV630 on a 65nm manufacturing process for lower power consumption. Expect RV610 products to consume around 25 to 35 watts; RV630 requires more power, at around 75 to 128 watts.
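For a sense of what "increased bandwidth" means here, the peak per-direction rates of the buses involved can be compared with simple arithmetic. These are the buses' published theoretical maxima, not figures from this article:

```python
# Peak theoretical bus bandwidth per direction, in MB/s.
AGP_8X = 266 * 8          # AGP 8x: 266 MB/s base rate x 8 = 2128 MB/s
PCIE_1_LANE = 250         # PCIe 1.x: 250 MB/s per lane per direction
PCIE_2_LANE = 500         # PCIe 2.0 doubles the per-lane rate

pcie1_x16 = PCIE_1_LANE * 16   # 4000 MB/s for a full x16 slot
pcie2_x16 = PCIE_2_LANE * 16   # 8000 MB/s

print(f"AGP 8x       : {AGP_8X} MB/s")
print(f"PCIe 1.x x16 : {pcie1_x16} MB/s")
print(f"PCIe 2.0 x16 : {pcie2_x16} MB/s")
```

So a PCIe 2.0 x16 slot offers double the headroom of first-generation PCIe and nearly four times that of AGP 8x, on paper at least.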

AMD currently has four RV610 reference designs based on two RV610 variants – Antelope FH, Antelope LP, Falcon FH and Falcon LP reference boards and RV610LE and RV610PRO GPUs. Antelope FH and Antelope LP are similar; however, Antelope LP is the low-profile variant. Both reference boards feature 128MB or 256MB of DDR2 video memory clocked at 400 MHz. Antelope boards employ the RV610LE, feature passive cooling and consume less than 25 watts of power.

AMD’s Falcon LP reference board is another low-profile model with 256MB of GDDR3 memory clocked at 700 MHz. Falcon LP takes advantage of a DMS-59 connector for dual video outputs while maintaining a low profile form factor. The Falcon LP reference board employs active cooling to cool the RV610LE or RV610PRO GPU.

AMD’s Antelope FH, Antelope LP and Falcon LP only support software CrossFire – all lack support for the CrossFire bridge connector. HKEPC confirmed this CrossFire configuration in a report last week.

The Falcon FH reference board is the performance variant and designed for the RV610PRO ASIC with 256MB of GDDR3 video memory. AMD estimates board power consumption at approximately 35 watts, though it is unknown if Falcon FH boards will feature active or passive cooling. Falcon FH is the only RV610 reference board to support AMD’s CrossFire bridge connector for hardware CrossFire support. Falcon FH also features VIVO capabilities.

RV630 has three reference board configurations – Kohinoor, Orloff and Sefadu. Kohinoor is the high-performance RV630 variant and features 256MB or 512MB of GDDR4 memory. It also features VIVO and dual dual-link DVI outputs. However, it consumes the most power of the three RV630 reference boards, requiring 121 watts for 256MB models and 128 watts for 512MB models.

Orloff falls in the middle with 256MB of GDDR3 video memory. Orloff lacks the video input features of Kohinoor but supports HDMI output. AMD estimates Orloff will consume less than 93 watts of power. Kohinoor and Orloff support PCIe 2.0 and native CrossFire, though both require additional power via a PCIe power connector.

Sefadu sits at the bottom of the RV630 lineup and features 256MB or 512MB of DDR2 video memory. As with Orloff, HDMI remains supported. Power consumption is estimated at less than 75 watts, and Sefadu does not require the additional power supplied by a PCIe power connector. All RV630 boards feature 128-bit memory interfaces and occupy a single slot.
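The memory clocks and bus widths quoted for these boards translate into peak memory bandwidth via the usual formula: effective transfers per second times bytes per transfer. A quick sketch follows; note the article gives a 128-bit interface only for the RV630 boards, so applying it to the RV610-based Falcon LP and Antelope numbers below is purely an illustrative assumption, as is treating the quoted clocks as base clocks for double-data-rate memory:

```python
def mem_bandwidth_gbs(base_clock_mhz: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for double-data-rate memory:
    (base clock x 2 transfers per clock) x (bus width in bytes)."""
    effective_mtps = base_clock_mhz * 2            # mega-transfers per second
    return effective_mtps * (bus_bits / 8) / 1000  # MB/s -> GB/s

# Falcon LP: GDDR3 at 700 MHz (128-bit bus assumed for illustration)
print(mem_bandwidth_gbs(700, 128))   # 22.4 GB/s
# Antelope: DDR2 at 400 MHz (128-bit bus assumed for illustration)
print(mem_bandwidth_gbs(400, 128))   # 12.8 GB/s
```

The roughly 1.75x gap between the two figures is one reason the GDDR3 boards sit above the DDR2 boards in the lineup.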


RE: Wait... PCIe 2.0?!
By sdsdv10 on 3/13/2007 1:43:15 PM , Rating: 2
That is good to know. thanks! I was afraid we were going to have another AGP to PCIE debacle.

Please explain how you feel the AGP to PCIe change was a debacle. I would like to know your thoughts on a better way to replace one hardware standard with another. This process is always difficult and I don't see any simple way to make it easier. At some point, you just have to make the switch. Same thing with DDR to DDR2. In most cases the benefits of the new technology outweigh the cost and difficulties associated with the changeover.

RE: Wait... PCIe 2.0?!
By Polynikes on 3/13/07, Rating: 0
RE: Wait... PCIe 2.0?!
By othercents on 3/13/2007 3:12:34 PM , Rating: 2
I purchased my first AGP system in '98 and upgraded the video card and/or motherboard almost every year. This has given me almost 8 years' worth of use along with about 6 different video cards. PCI Express was initially created in 2004, but we didn't see mass production of the standard until 2005. I didn't purchase a PCI Express machine until mid-2006.

Based on the video tests that I have seen, video cards haven't seen much of a performance boost with PCI Express compared to their AGP counterparts. However, the bus speed of PCI Express far exceeds AGP and will be more apparent with newer video cards.

If you compare AGP to the standard PCI bus you will notice a major difference in speed, especially with the last 8x versions of the bus. This is one of the reasons why I could never say that AGP was a debacle. It had a very good purpose when it was created, and then another technology became a better choice. As technology moves forward you will find other technology that does the same (e.g., DVI vs. HDMI).


RE: Wait... PCIe 2.0?!
By D4rr3n on 3/13/2007 4:40:24 PM , Rating: 2
Based on the video tests that I have seen, video cards haven't seen much of a performance boost with PCI Express compared to their AGP counterparts. However, the bus speed of PCI Express far exceeds AGP and will be more apparent with newer video cards.

If you compare AGP to your standard PCI bus you will notice a major difference in speed especially with the last 8x versions of the bus.

Yes, exactly right, the only performance advantage PCIe brings is the ability to connect multiple video cards together. PCIe at any speed rating will give you ZERO performance increase over AGP 8x versions of the same card. Not only that, 4x and 8x AGP do not perform any differently either (so two cards with the same exact specs, one on PCIe x16 and the other on 4x AGP, should perform equally well). This is simply due to how the system works and the transfer speed of the AGP or PCIe connection not being a bottleneck. This will continue to be true as long as the memory on the graphics card and your system memory is adequate. The transfer speed only comes into play when all the information loaded into your video card's memory (mostly large textures and other stuff at the load of a level) has completely filled it and it needs to request a large amount of information, or a bunch of small transfers back and forth repeatedly, MID GAME. If that doesn't happen you will never, EVER notice a difference. With the memory on some of today's cards and future cards that should never be experienced.
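The argument above, that bus speed only matters once data spills out of video memory and must be streamed mid-game, can be illustrated with rough transfer-time arithmetic. The rates below are theoretical peaks (2128 MB/s for AGP 8x, 4000 MB/s for PCIe 1.x x16); real sustained rates are lower:

```python
def transfer_ms(megabytes: float, bus_mb_per_s: float) -> float:
    """Milliseconds needed to move `megabytes` at a peak rate of `bus_mb_per_s` MB/s."""
    return megabytes / bus_mb_per_s * 1000

# Streaming a 64 MB chunk of textures that spilled out of video memory:
agp8x_ms = transfer_ms(64, 2128)   # ~30 ms, roughly two whole frames at 60 fps
pcie_ms = transfer_ms(64, 4000)    # ~16 ms, about one frame
```

Either bus produces a visible hitch, which is why games try to keep the working set entirely in video memory; and when they succeed, as the commenter notes, the interconnect speed never shows up in benchmarks.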

RE: Wait... PCIe 2.0?!
By D4rr3n on 3/13/2007 4:11:02 PM , Rating: 2
Well this is my take on it (and I'm not giving you a hard time or ragging on you or anything like that): if you want to stay on the cutting edge you are going to be replacing lots of equipment quite often. There is simply nothing you can do other than accept your decision to either stay on the cutting edge or live with what you have. I'm not sure if you're saying you want it to last 1.5 or 3 years? If you expect 3, expect to be disappointed as you watch technology pass you by. You may get 1.5 and you may not, it really depends on your timing and luck.

Just some stuff off the top of my head as examples of recent changes: DDR to DDR2, and to DDR3 soon, was very rapid; AMD changed sockets fast enough to make your head spin, I believe at least 6 sockets in the past 3.5 years with another 2 by the middle of next year; the move from ATA to SATA and now SATA 3Gb/s (with Intel going SATA-only on some boards), with eSATA and SAS trying to creep in as well; the rapidly increasing power requirements and connector types that may have required a new PSU; and of course the AGP to PCIe switch and many more things I'm sure I'm forgetting. Unfortunately it is just the nature of the beast.

I certainly understand where you are coming from though. My biggest complaint is the cost of AGP cards, ESPECIALLY at retail. I understand part of why PCIe was made the standard quite quickly was that it also cut down on manufacturing costs, but when has that EVER affected the pricing of premium hardware? Generally better performance = higher price, worse performance = lower price. I have 2 older systems networked here and I don't use them much, but I was looking to get a new AGP card for one of them and the pricing is just ass-backwards.

But anyway, if you managed to get 3 years out of your system you should be more than satisfied; IMHO that is a long time. I probably average a new build once a year, though it varies with which new tech is needed or available. I've gone as low as 6 months between full builds, and if I was lucky I got 1.5 years out of a system. It's not as expensive as you think: I can easily turn around a PC to a coworker or friend who isn't technically inclined and will pay a premium. And of course there is always eBay, where people will buy anything. Just sell the equipment you plan to replace before any serious depreciation occurs and you shouldn't have many complaints.

RE: Wait... PCIe 2.0?!
By bob4432 on 3/14/2007 11:09:14 PM , Rating: 2
I wouldn't say eSATA and SAS are trying to creep in; they are for different uses. eSATA, obviously external SATA, is good for external HDDs but not much better than 1394a/b or USB 2.0 unless you use the hell out of the drive, in which case eSATA = SATA.

SAS = servers. Using 2.5" 10K SAS HDDs gives datacenters a very high amount of storage in GB per area used (not sure how they measure it; in rack units, or whatever), so that is where that is coming in. But at least they are making SAS controllers able to control SATA drives :)



Copyright 2016 DailyTech LLC.