

Compare this image with the one above and note the components missing here that are present there. The two images will be combined to form the final frame. The division is based on how fast each GPU is, and allows for near-linear scaling.  (Source: PC Perspective)

On the left the full screen is displayed; on the right, one GPU's workload. Note the scene on the right is missing the floor, which is being rendered on the other GPU.  (Source: PC Perspective)
New chips open the door to gaming rigs with a mix of ATI and NVIDIA cards

NVIDIA is busily plugging away with its 200 series and marketing various SLI solutions, from a pair of GeForce 8 or 9 series cards to its top end -- a pair of GTX 280s.  AMD is similarly pushing its 4850/4870 CrossFire solutions along with CrossFire for its new dual-GPU 4870 X2 cards.  The key point is that AMD/ATI cards are not compatible with NVIDIA cards -- CrossFire and SLI are two different technologies.  Furthermore, most motherboards support either SLI or CrossFire -- few do both.

Enter Lucid, also known as LucidLogix, a fabless semiconductor designer (meaning it outsources its chip production to other companies' fabs, such as TSMC).  Lucid is far from a known name in the graphics industry, though that may soon change.  With the help of Intel Capital backing and over 50 patents, it has developed a technology that seems poised to rock the graphics industry.

The groundbreaking technology is titled the HYDRA Engine.  The accomplishment of the engine is nothing short of unbelievable to those who follow the graphics industry.  It uses hardware and software to allow virtually any AMD/ATI and NVIDIA GPU to work together and share workloads with the CPU, scaling programs almost linearly.  You could probably call the HYDRA Engine CrossFire-SLI, though you might run into a spot of legal trouble in trying to do so.

Lucid isn't just redeploying existing technologies -- it's improving on them.  AMD/ATI and NVIDIA use two techniques for their multi-GPU solutions.  One is split frame rendering, in which each card renders part of every frame.  The drawback is that it requires synchronizing all texture and geometry data on both GPUs, so the memory bandwidth limitations of a single card remain.  The other common technique is alternate frame rendering, in which each GPU renders complete frames in turn.  This approach also has a significant downside: it introduces latency as output alternates between the GPUs.
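A toy sketch (not vendor code; the function names and the scanline/frame model are purely illustrative) of how these two approaches divide work between GPUs:

```python
# Illustrative sketch of split frame rendering (SFR) vs. alternate frame
# rendering (AFR) for two GPUs. Not actual driver code.

def split_frame(frame_rows, num_gpus=2):
    """SFR: each GPU renders a horizontal slice of the same frame.
    Both GPUs still need the full scene's textures and geometry."""
    slice_size = len(frame_rows) // num_gpus
    return {gpu: frame_rows[gpu * slice_size:(gpu + 1) * slice_size]
            for gpu in range(num_gpus)}

def alternate_frames(frame_ids, num_gpus=2):
    """AFR: whole frames alternate between GPUs, which introduces
    latency since consecutive frames come from different cards."""
    return {gpu: [f for f in frame_ids if f % num_gpus == gpu]
            for gpu in range(num_gpus)}

sfr = split_frame(list(range(1080)))    # 1080 scanlines of one frame
afr = alternate_frames(list(range(8)))  # 8 consecutive frames
```

Either way the division is rigid, which is exactly the limitation the HYDRA approach claims to avoid.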

The HYDRA Engine offers a hybrid solution.  The heart of the engine is a silicon chip, which splits up the graphics workload in hardware.  Lucid also has a unique driver that sits between DirectX and the GPU vendors' drivers, after the division of workload.  Information from games first gets passed to HYDRA's software, which splits it into tasks.  The set of tasks is then sent to Lucid's hardware, which divides the work among up to four GPUs.  A typical task might be rendering a specific part of a scene, adding lighting, or another common graphical chore.  After the GPUs finish their respective parts, the results are sent to one of the GPUs to be coalesced into the final output.  The whole process is very fast.
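The flow described above -- intercept, split into tasks, dispatch to up to four GPUs, composite on one -- can be sketched in miniature. Everything here (the task names, the round-robin assignment) is a hypothetical simplification, not Lucid's actual logic:

```python
# Hypothetical sketch of the HYDRA flow: intercepted work is grouped into
# tasks, dispatched across up to 4 GPUs, and composited on one GPU.

from collections import defaultdict

MAX_GPUS = 4

def dispatch(tasks, num_gpus):
    """Assign each task (e.g. 'floor', 'lighting') to a GPU, round-robin."""
    num_gpus = min(num_gpus, MAX_GPUS)
    assignments = defaultdict(list)
    for i, task in enumerate(tasks):
        assignments[i % num_gpus].append(task)
    return assignments

def composite(assignments):
    """One GPU coalesces all partial renders into the final output."""
    return [task for gpu in sorted(assignments) for task in assignments[gpu]]

tasks = ["floor", "walls", "character", "lighting"]
work = dispatch(tasks, num_gpus=2)
frame = composite(work)
```

The interesting engineering is of course in deciding which tasks can be separated safely -- dependent work (a shadow and its caster, say) has to land on the same GPU.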

According to Lucid, the system has "virtually no CPU overhead and no latency" when compared to single-card solutions.  The approach is very different from AMD/ATI's and NVIDIA's work in that it actually intercepts DirectX calls before sending them to the GPU and intelligently splits scenes up, as opposed to "brute force" rendering them.

While the engine is capable of cruder split frame rendering, which it performs well, it really shines when it splits the scene up with this custom logic.  Individual components in a scene -- say part of the floor and windows -- are sent to one GPU while other parts -- say your character and the walls -- are sent to the other.  With virtually no additional overhead the entire scene is rendered nearly twice as fast.  Where SLI/CrossFire offer only 50-70 percent scaling at best, Lucid claims its solution is near 100 percent -- linear scaling.
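A quick back-of-the-envelope check on those scaling figures; the 60 fps baseline is an arbitrary example:

```python
# Effective frame rate for N GPUs at a given scaling efficiency:
# each extra GPU contributes only `efficiency` of its full power.

def effective_fps(single_gpu_fps, num_gpus, efficiency):
    """efficiency: 0.5-0.7 for typical SLI/CrossFire per the article,
    ~1.0 for Lucid's claimed near-linear scaling."""
    return single_gpu_fps * (1 + (num_gpus - 1) * efficiency)

base = 60.0
sli_low  = effective_fps(base, 2, 0.5)  # 50% scaling: 90 fps
sli_high = effective_fps(base, 2, 0.7)  # 70% scaling: ~102 fps
hydra    = effective_fps(base, 2, 1.0)  # claimed linear: 120 fps
```

The gap between 70 and 100 percent scaling is the difference between a second card adding two-thirds of its price in performance and all of it.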

One of the strongest points is that the engine is not reliant on specific graphics drivers.  Thus graphics cards and drivers can come and go, but the engine will still work. 

The hardware/software may find its way into graphics setups in two ways.  First, it could be added to motherboards to enable improved multi-GPU support for both AMD/ATI and NVIDIA cards.  Second, it could be deployed by card manufacturers on dual-GPU boards, such as the 4870 X2, in place of the standard PCIe bridge.

Unfortunately there is one key catch with current technology.  Operating systems like Windows Vista support only one graphics driver running at a time.  So until Microsoft allows AMD/ATI and NVIDIA drivers to coexist, a mixed-vendor system remains impossible and out of Lucid's hands, short of hacking the OS.

Even if this capability is never supported, though, if Lucid can merely live up to its claims, it will be a groundbreaking development in the graphics industry.  Not only will it allow for repurposing of old graphics cards, but it will render NVIDIA's SLI chipsets and ATI's CrossFire connectors essentially obsolete.  Further, if offered at a reasonable price, it would be hard for motherboard makers not to put one of these chips on their boards to let users choose between the two graphics giants.

CrossFire-SLI may still be a bit off in the future, but Lucid is dreaming big, and NVIDIA and AMD/ATI had better watch out.

Comments

if it's too good to be true, it's probably not true
By Dribble on 8/20/2008 11:21:51 AM , Rating: 5
While I am happy to be proved wrong, history tells us that unknown companies promising magical solutions generally don't deliver.

By Brandon Hill on 8/20/2008 11:22:52 AM , Rating: 5
You mean to tell me you don't have a Segway?

By jabber on 8/20/2008 11:31:49 AM , Rating: 5
Hopefully we'll be looking back in 5 years and laughing at how utterly retarded SLI etc. was.

"Yes folks used to buy two $500 graphics cards even though they knew that chances are they would only get a 20% boost over using just one card!"

Mmmm intelligent render load balancing (my description).

IRLB...sounds catchy.

By jabber on 8/20/2008 12:30:51 PM , Rating: 5
"No one is "retarded" for paying extra for more performance. "

Hmmmmmm careful now. I think you are on thin ice there.

Doesn't your first statement contradict your second statement above?

By Diesel Donkey on 8/20/2008 1:17:20 PM , Rating: 5
Right, because the only thing a Ferrari has on a Civic is higher top end speed...I think something is missing here.

By feraltoad on 8/21/2008 5:21:39 AM , Rating: 3
You're right. The Civic has cup-holders. The nice kind that can hold a Big Gulp or a Route44 Cherry Limeade. Plus, if you buy the kind of cup holder that hangs on the door, when the Ferrari's butterfly door opens the drink will fall out on your head. Boy, that would be embarrassing!

By sviola on 8/21/2008 9:22:58 AM , Rating: 3
The word you are missing is "p****-magnet" :)

(read it with Cazakistan accent, please :D)

By mmntech on 8/20/2008 12:23:25 PM , Rating: 3
The whole idea of multi card systems became retarded when dual core CPUs were introduced. I would have thought we'd see GPUs with multi-cores on one die by now. I guess there's more financial incentive to sell the same item twice, rather than seeking more practical solutions. The X2 cards (two GPUs on one PCB) are starting to change that but they're doing nothing that 3DFX didn't already try 10 years ago.

This certainly is an interesting concept though it is not that revolutionary. I've heard of modders being able to coax Crossfire to run on SLI boards and vice versa. There was even a rumour that allowed SLI to be run on ASRock's PCIe/AGP boards. The real sticking point is the card mixing. I doubt AMD or nVidia is going to sit on their laurels when that piece of tech comes out since it takes away the multi-GPU monopoly both companies have.

By Silver2k7 on 8/20/2008 1:28:42 PM , Rating: 2
A hydra chip and 3-4 GPU chips on a single PCB is a nice dream; who will be the first to realize it ;)

By djc208 on 8/20/2008 2:27:36 PM , Rating: 5
Probably no one since you'd need a small nuclear reactor to power the monster. Then there's the cooling tower needed for heat dissipation (from the card, the reactor needs it own).

By StevoLincolnite on 8/20/2008 2:03:07 PM , Rating: 2
they're doing nothing that 3DFX didn't already try 10 years ago.

Actually ATI was the first Manufacturer to release a Dual GPU Single PCB Design called the "Rage Fury Maxx" Which was 2 Rage Fury Cards on the one PCB, released in 1999. (Holy Cow! 9 years ago! I feel Old, I had one of those cards when they first came out...)

Actually I think the "GPU" naming scheme only applied to cards which supported TnL? I can remember when the GeForce 256 was released how nVidia was claiming it as a "GPU" and described the difference against other non-TnL-capable cards, and ATI then called its cards "VPUs".

While 3dfx muttered that TnL was useless and that anti-aliasing, as well as its T-Buffer or F-Buffer, was the way to go.

While Matrox held the 2D image quality crown, and S3 was having issues with its Savage cards having TnL faults.

The Voodoo 5 which came much later had 2 VSA100 chips, and the Voodoo 5 6000 was going to have 4 chips and a separate TnL chip, however it never reached the market.

By gforcefan on 8/20/2008 7:10:26 PM , Rating: 2
Actually, you forgot about the obsidian x24. That was two voodoo2's on one board. And that came out a long time before the ATI.

By Oscarine on 8/20/2008 8:29:03 PM , Rating: 1
Before the X-24 there was SLI Voodoo 1....

Scaleable Realtime 3D Accelerator for PCs Provides Industry Leading Performance

SANTA CLARA, CA - OCTOBER 21, 1997 - Quantum3D, Inc. announced today that it has begun shipping its new scaleable realtime 3D graphics accelerator, the Obsidian 100SB. A replacement for the company's dual-board Obsidian 100 series of products, the 100SB delivers equivalent performance in a single PCI slot along with a host of new integration features— at a much lower price. The 100SB joins the Obsidian family of products for visual simulation, training, coin-op, location based entertainment and game enthusiast applications. The Quantum3D Obsidian 100SB starts as low as $795 MSRP.

Based on a scan-line-interleaved, 4- or 6-chip implementation of 3Dfx Interactive's award-winning Voodoo Graphics chipset, the Obsidian 100SB employs the chipset's patent-pending “texture streaming” architecture to produce up to 2.4 Gigabytes per second of dedicated graphics memory bandwidth. This high level of low-latency bandwidth enables the 100SB to deliver filtered texture fill rate performance of 90 Megapixels per second, with trilinear or bilinear texture filtering with per pixel LOD mip mapping, z-buffering, alpha blending, perspective correction and per pixel fog enabled-- which significantly exceeds the performance delivered by all other PC graphics accelerators, as well as most graphics workstations and image generators, irrespective of cost.

On Gemini Technology's OpenGVS Real World Benchmark Release 2.0, the Obsidian 100SB-4440, when coupled with a 300 MHz Intel Pentium II PC, delivers an average frame rate of 97.1 frames per second--approximately four times the performance delivered by the graphics boards based on the new Evans & Sutherland/Mitsubishi/VSIS 3Dpro/2mp chipset (score of 26.2), five times the performance of the Silicon Graphics O2 (score of 15.4), and almost twice the performance of the Real3D Pro 1000 model 1400 (score of 59.9). Additional information on the OpenGVS Real World Benchmark Release 2.0 may be found at Gemini Technology's website.

"Falcon Northwest integrates only the fastest, most reliable and best supported hardware in the industry into our gaming PCs. Quantum 3D's new Obsidian 100SB has surpassed our standards on all counts, giving us twice the 3D performance of any other PC on the market," said Kelt Reeves, president of Falcon Northwest, makers of the Falcon Mach V Gaming PC. "We're pleased to offer it to our customers."

As an exclusive mode 3D accelerator, the 100SB's advanced pass-through design operates transparently with popular 2D/VGA Windows accelerators. In addition, the 100SB offers the option of adding an integrated 2D/VGA capability by means of “MGV”— a 2MB Windows accelerator daughter card which eliminates the need for a second 2D/VGA graphics board, which in most systems frees up an additional PCI slot. Another new integration feature on the 100SB is SyncLock-- which enables developers and integrators to synchronize video refresh across up to 13 displays for wide field of view 3D applications and totally immersive environments. This new feature greatly reduces the occurrence of “beat frequencies” and other annoying artifacts that are distracting in multi-channel visual simulation, training, and entertainment applications. The Obsidian 100SB also features simultaneous RGB and NTSC/PAL “TV-out” output capability with support for both S-Video and composite formats. The new accelerator is optimized for running applications under leading primary 3D APIs, including Microsoft Direct3D, OpenGL and 3Dfx Interactive's Glide. The 100SB solution also has a unique on-board authentication feature designed to enhance the protection of coin-op games, visualization applications and other proprietary software from software pirates.

"With the Obsidian 100SB, I get a combination of SLI performance and competitive 2D with the MGV daughter card--all in one slot as opposed to three," says W. Garth Smith of MetaVR. "Quantum3D's new 100SB graphic accelerator together with MetaVR's run-time format enables our VRSG product to be a viable alternative to proprietary visual simulation image generators. With the power of P-II based systems and the 100SB, many of our customers have eliminated the need for expensive SGI systems for deployment applications."

By goku on 8/20/2008 3:13:10 PM , Rating: 2
Dual core GPUs aren't needed like they are with CPUs. GPUs are inherently parallel, and making them more parallel is in a sense making them "multicore". Multicore processors are just multiple processors on one die, not multiple (die/dice/dies?) with a processor per die hooked up with wires. The reason for this is that you can't add complexity to a CPU without having software specifically written for it, but on a GPU all you need are new drivers.

Since you can't add complexity to a CPU without major software changes, OS changes, patches or whatever, you're limited in how to boost performance. Smaller processes allow for higher clock speeds, which is what you've seen, but because 100 million transistors makes for a very small die on a 32nm process, you end up with high yields and you have to produce a lot MORE processors in order to break even on costs. So what do you do? Add more transistors. Well, since you can't keep adding instructions like SSE, MMX etc. -- it takes time for software to take advantage of them and they're limited in how much they can improve performance -- you've got to find another way to use up silicon space.

That is where multicores come in. So instead of one processor, you have four, on one die, connected together in a very crude way (Intel) and now you've just increased the amount of silicon you're using while theoretically increasing performance 400%, a win for you (since you need to use up the silicon) and win for the consumer who supposedly gets a massive performance increase (never happens).

For GPUs, you don't need to worry about multiple GPU cores. Why? Because GPUs are proprietary, there generally isn't any special software being written for them, no OS interaction with them (unlike a CPU); the only thing that bridges the OS and the GPU is a simple driver, a driver that can be rewritten at any time. So because GPUs are inherently parallel -- you can add "X number of stream processors" or whatever concoction they think of next to boost performance (aside from a simple clock speed increase) -- there is no need for "Dual Core GPUs".

I'm not saying a Dual Core GPU is not possible, but what I am saying is that efforts towards a dual core GPU would be better spent making a better, more efficient GPU architecture, something that Intel and AMD can't, won't or don't need to do. (Cause they've already done it?)

Also one important thing you should remember is that GPUs run FAR hotter than CPUs, are FAR more complex (tons more wires hooking up into them), and draw TONS more power than a CPU. So imagine doubling the amount of heat being produced in the exact same space and you end up with a disaster. But because CPUs have gotten cooler now, multicore works a lot better than it did for the P4 series, as those ran hot even on the smaller processes.

By Garreye on 8/20/2008 7:44:17 PM , Rating: 2
I think another big problem with multicore GPUs is yields. GPUs are already huge chips and it's hard to get good fab yields on them as it is and doubling or tripling the size probably wouldn't help matters

By someguy123 on 8/20/2008 11:14:37 PM , Rating: 2
this is one of the reasons nvidia is hurting from their current GT line after ati sprung up with their HD48 pricing. the GT line is essentially a multigpu 9800 from what I understand and has yield problems. it looks like ati really cost them a pretty penny by forcing them into a pricing war.

By theapparition on 8/20/2008 10:44:16 PM , Rating: 3
Hopefully we'll be looking back in 5 years and laughing at how utterly retarded SLI etc. was.

Thanks for the broad uninformed statement.

SLI (or CrossFire) has been demonstrated to significantly improve design time for some high-end workstation CAD and simulation programs. Just because you don't see a need for SLI in games doesn't mean that there's not an application out there that can benefit. If you ever had to rotate a 25,000 part model that takes 1 minute to update on a non-SLI system, or 5 sec on an SLI-based one......maybe you'd understand.

By someguy123 on 8/20/2008 11:34:05 PM , Rating: 2
yeah that's true. it should scale much better in those types of apps, since you can break up the image into separate threads, compared to games where you need to constantly render your point of view and breaking up the single images into threads would probably cause an array of graphical problems.

when it comes to productivity a second saved on each action can add up tremendously.

By Targon on 8/21/2008 8:06:16 AM , Rating: 2
The overall idea of having two or more video cards isn't the issue, it is the connection method between components which causes the limitation we see. Think about it, if you can get two $100 video cards and by using them together get near the performance of a $400 video card, that is a positive. And if you can connect two $400 cards and get performance that can't be touched by any single card, that may also be worth it for some people or applications.

The problem is in the bus design, and how the cards are connected together, as well as the CPU power needed to run the cards at their full potential. The need for a good inter-connect method between the cards is also critical, and where both AMD and NVIDIA have fallen short. AMD has HyperTransport, so why not use that as the method to connect multiple GPUs? With the purchase of ATI, AMD could also push the dead HTX slot to go onto all motherboards and use it to provide a much better connection to a video card than any shared bus design could ever hope to.

All of this would require applications to improve their code, since no matter what, in many cases applications are CPU limited, not GPU limited.

By Sulphademus on 8/20/2008 11:32:07 AM , Rating: 2
By dayanth on 8/20/2008 4:35:47 PM , Rating: 2
What's funny is that the Phantom (company) is back with actual products.

Even funnier, is that they don't even bother to host their own blog site. They're using Wordpress to do so.

And upon further inspection, of their website.. the Phantom lapboard images are ones ripped from MaximumPC... Curious again that they couldn't take their own photos?

By Mojo the Monkey on 8/21/2008 5:00:00 PM , Rating: 2
hahahaha. the "who we are" page has the CEO being self-described as a "rainmaker".

what a joke of a company.

By phantomlives on 8/27/2008 4:08:32 AM , Rating: 2
Actually, in using wordpress, instead of installing the blog on our own server we get much more exposure to those in the wordpress community. Using wordpress saved time and was more efficient in getting the word out quicker. We are back and shipping lapboards. Our blog will give you the latest news on current progress, NOT speculation.

By phantomlives on 8/27/2008 4:14:28 AM , Rating: 2
If you review the images from our website - You will see that the reason they are taken from Maximum PC and Gizmodo is because the board was reviewed by those companies. I plan to do a full 3D rendering of our new lapboard when it comes out in a few weeks. Just wait and see, we are a very new Phantom Entertainment - with new people and hungry to succeed where others have fallen short the past 4 years. Stay Tuned.

By phantomlives on 8/27/2008 4:21:41 AM , Rating: 2
Speaking to your deposit on the Phantom Console.. do you mean the Phantom Lapboard? Whatever the case is - shoot an email to with whatever information you have regarding your pre-purchase order and I'll try and get to the bottom of it - ya, 5 years may have passed - but we are a different species these days -

By itlnstln on 8/20/2008 11:57:12 AM , Rating: 4
Sheeeiitt, we'll just have to wait and see what Bit Boys Oy! has waiting for us up their sleeve.

By MrBlastman on 8/20/2008 12:12:25 PM , Rating: 5
I can only hope that it is true. Even if it only works on a 100% Nvidia or 100% ATI graphics system.

Imagine being able to plug those older cards back in and have them instantly working in unison with the newer ones, giving you increased framerate.

I don't expect Nvidia or ATI to be happy about this... They might be forced to release their newest graphics cards at lower price points... That or sabotage the workings of their older products through drivers.

By vapore0n on 8/20/2008 12:38:28 PM , Rating: 2
They are after all, a patent and design holding company.
They need to sell their idea to a company that can produce.

Given that Intel is paying for the R&D, it would be cool if Intel would use this to boost their own video cards. God knows they do need it.

By Silver2k7 on 8/20/2008 1:30:42 PM , Rating: 2
It would be very cool if it was on the intel motherboards..

By bighairycamel on 8/20/2008 5:03:01 PM , Rating: 2
Given that Intel is paying for the R&D, it would be cool if Intel would use this to boost their own video cards. God knows they do need it.

Bah, that's the last thing I would want. Intel's problem is that they have a hard on for onboard graphics processing. Any enthusiast would rather have an expansion slot card for upgradability and increased performance. I would love to see NVidia or AMD/ATI take advantage of this, but time will only tell.

By noirsoft on 8/20/2008 5:40:05 PM , Rating: 3
Imagine, though, if instead of just shutting off the onboard GPU when you plug in an expansion card (or two or three) the onboard GPU becomes the controller for this process, maybe even contributing some to the rendering as well, giving you faster performance than with just the expansion card alone.

Theoretically, you would also just keep your monitor plugged into the onboard port unless you wanted to run dual-head. This could also tie in to hybrid graphics where the outboard card is powered off for low-intensity tasks while the onboard GPU is used.

This being integrated into the motherboard/onboard GPU strikes me as having the most exciting possibilities.

By danrien on 8/20/2008 12:46:06 PM , Rating: 2
In this case though, Lucid has received nearly $50m in VC funding from Intel, so it legitimizes the solution. Also, the technology has already been demonstrated, so it's obviously in (or close to) fighting form.

By FITCamaro on 8/20/2008 1:17:57 PM , Rating: 3
Why would they release performance stats when its not done? And I doubt Intel would devote $50 million dollars to a pipe dream.

And you have something against Israelis?

By MatthiasF on 8/20/2008 4:07:12 PM , Rating: 1
They have snapshots above of it working, why can't they release performance stats? Even benchmarks of development demo software would be fine.

Intel Capital throws around $1 billion USD yearly (2006 number off their website).

I have nothing against Israelis, why do you ask? Everyone who works for them is Israeli, so I mentioned it. If they were South Koreans, I would have said "Bunch of Koreans". If they were all from MIT, I would have said "Bunch of Losers". Sometimes I think you guys read into things too much.

By StevoLincolnite on 8/20/2008 6:55:30 PM , Rating: 4
Sometimes I think you guys read into things too much.

Well this is a "Blog" of sorts, filled with text, reading is all that we can really do, besides typing and clicking some links.

By kevinkreiser on 8/20/2008 1:52:43 PM , Rating: 2
this guy has a good point that i was also wondering about. how the hell can you just split the scene up between the two cards when you have to do things like alpha or AA or reflections? it just doesn't seem realistic because performing those computations on a single piece of the scene relies on knowing/having the surrounding geometry. It seems like complex scenes may be difficult to parallelize because of geometric interdependencies.

By FITCamaro on 8/20/2008 2:18:19 PM , Rating: 2
Yeah and as such, things that are dependent upon one another will be tasked to the same GPU. But you could draw a shadow independent of the ground. You just overlay one image onto another.

By MatthiasF on 8/20/2008 4:37:04 PM , Rating: 2
Unless you're creating a simple fill shadow, you're going to need almost every light source in the scene to make an accurate radiosity shadow look good.

Reflections are what make for realism, whether it's a realistic sheen to a polished marble floor, or reflection of simple sunlight on all the walls of a room (radiosity).

Games continue to push forward in providing this level of realism, so how are they going to deal with this evolution? Spend extra passes on melding their split images together to fix the lighting or use complicated alpha masks?

Seems like anything fancy they do to get it working would cut down on the efficiency more than the processes being utilized today.

Course, it could be something revolutionary but we're not seeing it yet. So far it's vague press releases and photos of a rig without any details of the hardware running, the game (is that Unreal?) or the performance stats.

By bighairycamel on 8/20/2008 5:05:16 PM , Rating: 2
And that's exactly the reason SLI / Crossfire was never all it was cracked up to be. 1 card still has to do 90% of the computations aside from processing the actual output.

By Targon on 8/23/2008 7:10:21 AM , Rating: 2
If you look at how DirectX works(with the applications going to the API, which then goes to the drivers for final rendering), it is possible that the DirectX calls are being intercepted and before anything goes to the video card, this new process would break down the rendering between cards.

The primary issue has always been how the rendering is processed. Crossfire has quite a few modes to it, including tiling, so the rendering isn't a frame at a time per card, but is just one piece. The size of what each card does would in theory depend on the processing power of the cards involved, so a Radeon X300 would only handle a small part of the full frame compared to a Radeon HD 4870. Since each pixel is processed by itself in most situations, the more pixel pipelines you have, the better.

We have seen the video cards with multiple GPUs on them, and they work well. When you go to multiple video cards, the primary issue is how you connect the two cards together. If you go through the PCI Express bus, you might expect a lot of issues, which is why you see a bridge that connects the video cards together, just to avoid that sort of problem. AMD could use HyperTransport with their bridge to do a very good job of linking multiple physical cards together without touching the PCI Express bus as a way to really take advantage of the potential here.
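The proportional split described in this comment could be sketched like so; the card names, weights, and tile counts are invented purely for illustration:

```python
# Toy sketch: divide one frame's tiles between cards in proportion to
# their relative power (weights are made-up illustrative numbers).

def allocate_tiles(total_tiles, relative_power):
    """relative_power: {card_name: weight}. Returns tiles per card,
    with any rounding remainder handed to the fastest card."""
    total_power = sum(relative_power.values())
    shares = {card: int(total_tiles * w / total_power)
              for card, w in relative_power.items()}
    remainder = total_tiles - sum(shares.values())
    fastest = max(relative_power, key=relative_power.get)
    shares[fastest] += remainder
    return shares

# A weak card paired with a strong one, echoing the X300 + HD 4870 example.
shares = allocate_tiles(100, {"x300": 1, "hd4870": 9})
```

In practice the weights would have to be measured or adapted at runtime, since the real bottleneck may shift between geometry, fill rate, and memory bandwidth from scene to scene.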

By Oregonian2 on 8/20/2008 1:33:42 PM , Rating: 2
While I am happy to be proved wrong, history tells us that unknown companies promising magical solutions generally don't deliver.

True. Most new restaurants fail, but all successful restaurants used to be an unknown new one at some point in history. Likewise, all the companies who do indeed deliver magical solutions used to be an unknown company at some point in history, including Intel.

So statistically, you're right -- but that doesn't mean that this company won't make it work.

Curiously, both nVidia and ATI (within AMD) are also fabless (AMD being fab-limited, I recall articles saying they'd not likely to be taking up fab of their ATI acquisition's parts).

By omnicronx on 8/20/2008 2:06:36 PM , Rating: 2
I am 100% on your side, especially now that video drivers have been moved back to user mode and out of the kernel in Windows, which is why I am guessing they specifically say Vista (and subsequently Windows 7) will probably not support multiple drivers at the same time. At least not in a normal setup.

By Some1ne on 8/20/2008 3:56:34 PM , Rating: 1
I agree. Personally, I call bullshit on their "virtually no CPU overhead and no latency" assertion. According to their own description, they are intercepting each individual DirectX API call, inspecting its parameters, and making a decision about what GPU to forward it off to. None of that comes free.

You've got to use CPU cycles to intercept and then fork the command stream, and you've got to use additional CPU cycles to inspect each command to determine which GPU it should go to, and in order to know which commands should be sent to which GPU, you've probably got to construct and maintain a hashtable or some such datastructure that maps each graphics object/primitive to the GPU responsible for rendering it, which implies frequent memory accesses (unless the entire table fits into cache memory), both to build/maintain the table, and also to perform lookups in it when determining how to route the individual API calls.
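The bookkeeping this comment describes -- a table mapping each object to its GPU, consulted on every API call -- might look like this in miniature. This is purely hypothetical; whether Lucid does this on the host CPU or in its own silicon is exactly what is in dispute:

```python
# Toy sketch of per-call routing: known objects go to their assigned GPU,
# new objects are assigned round-robin. Each call costs at least a lookup.

routing_table = {}   # object_id -> gpu index
next_gpu = 0

def route_call(object_id, num_gpus=2):
    """Route one draw call to a GPU, maintaining the assignment table."""
    global next_gpu
    if object_id not in routing_table:
        routing_table[object_id] = next_gpu
        next_gpu = (next_gpu + 1) % num_gpus
    return routing_table[object_id]

# Repeated objects ("floor", "wall") keep their original GPU assignment.
calls = [route_call(obj) for obj in ["floor", "wall", "floor", "hero", "wall"]]
```

Whether that per-call cost is "virtually no overhead" depends entirely on where the table lives and how fast the lookups are, which is the commenter's point.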

I think there's a reason this technology is being demoed in conjunction with Nehalem. It needs the new processor's integrated memory controller and improved performance in order to be feasible.

By Garreye on 8/20/2008 8:04:06 PM , Rating: 2
You do realize that this is hardware right? It can be designed to do all the things you listed within the chip without using the CPU.

I'm thinking the idea is to have the CPU issue the DirectX commands exactly as they are now, but to the Hydra chip instead of the GPU. That chip will split up the commands appropriately and in turn send them to the GPUs. It will then get the results from each GPU and tell one of the GPUs to merge the results and output them to the screen. This may not be exactly how it will work, but its entirely feasible to do this without any CPU overhead assuming the hardware is capable enough.

By myocardia on 8/21/2008 11:07:02 AM , Rating: 2
Personally, I call bullshit on their "virtually no CPU overhead and no latency" assertion.

You're right, it does require CPU power. That's why it has its own CPU. What they meant by "no CPU overhead" was that there is none on the host CPU.

Now, whether or not there is some latency involved, I guess we'll just have to wait and see. I can almost guarantee you that it won't have microstutter, at least, and that's a good thing.

By theapparition on 8/20/2008 10:37:58 PM , Rating: 2
Tiny companies like Microsoft, Apple, and Cisco all died out. I guess no one can actually make a start-up anymore.

Slight problem
By Saist on 8/20/2008 12:30:03 PM , Rating: 1
There's just one slight problem with this tech: it's built against DirectX. Unfortunately, developers and publishers are moving away from DirectX to cut multi-platform development costs, and many are coming to realize that DirectX is a multi-million dollar development mistake that only the largest publishers and developers can afford to make.

So, until these guys show this off working with OpenGL? It has no future in the games industry.

RE: Slight problem
By Icelight on 8/20/2008 12:32:39 PM , Rating: 2
Game developers are moving en masse to OpenGL?

If by game developers you mean one or two holdouts, then yes, you're absolutely correct.

RE: Slight problem
By markitect on 8/20/2008 12:50:20 PM , Rating: 1
If by game developers you mean one or two holdouts, then yes, you're absolutely correct.

If by holdouts you mean the growing number of developers that are developing for OSX, then yes.

RE: Slight problem
By Master Kenobi on 8/20/2008 2:41:14 PM , Rating: 2
Most developers porting to OSX are using a VM wrapper. The only ones I can think of that natively use OpenGL are Blizzard, Raven, and id.

RE: Slight problem
By FITCamaro on 8/20/2008 3:20:02 PM , Rating: 2
Doesn't the Unreal 3 engine have a native OpenGL side of it?

RE: Slight problem
By SilthDraeth on 8/20/2008 4:59:38 PM , Rating: 3
Those are the only developers that matter. In fact, Blizzard is the only one that matters, as far as PC games are concerned.

RE: Slight problem
By bighairycamel on 8/20/2008 5:06:54 PM , Rating: 3
What have you been smoking???

RE: Slight problem
By vapore0n on 8/21/2008 8:19:05 AM , Rating: 2

RE: Slight problem
By FITCamaro on 8/20/2008 1:22:01 PM , Rating: 2
I don't think this hinges around DirectX. They'd simply have to come up with a library that looks at the OpenGL calls. I don't think it would require them to start from scratch.

And plenty of game developers still use DirectX. Last I looked, OSX had a handful of games. And of the ones coming out soon, how many are available for it?

The only major cross-platform development happening is between the 360, PS3, and PC.

RE: Slight problem
By FITCamaro on 8/20/2008 2:19:12 PM , Rating: 2
And if you read the sneak peek, it says it works with OpenGL as well.

RE: Slight problem
By Penti on 8/21/2008 11:37:06 AM , Rating: 2
There are plenty of game engines that have both DX and OGL rendering paths, especially those available for the PS3 platform. It's just a matter of choosing the right game engine if you want to run your game across many platforms. If you buy an engine that works on your target platform, you don't need to port it. Porting would be needed if you bought a Windows OGL engine that you want to run on OSX, so there's more to it than which APIs you're using. But nobody wants to write OSX game engines yet, so most games are ported by a third party.

Couple of counterpoints
By Creig on 8/20/2008 12:16:15 PM , Rating: 2
One of the strongest points is that the engine is not reliant on specific graphics drivers. Thus graphics cards and drivers can come and go, but the engine will still work.

Yes, but it will still require drivers of its own.

The hardware/software may find its way into graphics setups in two ways.

So being "not reliant on specific graphics drivers" isn't really an improvement since it will be relying on its own set of drivers.

Also, even if this works as well as its marketing spiel would suggest, I would imagine that Nvidia would take steps to disable HYDRA from working on their video cards. They did it before with ULi, they'll do it again with HYDRA.

They are insanely protective of SLI and the extra cash they get from the sale of their SLI motherboard chips.

RE: Couple of counterpoints
By Spivonious on 8/20/2008 12:28:44 PM , Rating: 1
Hydra sits between the DirectX calls and the graphics driver. Call it a driver if you want to, but there's no hardware involved so it's more like just another library.

RE: Couple of counterpoints
By danrien on 8/20/2008 12:48:36 PM , Rating: 2
Er... take a look at Anand's sneak peek. There is definitely a chip involved - it sits on the motherboard and draws an incredibly small amount of power. It would be impossible to do this with 0% CPU usage without hardware involved somewhere.

RE: Couple of counterpoints
By AmbroseAthan on 8/20/2008 12:36:02 PM , Rating: 2
While I would agree this could be very hard for LucidLogix to pull off, they are backed by Intel (funding was provided by Intel Capital). With Intel coming out with a GPU product, and having already invested in the company, I think we are going to see Intel backing HYDRA, especially if HYDRA can be made to work with Intel's future GPUs.

There's not much better a place to start than with Intel in your corner, right before Intel hopes to start shipping GPUs of its own.

RE: Couple of counterpoints
By FITCamaro on 8/20/2008 1:15:18 PM , Rating: 2
While I agree with you about Nvidia's attitude on SLI, they'd only hurt themselves if ATI adopted it. It sounds like this gives far better performance gains than SLI or Crossfire. So if ATI allowed it while Nvidia didn't, ATI would have a performance advantage. Which not only hurts Nvidia's SLI chipset sales, but their GPU sales as well.

If this delivers on its promise, the only way it will fail is if both ATI and Nvidia blacklist it. Oh and Intel assuming Larrabee is worth anything.

RE: Couple of counterpoints
By 1078feba on 8/20/2008 2:32:52 PM , Rating: 2
Exactly right.

If Intel is backing it, expect them to at least end up with a controlling interest in the technology, if not outright ownership of Lucid. Couple that with Intel's totally dominant position WRT channel access, and you'd have to move your family into a cave in order to escape it. Think about it: within a couple of years, this could easily end up as a basic feature, like a USB port, on every single Intel board produced.

If this tech even comes close to linear scaling, say 80% minimum, and Nvidia & ATI tried to somehow disable it through their drivers, both companies would end up panhandling on the street corner within two years. With Intel owning/backing it, it would just become far too widely available to avoid. This would not be a repeat of the PhysX situation, with a tiny start up trying to elbow their way in. Call Lucid by whatever name you want, but essentially you're going into combat against Intel.

The key here is the scaling. Linear, or near linear, scaling completely and totally banishes SLI/Xfire to the land of obsolescence.

If I'm Nvidia, and I know that this tech is real, I'm shaking in my boots.

If I'm ATi, and I know that this tech is real, I'm wearily looking skyward and releasing a heavy sigh knowing that I will have to deal with Intel in BOTH of my primary markets.

RE: Couple of counterpoints
By FITCamaro on 8/20/2008 3:19:17 PM , Rating: 2
AMD would be relieved by this. It means they don't have to spend money on Crossfire development anymore. And unless Intel makes Nvidia and AMD pay to have their GPUs supported by this chip, it's completely free to them and independent of their drivers and cards. It's basically Intel paying to give users better performance on their cards.

Nvidia faces a possibly harsher road since they use their GPUs for physics now. So the question is how well does this mesh with that since the GPU is doing other tasks aside from graphics? I'm not a graphics developer so I don't know all the details.

By legoman666 on 8/20/08, Rating: 0
RE: Bah
By TemjinGold on 8/20/2008 12:31:10 PM , Rating: 1
Where did you get that idea? What makes you think you can have two graphics drivers running simultaneously in XP?

RE: Bah
By gamephile on 8/20/2008 12:54:14 PM , Rating: 2
I've done it before. I specifically recall using an old PCI Permedia2 with a TNT2 and later a first generation Radeon on XP. Unless I'm mistaken, these situations used two different video drivers simultaneously.

RE: Bah
By legoman666 on 8/20/2008 3:02:42 PM , Rating: 2
Because I've done it?
I've had a Voodoo 3 2000 and an ATI Radeon 7000 at the same time.
I've had an ATI 9700 Pro and a Voodoo 5 5500 at the same time.
And I've had an ATI 9700 Pro and a cheapo nVidia card at the same time.

RE: Bah
By Jack Ripoff on 8/20/2008 12:53:17 PM , Rating: 1
No it doesn't.

It does work in X.Org 7.3 under most Unix OSes though (I've tested under FreeBSD and Linux).

RE: Bah
By General Disturbance on 8/20/2008 1:44:17 PM , Rating: 1
Well, why would anyone WANT to combine ATI/NV anyway...that would just be...gross.
Vista can handle multiple cards of any generation(?) from either ATI/NV just fine though.

By uibo on 8/20/2008 11:33:26 AM , Rating: 2
Are these screenshots fake (just an artistic vision of how this system should work), or did they really hack an operating system?

RE: Screenshots
By Icelight on 8/20/2008 12:30:55 PM , Rating: 2
They might have just been using two video cards of the same brand, if it is a real picture.

RE: Screenshots
By FITCamaro on 8/20/2008 1:16:37 PM , Rating: 2
It was probably done on XP, which does allow more than one graphics driver to be loaded. But you don't have to hack the OS in Vista if you're using the same brand of card.

Something to add to the Windows 7 Wish List
By AlexWade on 8/20/2008 12:54:49 PM , Rating: 2
Since Windows 7 is still in early development, adding support for multiple GPU drivers is something Microsoft should put in the new version. That would be one way to get people to upgrade their OS.

RE: Something to add to the Windows 7 Wish List
By Silver2k7 on 8/20/2008 1:32:16 PM , Rating: 2
Or they could add a Vista patch before that. We all know how good MS is at keeping their release dates... I'm not holding my breath for Windows 7.

By Penti on 8/21/2008 11:41:55 AM , Rating: 2
You know, Windows 7 is still Windows NT 6.0, like Vista and Server 2008. So I wouldn't expect it to be anything other than another Windows ME.

By DASQ on 8/20/2008 11:07:14 AM , Rating: 2
Where and how much?

Also: I'm sure someone will program some magical driver that combines Catalyst and ForceWare. Magical, because nVidia's half will probably open some gaping maw into hell and spawn some hideous creature to devour our mortal world.

I've had problems with my 8800's... if you couldn't tell...

RE: so...
By NaughtyGeek on 8/20/2008 11:15:37 AM , Rating: 2
Magical, because nVidia's half will probably open some gaping maw into hell and spawn some hideous creature to devour our mortal world.

This already happened. AMD was released to devour ATI and it almost worked. Fortunately, ATI was able to beat back the evil forces unleashed upon it and turn it into a force for good. ;)

enter linux
By Screwballl on 8/20/2008 12:51:21 PM , Rating: 2
Time for them to offer the software and hardware as non-free and get it to allow DX on Linux without the need for Wine or Cedega...

we can dream...

RE: enter linux
By FITCamaro on 8/20/2008 1:24:11 PM , Rating: 2
I think the dream is the part where you saw this thing allowing DirectX anywhere. It merely interfaces with DirectX; it doesn't implement it.

By Tegrat on 8/20/2008 1:34:11 PM , Rating: 2
Lay's power company uses Tasers on owners of Chryslers who are
drunk on absolute vodka?

RE: Huh?
By FITCamaro on 8/20/2008 3:21:23 PM , Rating: 2
Did Chef Brian from Ctrl Alt Delete just come to life?

Correct me if I am wrong.....
By Darth Pingu on 8/20/2008 6:16:37 PM , Rating: 2
My graphics programming knowledge is quite limited so feel free to correct me on this.

This technology seems entirely plausible to me, as any graphics pipeline can essentially be broken into four parts: the vertex processor; the clipper and primitive assembler; the rasterizer; and the fragment processor. Why can't each job be batched to a different GPU?

Vertex Processor - Performs coordinate transformations and computes a colour for each vertex.

Clipping and Primitive Assembly - Determines what is actually in the field of view; objects too far away, behind, or off the sides of the screen are removed and not rendered.

Rasterization - Converts vertices to pixels inside the frame buffer.

Fragment Processing - Determines which pixels are located behind other pixels and modifies them accordingly, either removing them if the object in front is solid or modifying their colour if the object in front is translucent.

Now, I do realize that some of these processes must be completed in a specific order, but there is nothing saying they must be completed on one single card. Each process could theoretically be divided amongst any number of cards.
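The four stages named above can be sketched as chained functions - a toy illustration, not real rendering code. Each stage boundary is where work could, in principle, be handed between devices, though each stage consumes the previous one's output:

```python
# Toy version of the four pipeline stages, chained in order.

def vertex_stage(vertices):
    # Coordinate transformation per vertex (here: a trivial translate).
    return [(x + 1, y + 1) for x, y in vertices]

def clip_stage(vertices, width=4, height=4):
    # Discard vertices outside the view volume.
    return [(x, y) for x, y in vertices if 0 <= x < width and 0 <= y < height]

def rasterize_stage(vertices):
    # Convert surviving vertices to integer pixel positions.
    return [(int(x), int(y)) for x, y in vertices]

def fragment_stage(pixels):
    # Depth/colour resolution; here, just deduplicate overlapping pixels.
    return sorted(set(pixels))

verts = [(0.0, 0.0), (2.5, 1.0), (9.0, 9.0)]  # the last vertex gets clipped
pixels = fragment_stage(rasterize_stage(clip_stage(vertex_stage(verts))))
print(pixels)
```

Even in this toy form you can see the dependency the reply below points out: `clip_stage` cannot start until `vertex_stage` has finished, so splitting *stages* across GPUs serializes them rather than parallelizing the frame.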

By Lugaidster on 8/21/2008 11:07:45 AM , Rating: 2
Since the rendering process works in a pipeline-like way, each of those stages depends on the result of the previous stage. And since the GPU already does all those things (and more) in parallel, you don't gain anything by doing them on different GPUs. It's like having separate processing units for vertices and pixels: at any moment you're processing either vertices or pixels, so power is wasted, which is why it's that much better to have them unified.

Aside from that, I recommend reading the article at ExtremeTech, since they covered the technology pretty well (,2845,2328495... ). They say that Lucid Logix demoed a machine with two 9800 GTs (rebranded 8800 GTs) running Crysis at around 45-60 fps, so the tech works, and works well, since Crysis is IMHO the most visually demanding game out there.

Apparently they are sampling already, so an early 2009 launch seems plausible, but only time will tell.

Where's the IPO?
By furax on 8/20/2008 11:15:58 AM , Rating: 2
I'm chomping at the bit here.

Intels Contribution
By xxeonn on 8/20/2008 11:52:50 AM , Rating: 2
It seems to me that this company approached Intel and told them they had an SLI killer, and seeing that Intel does not currently have an SLI implementation on their boards, this would be a really good investment.

By Dfere on 8/20/2008 2:33:06 PM , Rating: 2
Wasn't there a video card and chip vendor, years ago, that had an alternative to the "brute force" rendering scheme - "tile-based" or some such? Smart technology, but it didn't help them either... Marketing muscle seems to win out in a lot of cases.

By James Wood Carter on 8/21/2008 5:23:20 PM , Rating: 2
If this is true that would be a relief. I never liked SLI - always felt inadequate when just using 1 card instead of 2.

By spluurfg on 8/22/2008 7:12:10 AM , Rating: 2
The groundbreaking technology is titled the HYDRA Engine. The accomplishment of the engine is nothing short of unbelievable to those who follow the graphics industry.

Even if this capability is never supported, though, if Lucid can merely live up to its claims, it will be a groundbreaking development in the graphics industry.

I hope the author's getting paid for waxing lyrical to this degree.

NEC Patents?
By rupaniii on 8/22/2008 9:42:25 AM , Rating: 2
Does anyone know if Hydra implements a license of the NEC SMP Patents?
