
Two chipsets, two graphics cores

DailyTech has received a July 2006 Intel specification update on the G965 and Q965 Express chipsets that reveals final details of Intel’s GMA X3000 and GMA 3000 integrated graphics cores. Intel originally intended for the G965 and Q965 Express chipsets to share the same graphics core but has since changed its mind. The result is two separate graphics cores: GMA X3000 for G965 Express and GMA 3000 for Q965 Express. As Intel is appealing to the mainstream consumer with G965 Express, the GMA X3000 graphics core will have greater graphics processing capabilities.

G965 Express with its GMA X3000 graphics core will support DirectX 9c, DirectX 10 and OpenGL 1.5 features. The supported features include:
  • Hardware vertex shader model 3.0
  • Hardware pixel shader model 3.0
  • 32-bit and 16-bit full precision floating point operations
  • Up to 8 multiple render targets
  • Occlusion query
  • 128-bit floating point texture formats
  • Bilinear, trilinear and anisotropic mipmap filtering
  • Shadow maps and double sided stencils
There’s no mention of the number of vertex or pixel shaders available on GMA X3000 graphics cores. However, the shaders are fully programmable and can be allocated as vertex or pixel shaders in varying proportions. As previously reported, the GMA X3000 will have a 667 MHz graphics core clock with support for high dynamic range and Intel Clear Video Technology for enhanced video playback.
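For developers who want to check which of these features a given driver actually exposes, the usual route is to query the Direct3D 9 device caps. The sketch below is our own illustration rather than Intel sample code; it assumes a Windows machine with the DirectX 9 SDK (link against d3d9.lib):

    // Minimal Direct3D 9 caps probe (illustrative sketch, not Intel sample code).
    // Prints the shader models and simultaneous render target count the driver
    // reports, which is where claims like "hardware shader model 3.0" and
    // "up to 8 render targets" would actually show up.
    #include <d3d9.h>
    #include <cstdio>

    int main() {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) { std::printf("Direct3D 9 not available\n"); return 1; }

        D3DCAPS9 caps;
        if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
            std::printf("GetDeviceCaps failed\n");
            d3d->Release();
            return 1;
        }

        // Shader versions are packed DWORDs; the low word holds major.minor.
        std::printf("Pixel shader:  %u.%u\n",
                    (unsigned)((caps.PixelShaderVersion >> 8) & 0xFF),
                    (unsigned)(caps.PixelShaderVersion & 0xFF));
        std::printf("Vertex shader: %u.%u\n",
                    (unsigned)((caps.VertexShaderVersion >> 8) & 0xFF),
                    (unsigned)(caps.VertexShaderVersion & 0xFF));
        std::printf("Simultaneous render targets: %u\n",
                    (unsigned)caps.NumSimultaneousRTs);

        d3d->Release();
        return 0;
    }

Note that these numbers reflect what the driver exposes, not necessarily dedicated hardware: a driver that runs vertex shading in software can still report a vertex shader version here.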

Q965 Express chipsets will receive a less powerful graphics core in the form of the GMA 3000. The GMA 3000 will meet the minimum requirements for Microsoft’s upcoming Windows Vista Premium with the Aero Glass interface, with support for DirectX 9c and OpenGL 1.4 plus. The following features are supported:
  • Software vertex shader model 2.0/3.0
  • Hardware pixel shader model 2.0
  • 32-bit and 16-bit fixed point operations
  • Up to 8 multiple render targets
  • Occlusion query
  • 128-bit floating point texture formats
  • Bilinear, trilinear and anisotropic mipmap filtering
  • Shadow maps and double sided stencils
With the exception of hardware pixel shader model 2.0 support, the GMA 3000 graphics core has specifications similar to those of the outgoing GMA 950 graphics core. Since Intel is catering Q965 Express chipsets to business users as part of its vPro platform initiative, it doesn’t exactly need gaming-capable graphics power.
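On the OpenGL side, the gap between the two cores (1.5 versus "1.4 plus") would surface in the version string the driver reports. The following sketch is again ours rather than vendor code; it assumes GLUT is available simply to create the context that glGetString requires:

    // Illustrative OpenGL version probe (our sketch, not vendor sample code).
    #include <GL/glut.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        // glGetString only returns valid data once a GL context exists,
        // so create a throwaway window first.
        glutInit(&argc, argv);
        glutCreateWindow("GL probe");

        std::printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
        std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
        std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
        // A "GDI Generic" renderer string would mean Microsoft's software
        // fallback is in use rather than the Intel driver.
        return 0;
    }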

Availability of G965 Express-based products is expected sometime this month, while Q965 Express-based products will have a formal launch in early September.


Comments

Comparison
By AaronAxvig on 8/5/2006 3:51:16 PM , Rating: 2
Any ideas on approximately what older NVidia or ATI card this is comparable to?




RE: Comparison
By Tsuwamono on 8/5/2006 4:33:00 PM , Rating: 1
Radeon 7000? lol, probably something higher, but I would still take an ATI or Nvidia chipset over an Intel one..


RE: Comparison
By obeseotron on 8/5/2006 5:52:35 PM , Rating: 3
It's probably not really comparable to any older nvidia or ati chip. The real graphics companies would never make a chip so simultaneously feature rich and lacking in performance (the FX5200 could be argued as an exception). This thing is probably just fast enough for Aero, nothing more. I'd be shocked if this thing matched a 6200 Turbocache in performance.


RE: Comparison
By coldpower27 on 8/5/2006 10:17:22 PM , Rating: 3
Assuming it can double the last generation GMA950 performance, it should be able to just reach Geforce 6200 TC-16 level performance.

http://www.anandtech.com/video/showdoc.aspx?i=2427...

I however expect the upcoming RS600 to be a more powerful solution.

As you can see from the link above, even with only 2 pipelines vs 4, the Xpress 200 IGP is at worst on par with and at best twice as fast as Intel's integrated solution. I don't expect this to change with the X700-based IGP.


RE: Comparison
By ltcommanderdata on 8/5/2006 10:42:54 PM , Rating: 3
I'm actually thinking that with proper driver support the GMA X3000 could easily be the fastest IGP. Based on the diagrams Intel has provided, indications point to the GMA X3000 having 8 unified shaders. This makes sense intuitively too, since the extra room from the 130nm to 90nm transition could be used to double the shaders from the 4PS in the GMA950. Now both ATI's X700-based IGP and nVidia's 7300-based IGP look to have 4PS + 2VS. In contrast, the GMA X3000 has 8 unified shaders. Also, it's clocked at 667MHz, which is very aggressive, although I haven't heard how high ATI and nVidia are clocking their solutions. In any case, from a hardware perspective, Intel's solution is definitely superior.

All it needs is Intel to take full advantage of it with drivers. If done properly, the GMA X3000 should be able to compete with the X1300 HM. The X1300 HM is only clocked at 400MHz, has 4PS, 2VS, and has 6.4GB/s of memory bandwidth. In comparison the GMA X3000 is clocked at 667MHz, has 8 unified shaders, and has 10.7GB/s of bandwidth with dual channel DDR2 667 memory. Granted, its bandwidth is shared, but it has a lot more available to begin with than the X1300 HM. The GMA X3000 can also take advantage of Fast Memory Access in the G965, and its shaders are also multithreaded to keep processing in the event that a thread stalls (probably a development of Hyperthreading). These features should help offset the disadvantages of shared memory. I can also see the GMA X3000 taking on the vanilla X1300, which is only clocked slightly higher than the X1300 HM at 450MHz and has 8GB/s of memory bandwidth. Again it all depends on whether the drivers realize the potential of the GMA X3000, but the potential is certainly there.


RE: Comparison
By coldpower27 on 8/5/2006 11:43:31 PM , Rating: 3
Yes, just as how from a hardware perspective the GMA950 is more powerful than the Xpress 200 IGP, with the 4PS vs 2PS argument there.

It doesn't matter which one is quicker from a theoretical standpoint; what matters is how the solution performs in reality, not in theory. Judging from Intel's past attempts at IGP, performance has been anemic at best in comparison to Nvidia/ATI based solutions.

I'm certainly hoping this next attempt is improved. There are two big IFs here: IF Intel can get its IGP driver team together and IF they have increased the IPC of their pipelines to the levels of the Nvidia/ATI counterparts, then Intel has a fighting chance. If not, it's doubtful performance will be very high.

Well, it depends what you will be comparing these solutions to. The 7300 LE and X1300 HM are adequate comparisons; from a theoretical standpoint the X3000 should be quicker, but I expect the X1300 HM and 7300 LE to win out in the end.

The 7300 LE has a tad more memory bandwidth than the X1300 HM, with 667MHz vs the 500MHz utilized on the X1300 HM, though both have 64-bit memory interfaces.

Your specs aren't entirely correct.

X1300 HM
450MHz Core
500MHz Memory on 64-bit = 4.0GB/s of Isolated Memory Bandwidth

And the 7300 LE has access to 5.3GB/s of Isolated Memory Bandwidth.

Also, you've got to keep in mind the G965 chipset will have access to dual channel DDR2-800 for 12.8GB/s of shared bandwidth between the processor and other peripherals.

Both the 7300 LE and X1300 HM have their own isolated memory buffers of 128MB capacity, in addition to being able to utilize some system memory, while the X3000 has to be completely reliant on system memory alone. Comparing shared bandwidth vs isolated bandwidth is hardly apples to apples; it doesn't matter if there is a lot more of it if most of it is meant for the processor itself.
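For reference, here is the arithmetic behind the figures being thrown around in this thread (a quick sketch of my own; the inputs are the effective data rates quoted above, and the gbps helper is just for illustration):

    // bandwidth (GB/s) = effective transfer rate (MT/s) x bus width (bytes) / 1000
    #include <cstdio>

    static double gbps(double mts, int bus_bits) {
        return mts * (bus_bits / 8) / 1000.0;  // transfers/sec * bytes per transfer
    }

    int main() {
        std::printf("X1300 HM, 500 MT/s x 64-bit:     %.1f GB/s\n", gbps(500, 64));   // 4.0
        std::printf("7300 LE, 667 MT/s x 64-bit:      %.1f GB/s\n", gbps(667, 64));   // 5.3
        std::printf("Dual channel DDR2-667 (128-bit): %.1f GB/s\n", gbps(667, 128));  // 10.7
        std::printf("Dual channel DDR2-800 (128-bit): %.1f GB/s\n", gbps(800, 128));  // 12.8
        return 0;
    }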

You're doing a lot of speculating yourself. It is very unlikely for IGP solutions to reach the performance levels of even the current generation's lowest end discrete cards; if they do, that would be quite a monumental feat in itself. I am expecting Geforce 6200 TC-16 level performance as a starting point; any higher is basically icing on the cake. Look at the Xpress 200 vs the X300 SE, which is basically the lowest end possible R3xx-derived card to my knowledge; the X300 SE is undoubtedly superior.

The X3000 certainly has a lot of potential, but whether or not Intel executes on this particular IGP solution is another story. As well, we don't know the actual pipeline configuration of the X3000's 8 shader units; there isn't enough information without the TMUs as well.


RE: Comparison
By ltcommanderdata on 8/6/2006 3:22:25 AM , Rating: 3
quote:
Yes, just as how from a hardware perspective the GMA950 is more powerful than the Xpress 200 IGP, with the 4PS vs 2PS argument there.

This circumstance was slightly different though, since the GMA950 didn't have any hardware VS or T&L, which severely limited its performance (to say the least) even if it had 4PS to the Xpress 200's 2PS + 1VS.

http://www.digit-life.com/articles2/mainboard/ati-...

The GMA950 actually does surprisingly well in Doom 3 and UT2004, although the other games are disappointing compared to the Xpress 200. I guess it's a matter of game dependence on VS, which the GMA950 lacks, and on driver support.

quote:
Your specs aren't entirely correct.

X1300 HM
450MHz Core
500MHz Memory on 64-bit = 4.0GB/s of Isolated Memory Bandwidth

You're right, my specs were a bit off. I quoted a 400MHz core for the X1300 HM while it is 450MHz; however, the standard RAM speed is 400MHz x2 on a 64-bit interface for 6.4GB/s of bandwidth.

http://www.beyond3d.com/reviews/powercolor/x1300/i...

Various manufacturers change their RAM speeds though, so your numbers might be right for the cards you were looking at. nVidia gives their manufacturers even more flexibility, and I've even seen models with 32-bit memory interfaces. Generally the 7300 LEs are all over the place, so it's hard to determine.

Anyways, in terms of architecture the GMA X3000 is designed around a shared memory environment, so it isn't affected as much. The GMA X3000 uses PowerVR SGX technology and tiling, which isn't as dependent on memory bandwidth, and probably includes larger internal buffers to offset that. In contrast, the RV515 and the G72 are designed for larger amounts of memory bandwidth, so they are unlikely to have the larger buffers to compensate when that bandwidth is taken away. Interestingly, the PowerVR SGX architecture supports usage of HDR and antialiasing at the same time like ATI, but it'd probably be too slow to use.

In any case, you're right that we'll have to wait for the actual products and finalized drivers. Still, I am hopeful since Intel looks to be taking graphics seriously for once. This is the product/architecture that AMD and ATI went to all the trouble to merge over, after all.

The GMA X3000 White Paper is here if anyone wants it:
http://download.intel.com/design/chipsets/applnots...

(On the TMU note, I'm going to bet on 4 of them. I can't believe that Intel would include fewer than the GMA 950. 8 unified shaders and 4 TMUs seems about right.)


RE: Comparison
By IntelUser2000 on 8/6/2006 6:59:44 PM , Rating: 2
quote:
This circumstance was slightly different though, since the GMA950 didn't have any hardware VS or T&L, which severely limited its performance (to say the least) even if it had 4PS to the Xpress 200's 2PS + 1VS.


quote:
The GMA X3000 uses PowerVR SGX technology and tiling, which isn't as dependent on memory bandwidth, and probably includes larger internal buffers to offset that.


Although the tiling approach seems similar to PowerVR-based architectures, I doubt it's based on the latest SGX. The SGX-derived ones are supposed to be in Intel's Stanwood, which is the successor to the 2700G multimedia accelerator used in PDAs. Stanwood is now cancelled, however. It was also rumored that the next generation graphics core from Intel for PDAs features shader technology (sounds like SGX). PowerVR mentions:

"Maximum effective pixel fillrate performance from 200Mpix/sec to 1200Mpix/sec @ 200MHz with even higher Z and stencil fill rate and polygon throughput from 2Mpoly/sec to 13.5Mpoly/sec @ 200MHz. Performance depends on core and configuration selected."

I highly doubt it's the same thing.


RE: Comparison
By ltcommanderdata on 8/6/2006 9:11:53 PM , Rating: 2
quote:
Although the tiling approach seems similar to PowerVR-based architectures, I doubt it's based on the latest SGX. The SGX-derived ones are supposed to be in Intel's Stanwood, which is the successor to the 2700G multimedia accelerator used in PDAs. Stanwood is now cancelled, however. It was also rumored that the next generation graphics core from Intel for PDAs features shader technology (sounds like SGX).

It makes sense that the GMA X3000 uses PowerVR SGX technology since Intel already has a license for it. The GMA 950 and before all used Zone Rendering, which is an implementation of PowerVR's earlier tiling architectures. PowerVR SGX specifically adds unified shader support, and this is exactly what the GMA X3000 adds over the GMA 950, so it can't just be a coincidence.

The Inquirer also reports that the G965 uses PowerVR SGX technology:
http://www.theinquirer.net/default.aspx?article=33...

Obviously it's an "Intelized" implementation, but it makes sense that PowerVR SGX is used since I can't see Intel spending the resources to develop a completely new architecture from the ground up when they already have a license to a perfectly good one.


RE: Comparison
By IntelUser2000 on 8/6/2006 7:03:59 PM , Rating: 2
quote:
This circumstance was slightly different though, since the GMA950 didn't have any hardware VS or T&L, which severely limited its performance (to say the least) even if it had 4PS to the Xpress 200's 2PS + 1VS.


Forgot to reply to the above.

According to Sudhian, the Geforce 2 MX/Radeon 64 T&L cores are actually slightly less powerful than an Athlon XP 2500+ CPU. In order for the GMA X3000's hardware T&L to be substantially better than the GMA950's software T&L, it will have to beat a Pentium D 960 running at 3.6GHz. The counter-argument may be that the CPU is also doing other tasks and the dedicated hardware T&L in the X3000 is faster, but I doubt it'll be better on the GMA X3000 if the hardware T&L fails to outperform the Pentium D 960.


RE: Comparison
By coldpower27 on 8/7/2006 12:40:20 AM , Rating: 2
quote:
This circumstance was slightly different though, since the GMA950 didn't have any hardware VS or T&L, which severely limited its performance (to say the least) even if it had 4PS to the Xpress 200's 2PS + 1VS.

http://techreport.com/reviews/2004q4/radeon-xpress...

To be fair, the Xpress 200 IGP only seems to offload transform and lighting; the rest of the vertex shader functions are handled by the system processor. So it's not a full 1 VS unit.

I would hardly say that the 7300 GS or X1300 Pro are designed for large amounts of bandwidth. Certainly not G72, as its highest part only has ~6.4GB/s or so. Maybe RV515 a tad, but these are low end cards; they are designed to do as well as possible without bandwidth. I dunno, I was going by the 7300 LEs on Newegg, and it seems they are still on 64-bit interfaces with either 533MHz/667MHz, which isn't too much of a drop from the 7300 GS's official 810MHz.

Regarding ATI/AMD, it remains to be seen if they can give us their unified platform, so to speak, in 2008; I welcome their attempts on the issue.

Like I said, I am not giving Intel the benefit of the doubt, given how poorly their past 2 attempts have performed.





Questions
By ltcommanderdata on 8/5/2006 8:56:31 PM , Rating: 2
quote:
With the exception of hardware pixel shader model 2.0 support, the GMA 3000 graphics core has specifications similar to those of the outgoing GMA 950 graphics core.

I believe that the GMA950 had 4 hardware PS2.0 units, so the only difference between the GMA3000 and the GMA950 is the clock speed, from 400MHz to 667MHz.

Maybe you can answer something for me. If the 965 series uses unified shaders, what's the point of locking the shaders in PS mode for the Q965 and Q963? I don't see it offering any benefits to the user or Intel, just disadvantages. For instance, if there were 8 unified shaders, having them as 8 PS doesn't increase yields or reduce power or heat compared to allowing dynamic shader allocation. All it does is slow down the system.

The second question I have is that the 946GZ uses a GMA3000 at 667MHz too. The only difference with the GMA950 is the higher clock speed because of the 90nm die shrink. So what's the difference between the GMA3000 in the 946GZ (a die shrink of the 945G) and the GMA3000 in the Q965 and Q963? I was thinking that maybe they are the same, but that makes no sense. That would mean there are actually 2 completely different chips in the 965 series with integrated graphics. I can't see Intel spending the resources to develop 2 NBs with IGPs in parallel instead of their usual approach of designing 1 and cutting features to make the others. The Q965 has more in common with the G965 than the 946GZ too, since the Q965 has FMA and all the other new features of the G965. This means that the Q965 isn't just a relabeled 946GZ but a separate development. It's all very weird.

BTW, you're a little behind on this story. HKEPC reported this on July 27th.

http://www.hkepc.com/bbs/itnews.php?tid=638462&sta...

Not pointing fingers or anything, just pointing this out.

Also, The Inquirer has reported on the GMAX3000's current poor performance, but that's because the new drivers aren't ready yet. Right now they are using 14.21, but hardware PS3.0 and T&L aren't added until 14.24, and hardware VS2.0 isn't added until 14.26. Hopefully Intel actually spends time on these drivers since that's usually their weak point, and the GMAX3000 certainly has a lot of potential. If it ships in August it'll be the first DX10 chip, beating the G80, and the first PC chip with unified shaders, beating the R600.




RE: Questions
By Xavian on 8/5/2006 10:17:30 PM , Rating: 2
Yes, but of course it'll come nowhere near the performance of the G80 and R600 cards. Intel graphics have always been ones for tons of features but not performance, and honestly I can't see that changing at all.


RE: Questions
By Phynaz on 8/7/2006 10:09:35 AM , Rating: 2
A $30 integrated chipset will not come anywhere near the performance of a $600 dedicated GPU?

People, we have a new master of the obvious here.


RE: Questions
By defter on 8/6/2006 10:09:58 AM , Rating: 2
quote:
If it ships in August it'll be the first DX10 chip


Check the DailyTech summary. The GMAX3000 is a DX9 chip (Shader Model 3.0 only).


RE: Questions
By Warren21 on 8/6/2006 1:25:04 PM , Rating: 2
You don't need SM 4.0 to be a DX10 chip. SM4.0 is simply a shader model spec that is being released with it. It says in the article "supports DX9c, DX10 and SM3.0"

Other cards like the GeForce 6 series don't "support DX10 like every card released in the past few years" either. They are DX9. No card released to this day supports DX10. Though some DX10 games will have a roll-back feature to play on DX9 cards, the games then run in DX9 mode and are no longer DX10 games. The next-gen games supporting DX10 can run on older DX9 cards in DX9, DX10 doesn't run on DX9 hardware. Get it straight.


RE: Questions
By Warren21 on 8/6/2006 1:25:47 PM , Rating: 1
*and Open GL 1.5"


RE: Questions
By ltcommanderdata on 8/6/2006 2:39:38 PM , Rating: 2
Yeah, I replied to a similar comment above.

http://www.hkepc.com/bbs/itnews.php?tid=638462&sta...

quote:
But our sources have some more details. Besides these, Direct X10 and Shader Model 4.0 are also supported for G965. The G965 Direct X10 driver will be ready as the same time as Direct X10 API for Vista released.

Essentially, the unified shaders already support SM4.0 and DX10, but just require proper drivers and a BIOS update to activate those features. The unified shaders can easily support GS mode in addition to PS and VS. Intel's just waiting for Vista to be released before they activate DX10 although they'll probably have something for the Vista Betas and RCs before that.


RE: Questions
By defter on 8/7/2006 5:42:10 AM , Rating: 2
quote:
You don't need SM 4.0 to be a DX10 chip. SM4.0 is simply a shader model spec that is being released with it. It says in the article "supports DX9c, DX10 and SM3.0"

Other cards like the GeForce 6 series don't "support DX10 like every card released in the past few years" either. They are DX9. No card released to this day supports DX10. Though some DX10 games will have a roll-back feature to play on DX9 cards, the games then run in DX9 mode and are no longer DX10 games. The next-gen games supporting DX10 can run on older DX9 cards in DX9, DX10 doesn't run on DX9 hardware. Get it straight.


You are very confused:
- all modern cards will have DX10 drivers, meaning that they will work with DX10 games, just like my GeForce3 works with DX9 games.
- SM 4.0 is a new feature of DX10. Thus, if we are talking about a chip that supports all DX10 features, SM 4.0 is a must-have feature.

If the GMAX3000 has only SM 3.0, just like the GeForce 6xxx, then why do you think the GMAX3000 is a "DX10 chip" while the GeForce 6800 is a "DX9 chip"?


RE: Questions
By ltcommanderdata on 8/7/2006 5:52:32 PM , Rating: 2
Correct me if I'm wrong, but I thought that DirectX 10 was a complete departure from DirectX 9 and before. For one, DX10 doesn't offer any backwards compatibility. The reason is that DX10 no longer offers support for capability bits, which means that graphics cards can no longer tell games what features they support and so what features the game can enable. This was the primary reason why the GMA950 could claim support for DX9, since it could tell the game which features it can't run. Instead, Microsoft will define a complete supported feature set in each version/update of DX10. In order for you to claim DX10 compatibility you have to support all the features Microsoft defines. There can be no half-assed DX10 graphics card implementation; it's either all or nothing. This leads me to believe that when Intel says DX10 support it must be full support, because there is no partial support. This is the same reason why you don't see ATI and nVidia claiming any of their current graphics cards support DX10 if it's simply a driver thing. If it were just a driver issue then ATI and nVidia would be all over it already in their marketing.


RE: Questions
By IntelUser2000 on 8/6/2006 6:41:42 PM , Rating: 1
quote:
With the exception of hardware pixel shader model 2.0 support, the GMA 3000 graphics core has specifications similar to those of the outgoing GMA 950 graphics core.


Please fix the errors, DailyTech editor. The amount of misinformation spread on DailyTech is so much it's not even funny. The GMA 950 DOES support hardware Pixel Shader Model 2.0. The only difference between the GMA 950 and GMA 3000 is that the GMA 3000 MIGHT have a higher clock rate. All else is the same according to the current information provided to the public.

To all those wondering: the 946GZ and Q965 both use the GMA 3000, so they are the same graphics core.

DirectX 10 and PS 10/VS 10 are supported. It's reported that they will be supported when the DX10 specifications are finalized (and probably available to download).


bad news for gaming
By LumbergTech on 8/5/2006 8:48:14 PM , Rating: 1
Once again Intel is squashing the PC platform as a gaming machine by releasing old technologies and pretending they are new. People are always going to buy these cheaper computers (the weaker of the Intel chipsets), or at least they will be uninformed about the differences.




RE: bad news for gaming
By ltcommanderdata on 8/5/2006 8:58:10 PM , Rating: 2
Are you telling me that unified shaders are an old technology? Point out a PC graphics card on the market right now that has that technology. You can also point out a chip on the market right now that is DirectX 10 ready like the GMAX3000 is.


RE: bad news for gaming
By fxnick on 8/5/2006 11:33:52 PM , Rating: 2
It was just too much for them to make it OpenGL 2 compatible... what video card doesn't have that?


RE: bad news for gaming
By bersl2 on 8/5/2006 11:51:53 PM , Rating: 2
You know, you [i]can[/i] implement OpenGL features in software. No, it doesn't help with games, but software fallback is better than nothing.


RE: bad news for gaming
By defter on 8/6/2006 6:50:06 AM , Rating: 3
quote:
You can also point out a chip on the market right now that is DirectX 10 ready like the GMAX3000 is.


The GMAX3000 doesn't support Shader Model 4.0, which is introduced with DirectX 10. While it certainly works with DirectX 10 (just like any other chip released in the last few years), it only supports Shader Model 3.0, like the over two year old GeForce 6xxx series.


RE: bad news for gaming
By ltcommanderdata on 8/6/2006 2:33:57 PM , Rating: 3
quote:
The GMAX3000 doesn't support Shader Model 4.0, which is introduced with DirectX 10. While it certainly works with DirectX 10 (just like any other chip released in the last few years), it only supports Shader Model 3.0, like the over two year old GeForce 6xxx series.

Since you seem uninformed, I'll point you to an article that more explicitly states DirectX 10 support.

http://www.hkepc.com/bbs/itnews.php?tid=638462&sta...

quote:
But our sources have some more details. Besides these, Direct X10 and Shader Model 4.0 are also supported for G965. The G965 Direct X10 driver will be ready as the same time as Direct X10 API for Vista released.

The GMA X3000 was designed with DX10 in mind, which is why it was designed with flexible unified shaders. Intel will be releasing new drivers and possibly a BIOS update for the GMA X3000 which will activate SM4.0 support. The unified shaders currently only operate in PS and VS mode (actually only PS mode with the current v14.21 drivers, but the shipping driver v14.26 will have VS and T&L activated), but with proper driver support the unified shaders can easily support PS, VS, and GS modes. So yes, the GMA X3000 is DX10 ready; it just needs the proper drivers. And since Vista isn't going to be released anytime soon, the fact that drivers aren't available right now to activate those features isn't such a big deal.


RE: bad news for gaming
By defter on 8/7/2006 5:47:46 AM , Rating: 2
quote:
Since you seem uninformed, I'll point you to an article that more explicitly states DirectX 10 support.


I trust Intel's official data more than some rumours:
http://www.intel.com/products/chipsets/G965/index....
"Intel® Graphics Media Accelerator 3000

3D enhancements enable greater game compatibility with support for Hardware T&L, and improved realism with support for Microsoft DirectX* 9.0c Shader Model 3.0, OpenGL* 1.5, and floating point operations. Intel graphics technology also support the highest levels of the Microsoft Vista* Aero experience."

And:
http://www.intel.com/products/chipsets/gma3000/gma...

It doesn't mention SM 4.0 anywhere.


Important to note
By hstewarth on 8/6/2006 6:22:02 PM , Rating: 2
I think before people jump to conclusions it is very important to note the following quote from the article.

quote:
Since Intel is catering Q965 Express chipsets to business users as part of its vPro platform initiative, it doesn’t exactly need gaming-capable graphics power.


This means to me that systems with these chipsets on them are not designed to compete with high end gaming systems ( Intel or AMD or anything else ).

I use integrated GPUs on my systems - just in case my video card goes wrong, I can still boot the system and do something with the files if necessary.




RE: Important to note
By hstewarth on 8/6/2006 6:23:58 PM , Rating: 2
( Intel or AMD or anything else )

should have been

( NVidia or ATI/AMD or anything else )

I guess this AMD/ATI thing is really confusing the situation :)


Race to DX10?
By AggressorPrime on 8/6/2006 6:29:50 PM , Rating: 1
So who will win the race to DX10: Intel or nVidia? Hopefully nVidia can launch their G80 before Intel launches this or it might look a little bad.

Noob says, "My Intel on-board graphics system can play DX10 games while your $1100 Quad SLI system can't. :P"




RE: Race to DX10?
By ltcommanderdata on 8/6/2006 9:17:43 PM , Rating: 2
quote:
So who will win the race to DX10: Intel or nVidia? Hopefully nVidia can launch their G80 before Intel launches this or it might look a little bad.

Intel should win the race since their launch window for the G965 is August/September.

http://www.hkepc.com/bbs/itnews.php?tid=610998

quote:
The next generation of Intel G965 graphic chipset has been officially named to Intel Graphics Media Accelerator X3000, expected to be released between 14th Aug to 18th Sept.

The G965 is already listed on Intel's website.

http://www.theinquirer.net/default.aspx?article=32...

The G80, by contrast, is scheduled for September/October. nVidia will probably wait for ATI's refresh with the R580+, RV570, etc., which is expected to launch late August/early September, and then launch their superior products after.


Thanks for clearing that up
By sh3rules on 8/5/2006 4:53:42 PM , Rating: 2
quote:
Since Intel is catering Q965 Express chipsets to business users as part of its vPro platform initiative, it doesn’t exactly need gaming-capable graphics power.


Wouldn't want anyone to think those cores could handle a game.




you'll need it...
By judasmachine on 8/7/2006 5:50:51 AM , Rating: 2
to run Vista with Aero Glass on an off-the-shelf model.




By Targon on 8/7/2006 8:01:57 AM , Rating: 2
There is a big difference between DX 10 support (meaning the drivers will work with DX 10) and having hardware acceleration of DX 10 features. This is something that many people don't seem to realize has been going on with Intel, ATI, NVIDIA, and SiS graphics for a long time now.

Any company can make new drivers for old cards that work with DirectX 10; that's not the problem. The issue really comes down to providing either hardware support or software support for the new DX 10 functions.

In most games, for example, you can run them on OLD video cards with DirectX 9 installed and the latest drivers. That doesn't mean that the cards will accelerate DirectX 9 code, so the performance is VERY poor. Anything lower than a Radeon 9500 from ATI or a Geforce FX 5200 won't have DirectX 9 support in hardware, and as a result, anything that uses DirectX 9 functions will be VERY slow. In many cases, games and applications also test the hardware features and will disable certain features that the video cards just can't handle (which is why some features are disabled and can't be turned on).

So, just because Intel is going to provide software support for DX 10 doesn't mean that their new GPU will have DX 10 acceleration in hardware.

Now, it could be said that if ATI and NVIDIA would invest the time in drivers, we could see DX 10 feature support for most older cards by having those features supported in the drivers to compensate for not having them in the hardware, but that would give many people an excuse for not buying new video cards. A Radeon 9800 Pro may not have unified shaders in hardware, for example, but properly coded drivers could fool the software into thinking the video card supports them. SM 4.0 support could be added to older cards in the same way, though performance would be very poor.

By having the drivers do more on older video cards, a top of the line card from 2 years ago might slow down a lot on DX 10 stuff it doesn't accelerate in hardware, but at the same time, it would still be at least as fast as the bottom of the line DX 10 accelerated cards we will see released in November/December and may still provide decent support for the other functions.
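To make that concrete, here is an illustrative sketch (not from any shipping game) of that kind of feature test: Direct3D 9 lets an application ask whether a format/usage combination is supported before enabling the code path that needs it.

    // Ask D3D9 whether the HAL device can render to a 16-bit float surface
    // (a typical gate for an HDR code path) and fall back if it can't.
    #include <d3d9.h>
    #include <cstdio>

    int main() {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return 1;

        HRESULT hr = d3d->CheckDeviceFormat(
            D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
            D3DFMT_X8R8G8B8,        // current display mode format
            D3DUSAGE_RENDERTARGET,  // we want to render into it
            D3DRTYPE_TEXTURE,
            D3DFMT_A16B16G16R16F);  // 64-bit floating point texture format

        std::printf("HDR render target: %s\n",
                    SUCCEEDED(hr) ? "supported" : "not supported, feature disabled");
        d3d->Release();
        return 0;
    }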




boring
By Jkm3141 on 8/7/2006 6:17:35 PM , Rating: 2
As always, it will take a lot to get me excited about an integrated GPU.




Poor NVidia
By phatboye on 8/5/06, Rating: -1
RE: Poor NVidia
By unfalliblekrutch on 8/5/2006 3:40:29 PM , Rating: 2
Intel has most of the integrated market anyway; I doubt nVidia will lose too much market share.


RE: Poor NVidia
By ksherman on 8/5/2006 3:51:04 PM , Rating: 2
Indeed. I am really glad Intel is putting more and more into their integrated graphics, as it appears in such a huge majority of laptops these days. Took long enough, though...


RE: Poor NVidia
By Cybercat on 8/6/2006 5:42:27 AM , Rating: 2
I think this can be attributed to the upcoming release of Vista. Once Intel has an integrated solution that can run all of Vista's features, it's more than likely they'll be satisfied.


RE: Poor NVidia
By hstewarth on 8/6/2006 6:14:38 PM , Rating: 2
What is up with "Poor NVidia"? This probably doesn't really hurt NVidia as much as it hurts ATI. On the integrated GPU side, ATI had more integrated chipsets.

But I don't think NVidia or ATI care much about these lower end chipsets - especially now that ATI is part of AMD. NVidia is more concentrated on the higher end.

Also, NVidia and Intel are in the middle of discussions on including SLI on Intel chipsets.

I also thought that Intel was doing something with PowerVR chips - I don't think these are them.


RE: Poor NVidia
By Josh7289 on 8/5/2006 3:50:06 PM , Rating: 2
What does that mean?