



Brilliant game developer says Microsoft cloud computing does not appear to give the Xbox One a significant edge

Gaming legend and id Software Technical Director/co-founder John Carmack offered up his thoughts at his QuakeCon 2013 keynote on Microsoft Corp.'s (MSFT) Xbox One and Sony Corp.'s (TYO:6758) PlayStation 4, which will be waging war this holiday season for dominance in the eighth generation of home video game consoles.

He led off by saying that he hadn't done enough head-to-head benchmarks to make direct comparisons between the consoles.  But based on his experience (Mr. Carmack remains actively involved in programming top id Software titles and pioneering new graphics techniques), he believes they're "very close" in capabilities and that "they're both very good".

In some ways this comparison is bad news for Microsoft, as it calls into question the company's claims that its console is five times as powerful when connected to the cloud as when processing offline (if anyone could fully leverage the Xbox One's potential, it would likely be Mr. Carmack).


Also bad news for Microsoft is Mr. Carmack's dour assessment of the Kinect sensor.  Aside from concerns regarding "always on" U.S. National Security Agency (NSA) spying (Microsoft reportedly has a major voluntary data-sharing agreement with the NSA), Mr. Carmack offers criticism of the controls themselves, stating, "[The Kinect 2 is] kind of like a zero-button mouse with a lot of latency on it."

The PS4 appears to enjoy a moderate lead in preorders over the Xbox One.

You can watch Mr. Carmack's full keynote below, via YouTube:


For the uninitiated who may ask why anyone should listen to Mr. Carmack: he coded a hardware-optimized build of Wolfenstein 3D for the iPhone in four days, when it was estimated that a full team of programmers would need two months to perform a basic (unoptimized) port.

Source: Kotaku



Comments



RE: Not actually true, though, is it?
By Mint on 8/3/2013 12:41:19 PM , Rating: 4
Textures aren't a major source of bandwidth consumption, though, because they can use lossy compression and more often than not undergo magnification (spreading texel bits over several pixels). The framebuffer can only use lossless compression, and needs far more data per pixel, especially during alpha blending.

That's why eDRAM/SRAM can make such a big impact. 1080p, 2xAA, R9G9B9E5 HDR can fit into 32MB, so the most frequently accessed data stays in high-speed memory. Many games are doing deferred rendering with post-processed AA, and that allows 1080p with 12 bytes per pixel (on top of Z) entirely in the eSRAM.

And in case anyone is going to mention 4k support, you'd have to really dumb down the graphics load per pixel for these GPUs to handle that niche market, so it's a meaningless point.
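[For anyone who wants to sanity-check the 32MB figure above, here is a minimal back-of-the-envelope sketch in Python. It assumes 4 bytes per sample for both the R9G9B9E5 color buffer and the depth buffer and no render-target compression; the numbers are illustrative, not measurements.]

MiB = 1024 * 1024
ESRAM_BYTES = 32 * MiB  # Xbox One eSRAM capacity

def framebuffer_bytes(width, height, samples, bytes_per_sample):
    # Raw footprint of one render target at the given MSAA sample count.
    return width * height * samples * bytes_per_sample

# 1080p, 2xAA, 4-byte HDR color + 4-byte depth per sample
fwd = framebuffer_bytes(1920, 1080, 2, 4 + 4)
print(f"1080p 2xAA forward: {fwd / MiB:.1f} MiB, fits: {fwd <= ESRAM_BYTES}")

# 1080p deferred: 12-byte G-buffer + 4-byte depth per pixel, AA done in post
deferred = framebuffer_bytes(1920, 1080, 1, 12 + 4)
print(f"1080p deferred:     {deferred / MiB:.1f} MiB, fits: {deferred <= ESRAM_BYTES}")

# 4K, no AA, same 8 bytes per pixel -- already roughly double the eSRAM
uhd = framebuffer_bytes(3840, 2160, 1, 4 + 4)
print(f"4K forward:         {uhd / MiB:.1f} MiB, fits: {uhd <= ESRAM_BYTES}")

[Both 1080p cases land at about 31.6 MiB, just under the 32MB of eSRAM, while the 4K case is roughly twice the capacity before any shading cost is even considered.]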


By Mitch101 on 8/3/2013 1:34:44 PM , Rating: 2
The only way I see either of them getting away with 4K is through sorcery, not actual rendering. It would have to be done something like the way the Faroudja chipsets upscaled content: comparing the line above and the line below and rendering the in-between line with what it believes the data would be. Really, post-processed pseudo-4K. It may be able to stream 4K, but not game in true 4K without sacrificing a lot.
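[As a rough illustration of the line-interpolation idea described above, here is a naive Python/NumPy sketch; real Faroudja-style processors are motion-adaptive and far smarter than a plain average of adjacent rows.]

import numpy as np

def line_double(frame):
    # Naive vertical line doubling: every inserted row is the average of the
    # original rows above and below it (the last row is simply repeated).
    # frame: (H, W, 3) uint8 array; returns a (2H, W, 3) array.
    h, w, c = frame.shape
    out = np.empty((2 * h, w, c), dtype=frame.dtype)
    out[0::2] = frame                           # keep the original lines
    below = np.roll(frame, -1, axis=0)          # row below each original row
    below[-1] = frame[-1]                       # nothing below the last line
    out[1::2] = ((frame.astype(np.uint16) + below) // 2).astype(frame.dtype)
    return out

# Example: double a 1080-line frame to 2160 lines (still only pseudo-4K)
frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
print(line_double(frame).shape)   # (2160, 1920, 3)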


RE: Not actually true, though, is it?
By retrospooty on 8/3/2013 1:49:33 PM , Rating: 2
Overall, as an average, no, but they will slow you down more than anything when they first come into view. That is the main reason you need such fast RAM for video cards, and that is where the current consoles are severely lacking. I am glad the PS4 has fast GDDR RAM. The embedded cache of the Xbox won't be enough to compensate for the huge bandwidth gap. It may not be too far behind, but it will be behind. We will have to wait and see by exactly how much.


RE: Not actually true, though, is it?
By Mint on 8/3/2013 6:46:15 PM , Rating: 4
quote:
Overall as an average no, but they will slow you down more than anything when they first come into view.
That's completely wrong. When things first come into view, that depends on the bandwidth/latency from the storage media to RAM.

Current consoles have only 512MB of memory. That's why they fall short in this area: they can't preload enough into memory. It has nothing to do with slow RAM. Both the PS4 and XB1 have 8GB of RAM and Blu-ray drives. They'll be equal at loading objects into view.
quote:
The embedded cache of the Xbox won't be enough to compensate for the huge bandwidth gap.
It definitely will be:
http://www.eurogamer.net/articles/digitalfoundry-x...

I've done my share of graphics programming and worked at ATI at one point as well. If you can offload all framebuffer traffic away from main memory, you save well over half of your bandwidth needs, and sometimes 90%+ in the most bandwidth-heavy rendering operation: alpha blending.
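[To put rough numbers on that claim, here is an illustrative Python sketch. The overdraw and per-fragment texture-traffic figures are assumptions chosen for the example, not measurements: every alpha-blended fragment has to read and then write the destination color, while its texture fetches are lossy-compressed and largely served from cache.]

def blend_pass_bandwidth(width, height, overdraw, fps,
                         color_bytes=4, tex_bytes_per_frag=1.0):
    frags = width * height * overdraw * fps
    framebuffer = frags * color_bytes * 2      # read-modify-write of the dst
    textures = frags * tex_bytes_per_frag      # assumed post-cache traffic
    return framebuffer, textures

fb, tex = blend_pass_bandwidth(1920, 1080, overdraw=4, fps=60)
print(f"framebuffer: {fb / 1e9:.1f} GB/s ({100 * fb / (fb + tex):.0f}% of the pass)")
print(f"textures:    {tex / 1e9:.1f} GB/s")

[Even with these modest assumptions the framebuffer read-modify-write traffic is around 90% of the pass, which is why keeping it in eDRAM/SRAM removes the bulk of the load on main memory.]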


RE: Not actually true, though, is it?
By retrospooty on 8/3/2013 7:08:58 PM , Rating: 2
Have you worked on FPS games with massive high-res textures, or smaller games with low-res textures designed to fit into low-end systems? Everything I have ever read on this says that video RAM bandwidth is a big problem for these large textures, and that is one of the main reasons it needs to get faster with every generation. I don't have personal experience with it, just what I have read over the years. As far as the systems being equal, I certainly hope they are. But regardless of what Carmack said (and by the way, he did mention that he hadn't had a chance to actually test anything at all), most devs and hardware engineers are quite sure they are not, including Microsoft, which just bumped up the speed to try and make up for being behind. Either way, we'll have to wait and see when real production systems hit the market.


RE: Not actually true, though, is it?
By Mint on 8/4/2013 7:43:26 PM , Rating: 3
quote:
everything I have ever read on this is that video RAM bandwidth is a big problem for these large textures
Show me where you have read this, because it's just plain wrong.

I haven't coded FPS engines myself, but at ATI I have done advanced performance analysis of AAA games, looking at hardware counters on the chip to determine where the bottlenecks are.

The vast majority of cases where texture bandwidth becomes significant are when you have thrashing (rare, and usually due to developer laziness) or when you're doing framebuffer post-processing, which is perfect for the eDRAM/SRAM in consoles.


By Strunf on 8/5/2013 10:08:20 AM , Rating: 1
I have read that, when running Skyrim with many mods, a certain card (I forget which one) that had part of its memory on a much lower-bandwidth bus than the rest saw a performance drop the moment Skyrim spilled into that slower memory.

Anyway, your theory could easily be tested by underclocking the memory of a graphics card: if what you say is true, then up to a certain point there should be no performance drop from underclocking the memory.

From what I've read, overclocking your memory will increase your performance, and that wouldn't be the case if the bottleneck were elsewhere.
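[One way to read the results of such an underclocking test, sketched in Python below with invented clock/fps pairs purely to show the interpretation: where frame rate scales nearly 1:1 with memory clock the workload is bandwidth-bound, and where the scaling flattens out the bottleneck has moved elsewhere.]

# Hypothetical memory-clock sweep: (MHz, average fps). Not real data.
samples = [(1200, 41.0), (1400, 47.5), (1600, 53.8), (1800, 55.1), (2000, 55.4)]

for (c0, f0), (c1, f1) in zip(samples, samples[1:]):
    clock_gain = (c1 - c0) / c0
    fps_gain = (f1 - f0) / f0
    scaling = fps_gain / clock_gain   # ~1.0 = bandwidth-bound, ~0 = not
    print(f"{c0} -> {c1} MHz: scaling {scaling:.2f}")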


By goatfajitas on 8/5/2013 12:07:45 PM , Rating: 1
That is because they aren't using very high-resolution textures. They are dumbing it down to the current console level, which means very low-res textures that don't slow anything down.


"This week I got an iPhone. This weekend I got four chargers so I can keep it charged everywhere I go and a land line so I can actually make phone calls." -- Facebook CEO Mark Zuckerberg














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki