
Microsoft explains why it picked ESRAM/DDR3 over GDDR5; virtualization benefits; and why not all games are 1080p

The Xbox One may be 1080p "capable", but that doesn't mean all Xbox One games will render at 1080p, according to a recent interview by Eurogamer.  And that's a good thing, according to lead Xbox One engineers Andrew Goossen (software) and Nick Baker (hardware), both of whom answered the interview questions about Microsoft Corp.'s (MSFT) new console, which launches on Nov. 22.

I. GPU Tradeoffs, 720p vs. 1080p Resolution Gaming Explained

Comments Mr. Baker:

We've chosen to let title developers make the trade-off of resolution vs. per-pixel quality in whatever way is most appropriate to their game content. A lower resolution generally means that there can be more quality per pixel. With a high-quality scaler and antialiasing and render resolutions such as 720p or '900p', some games look better with more GPU processing going to each pixel than to the number of pixels; others look better at 1080p with less GPU processing per pixel.

Microsoft also revealed that it's mandating at least 2x anti-aliasing in all its titles, a guideline that had not yet received significant media attention.  Additionally, Mr. Baker and Mr. Goossen detail in the interview how the Xbox One operating system, firmware, and hardware are designed to allow system apps (e.g. a messaging client) to run alongside games at minimum cost.
Xbox One summary
Microsoft says its hardware design is based on its intent for the Xbox One to be a media hub, capable of running apps optimally alongside games.

 

The GPU powering the Xbox One, Microsoft clarified in the interview, is a Sea Islands-family design from Advanced Micro Devices Inc. (AMD).  Mr. Goossen says that this is the same family used inside Sony Corp.'s (TYO:6758) PlayStation 4.  Microsoft's design uses 12 compute units (CUs) versus 18 CUs in the PS4 chip; hence Microsoft's design is somewhat analogous to Bonaire (the GPU in the Radeon HD 7790), while Sony's is similar to Pitcairn (the GPU in the Radeon HD 7850).

Xbox One
[Image Source: Heavy]

But that comparison is slightly misleading, according to Microsoft, as the Xbox One's GPU is clocked higher than the PS4's.  Comments Mr. Goossen:

We actually saw on the launch titles - we looked at a lot of titles in a lot of depth - we found that going to 14 CUs wasn't as effective as the 6.6 per cent clock upgrade that we did. Now everybody knows from the internet that going to 14 CUs should have given us almost 17 per cent more performance but in terms of actual measured games - what actually, ultimately counts - is that it was a better engineering decision to raise the clock. There are various bottlenecks you have in the pipeline that [can] cause you not to get the performance you want [if your design is out of balance].
Xbox One SoC GPU
Xbox One's GPU stack model is complex, leading it to be limited by different hardware factors in different scenarios.

In other words, Microsoft believes that by forgoing a roughly 17 percent increase in CU count in favor of a 6.6 percent clock bump, it will actually achieve better performance.  One important thing to keep in mind is that in the interview Microsoft reveals it reserves a 10 percent time slice of the GPU for system apps that can run side-by-side with a game.  Given that Sony does not appear to do this, this may explain why Sony considered more CUs at a lower clock the optimal balance.
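As a rough sanity check of the numbers in the quote, the two options can be compared on paper.  This sketch assumes the widely reported figures (an 800 MHz base clock raised 6.6 percent to 853 MHz, 64 shader ALUs per CU, and two floating-point operations per ALU per cycle via fused multiply-add); none of these specific constants appear in the interview itself.

```python
# Theoretical single-precision throughput of the two GPU configurations
# Microsoft weighed.  Constants (800/853 MHz, 64 ALUs per CU, 2 FLOPs
# per ALU per cycle) are widely reported specs, not interview figures.
ALUS_PER_CU = 64
FLOPS_PER_ALU_PER_CYCLE = 2  # one fused multiply-add = 2 FLOPs

def tflops(cus: int, mhz: int) -> float:
    """Peak theoretical throughput in TFLOPS."""
    return cus * ALUS_PER_CU * FLOPS_PER_ALU_PER_CYCLE * mhz * 1e6 / 1e12

shipped = tflops(12, 853)      # 12 CUs at the raised clock (what shipped)
alternative = tflops(14, 800)  # 14 CUs at the original clock

print(f"12 CU @ 853 MHz: {shipped:.2f} TFLOPS")      # ~1.31 TFLOPS
print(f"14 CU @ 800 MHz: {alternative:.2f} TFLOPS")  # ~1.43 TFLOPS
```

On paper the 14-CU option still wins, which is precisely Microsoft's point: the measured gains in real launch titles did not track the theoretical ALU count, so the clock bump was the better engineering trade.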

II. Xbox One -- CPU Bound (Typically), Optimized for Media

Mr. Goossen explains this in even more intricate detail, stating that the Xbox One is generally not ROP (render output unit) bound, but rather limited by a variety of factors.  He states:

The goal of a 'balanced' system is by definition not to be consistently bottlenecked on any one area. In general with a balanced system there should rarely be a single bottleneck over the course of any given frame - parts of the frame can be fill-rate bound, others can be ALU bound, others can be fetch bound, others can be memory bound, others can be wave occupancy bound, others can be draw-setup bound, others can be state change bound, etc. To complicate matters further, the GPU bottlenecks can change within the course of a single draw call!

...

If we had designed for 2D UI scenarios instead of 3D game scenarios, we might have changed this design balance. In 2D UI there is typically no Z-buffer and so the bandwidth requirements to achieve peak fill-rate are often less.
Xbox One SoC
The Xbox One SoC encompasses AMD Jaguar CPU cores, a Sea Islands GPU, ESRAM, and custom processor silicon.

Another interesting point raised by the Microsoft engineers in the Q&A is that the GPU resources are shared between the system apps and the game app via virtualization.  Mr. Goossen comments, "I think this is actually the first big consumer application of a GPU that's running virtualised."
Xbox One CPU
The Xbox One runs on 64-bit Jaguar CPU cores.

The approach allows the system to receive regular updates (following the traditional Microsoft patch, service pack model), while not breaking retail game titles or requiring an immediate update to game titles.

The pair also reveals fresh details about the 15 specialist processors -- digital signal processors (DSPs) and standard processors -- within the system-on-a-chip, commenting:

On the SoC, there are many parallel engines - some of those are more like CPU cores or DSP cores. How we count to 15: [we have] eight inside the audio block, four move engines, one video encode, one video decode and one video compositor/resizer.

The audio block was completely unique. That was designed by us in-house. It's based on four Tensilica DSP cores and several programmable processing engines. We break it up as one core running control, two cores running a lot of vector code for speech and one for general purpose DSP.
Xbox One ASP
The Xbox One uses multiple specialist sub-processors within the SoC.


III. 32 MB ESRAM + 8 GB DDR3 (Xbox One) Versus Pure 8 GB GDDR5 (PS4)

One final issue tackled was the use of 8 GB of DDR3, supplemented by 32 MB of ESRAM (embedded static RAM), versus 8 GB of GDDR5 in the PS4.  Microsoft says that the decision to pick ESRAM over eDRAM (such as the embedded memory onboard Intel Corp.'s (INTC) Haswell Core series system-on-a-chip designs) was purely based on what was on hand.

Xbox One SoC
The Xbox One's primary memory is 8 GB DDR3 -- slower than the 8 GB GDDR5 found in the PS4 -- but Microsoft has a trick up its sleeve (ESRAM).

Microsoft says its ESRAM/DDR3 solution offers nearly the same memory performance as the more speculative GDDR5 solution that Sony went with, while being a more natural evolution from the Xbox 360's eDRAM/GDDR3 memory mix.
Xbox One SoC
The SoC design is relatively complex versus Sony's design, which is thought to feature fewer special-purpose processors and no embedded memory.  The Xbox One can address 32 MB of onboard embedded memory, which allows it to achieve almost the bandwidth of GDDR5 with DDR3.

Mr. Baker states:
 
First of all, there's been some question about whether we can use ESRAM and main RAM at the same time for GPU and to point out that really you can think of the ESRAM and the DDR3 as making up eight total memory controllers, so there are four external memory controllers (which are 64-bit) which go to the DDR3 and then there are four internal memory controllers that are 256-bit that go to the ESRAM. These are all connected via a crossbar and so in fact it will be true that you can go directly, simultaneously to DRAM and ESRAM.

Over that interface, each lane to ESRAM is 256-bit, making up a total of 1024 bits and that's in each direction. 1024 bits for write will give you a max of 109GB/s and then there's separate read paths again running at peak would give you 109GB/s. What is the equivalent bandwidth of the ESRAM if you were doing the same kind of accounting that you do for external memory... With DDR3 you pretty much take the number of bits on the interface, multiply by the speed and that's how you get 68GB/s. That equivalent on ESRAM would be 218GB/s. However, just like main memory, it's rare to be able to achieve that over long periods of time so typically an external memory interface you run at 70-80 per cent efficiency.

In other words, each direction (read or write) is capped at 109 GB/s, but by mixing reads and writes, Microsoft says real-world throughput of 130-140 GB/s can be achieved.  This isn't significantly worse than the 176 GB/s theoretical performance of the PS4 over its 256-bit bus.
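Mr. Baker's accounting can be reproduced with some quick arithmetic.  This sketch assumes the ESRAM runs at the GPU clock of 853 MHz and that the DDR3 is clocked at 2133 MT/s; the bus widths come straight from the quote, but the exact clock figures are widely reported specs rather than numbers stated in this passage.

```python
# Back-of-envelope bandwidth accounting following Mr. Baker's method:
# bits on the interface, times transfer rate, divided by 8 for bytes.
ESRAM_GHZ = 0.853  # assumed: ESRAM runs at the 853 MHz GPU clock
DDR3_GTS = 2.133   # assumed: DDR3-2133, i.e. 2133 MT/s

# ESRAM: four 256-bit internal controllers = 1024 bits per direction.
esram_one_way = 1024 / 8 * ESRAM_GHZ   # separate read and write paths
esram_equivalent = esram_one_way * 2   # counting both directions

# DDR3: four 64-bit external controllers = a 256-bit bus.
ddr3_peak = 256 / 8 * DDR3_GTS

print(f"ESRAM per direction: {esram_one_way:.0f} GB/s")    # ~109 GB/s
print(f"ESRAM 'equivalent':  {esram_equivalent:.0f} GB/s") # ~218 GB/s
print(f"DDR3 peak:           {ddr3_peak:.0f} GB/s")        # ~68 GB/s
```

The 130-140 GB/s real-world figure Microsoft cites then falls out of the same 70-80 percent efficiency discount Mr. Baker applies to external memory interfaces, taken against the bidirectional peak.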

Microsoft Halo
Microsoft is confident its hardware approach will deliver good value to gamers and developers alike.

Overall, Microsoft's details clarify how two consoles with slightly different hardware designs -- the PS4 and Xbox One -- can still be roughly "neck and neck", as famed developer John Carmack claimed at the recent QuakeCon event.  It sounds like Microsoft took on a more challenging virtualization bid, but might profit from greater flexibility and utility for its console, while Sony picked more of a speculative hardware target (basing the PS4 on 4 Gb (gigabit) GDDR5 modules), which may pay off in terms of higher bandwidth and lower hardware costs.

Source: Eurogamer

















