
Part of the Xbox One's memory footprint gets eaten up by the OS and UI, leaving only 5 GB for games, Crytek points out

Crytek has focused much of its recent effort on cloud-based streaming PC gaming services.  But the top gaming engine developer remains a force in the console market as well.  It aired its thoughts in a recent interview with GamingBolt, notably its complaint that this generation of consoles is "lacking" when it comes to memory.
 
I. Too Little Memory?
 
Crytek’s US Engine Business Development Manager Sean Tracy comments on the current generation of consoles:

I would have to agree with the viewpoint that 8 gigs can easily be filled up, but also keep in mind that developers don’t necessarily even have access to all 8 gigs of it. For example the Xbox One retains some of the RAM for OS purposes. Since technology, as Ray Kurzweil states, progresses exponentially, we will soon find that the computational requirements of games will quickly hit the ceiling of a few gigs of ram. We already had to manage quite intensely our memory usage throughout Ryse and this will be one of the limiting factors surely in this generation.

As hardware gets stronger the complexity of scenes can be increased and the dynamism within them. However, with that said it’s not the raw power alone that will allow for photo-realistic graphics but technology that intelligently scales and utilizes all that the hardware has to offer.

Crysis 3

II. The Hardware

The Xbox One has 8 Gigabytes (GB) of DDR3 -- specifically, sixteen 4-Gigabit (Gb) modules (H5TQ4G63AFR-TEC) from SK Hynix Inc. (KRX:000660), according to a teardown by iFixit.  That memory must be shared between graphics data (textures, etc.), program data from running applications, and operating system data/core apps.  The lattermost category -- operating system data/core apps -- consumes 3 GB, leaving only 5 GB for games and other apps.
Xbox One board
The Xbox One motherboard packs 8 GB of DDR3 (orange). [Image Source: iFixit]

The Xbox One does have 32 Megabytes (MB) of embedded static RAM (ESRAM) on its graphics processing unit (GPU) die, which offers much faster access for bandwidth-hungry operations such as alpha transparency blending.


Sony PS4 boards
The Sony PS4 packs 256 MB of system memory (green, right), and 8 GB of GDDR5 graphics memory (orange, both). [Image Source: iFixit]
 
By contrast, the Sony Corp. (TYO:6758) PlayStation 4 (which happens to be outselling the Xbox One) has a 2 Gbit (256 MB) dedicated chip for system tasks, including the OS and core apps, according to an iFixit teardown.  It also packs sixteen 4-Gb GDDR5 modules (K4G41325FC-HC03) from Samsung Electronics Co., Ltd. (KRX:005930) (KRX:005935), for a grand total of 8 GB of GDDR5.
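As a sanity check on the figures above, here is a minimal arithmetic sketch; the 3 GB OS reserve and the resulting 5 GB figure come from the reporting above, not from official SDK documentation:

```cpp
#include <cstdio>

int main() {
    // Xbox One: sixteen 4-gigabit DDR3 modules on the motherboard.
    const int xbox_total_gb = 16 * 4 / 8;          // 64 Gbit -> 8 GB
    const int xbox_os_reserve_gb = 3;              // reported OS/core-app reserve
    std::printf("Xbox One: %d GB total, %d GB left for games\n",
                xbox_total_gb, xbox_total_gb - xbox_os_reserve_gb);

    // PS4: one 2-gigabit chip for background tasks plus sixteen 4-gigabit GDDR5 modules.
    const int ps4_system_mb = 2 * 1024 / 8;        // 2 Gbit -> 256 MB
    const int ps4_gddr5_gb  = 16 * 4 / 8;          // 8 GB of GDDR5
    std::printf("PS4: %d MB dedicated system chip, %d GB GDDR5\n",
                ps4_system_mb, ps4_gddr5_gb);
    return 0;
}
```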

III. Other Opinions
 
How much this all impacts game makers depends on the developer's expectations of realism.
 
Havok, a Dublin, Ireland-based subsidiary of Intel Corp. (INTC) that makes physics middleware for both consoles, had previously put things more diplomatically.  Product manager Andrew Bowell commented in a previous GamingBolt interview:

Even with the large amounts of RAM available, the developers will still find ways to use even the last byte. To that end, it is always a requirement for middleware to have the smallest possible memory footprint. This is also becoming even more of a requirement as console developers look at ways to bring their title to mobile platforms.



Others still didn't seem to consider it a big deal at all.  Geomerics, maker of the Enlighten lighting middleware, also spoke to GamingBolt about the memory question.  COO and cofounder Chris Doran comments:

Both of these are more than sufficient. To understand why, you have to remember that on the current generation we had around 512MB of memory to play with. This meant that developers would routinely ask us to fit our global illumination calculations into 10-20MB.

On the next-generation consoles, developers are happy to give us 10 times more space to play with – in some cases even more! And with 8 CPU cores and powerful GPUs, there is plenty of compute resource for us to make use of. It is this massive increase from current generation to next generation that really excites us – that factor of 10 increase is going to be far more significant than the additional memory on the PS4.

I’m sure as the cycle evolves, developers will look at ways to squeeze more out of the PS4, but we have a long way to go with exploiting the opportunities presented by the massive jump in resource over PS3 and 360.

(Enlighten did the lighting effects for Electronic Arts, Inc.'s (EA) Battlefield 4.)

The message seems to be that the Xbox One's unified memory scheme and heavy OS memory usage do create some limitations, but unless you're making Crysis 4, those limitations won't necessarily be dealbreakers.

Source: GamingBolt



Comments

huh?
By GulWestfale on 5/19/2014 12:50:50 PM , Rating: 2
so lemme get this straight... sony needs 256MB of RAM for its OS and system apps, and includes an extra RAM chip just for that.
meanwhile, microsoft's OS and apps require 3 GIGABYTES of RAM, and they just take that from main memory instead of including more...

wow. seems like yet another reason to get a PS4 rather than an xbox. the difference in game quality may not be apparent right now, but two or three years down the line, when developers are eking out every last bit of performance from these machines, PS4 games will look better and offer more features.




RE: huh?
By chromal on 5/19/2014 12:59:28 PM , Rating: 2
Or a good reason to game with a PC instead of an xbox in general. Console systems are always a snapshot in time of what state-of-the-art was, whereas you can always build a current state-of-the-art PC.


RE: huh?
By Reclaimer77 on 5/19/2014 1:26:44 PM , Rating: 2
We know that. Everyone knows that. Stop pointing that out EVERY TIME the consoles are discussed. It has nothing to do with the subject.

Look I'm part of the "PC master race" too, I get it. But we just don't need it brought up constantly where it doesn't belong.

A lot of people don't have the technical knowledge to build a gaming PC. And NOBODY sells a gaming PC worth a damn at anything close to $400.


RE: huh?
By Hakuryu on 5/19/14, Rating: -1
RE: huh?
By ianweck on 5/19/2014 2:01:28 PM , Rating: 5
quote:
consoles will have motherboards where you can add RAM

Why would they do that? That pretty much defeats the purpose of having one stable platform to develop for.


RE: huh?
By inighthawki on 5/19/2014 2:03:05 PM , Rating: 5
Exactly. The consistency of consoles is the one and only selling point to me. What's the point if each console game now has a "minimum requirement"?


RE: huh?
By GulWestfale on 5/19/2014 3:53:40 PM , Rating: 2
yes, PCs are superior for certain games; i play almost exclusively on the PC myself. but given the current choice between PS4 and xbox one, it is becoming more and more apparent that the PS4 was intelligently designed as a console, whereas the xbox appears to be little more than a standard PC with a fancy interface, running pretty standard (ie, bloated) windows.


RE: huh?
By inighthawki on 5/19/2014 4:11:29 PM , Rating: 2
quote:
whereas the xbox appears to be little more than a standard PC with a fancy interface, running pretty standard (ie, bloated) windows

I believe what you actually mean, instead of purposely trying to be insulting, is that the xbox one is trying to be a more general purpose device, while the PS4 holds the advantage for gaming due to its graphics performance.


RE: huh?
By GulWestfale on 5/19/2014 5:17:37 PM , Rating: 4
well, these ARE gaming machines... purpose-built gaming machines.


RE: huh?
By inighthawki on 5/19/2014 6:28:42 PM , Rating: 3
That's not the impression I really got with XBO. It felt more like they were aiming for an all-in-one general purpose set-top box that replaced all your other devices. Gaming is a high priority, but not as much as it was to PS4.


RE: huh?
By Mitch101 on 5/19/14, Rating: 0
RE: huh?
By retrospooty on 5/19/2014 6:43:52 PM , Rating: 5
LOL... I will believe it when I see it released and independently tested. "Twice the performance" sounds nice, but exactly what metric is "twice"? Right now the bottleneck by a HUGE margin is the VRAM bandwidth. Doubling several areas may affect actual performance little when pushing high res textures through a narrow DDR3 pipe... Or perhaps it means "Twice the performance when using the same dumbed down textures so IQ still sucks."

- Wait and see before you jump for joy.


RE: huh?
By inighthawki on 5/19/2014 8:19:04 PM , Rating: 4
I think this is just a case of the person in the article completely misinterpreting something from Microsoft and then other people quote it as a source.

DX12 is an upgrade to the software API. It's very possible it may achieve 2x performance in terms of graphics stack overhead. That means if the CPU work related to rendering in your game is 5ms, it will now be 2ms. That is not the same thing as a 2x perf improvement.

There will almost certainly be some changes that will improve overall performance. But nonetheless I have to agree: It might bring them closer to PS4, and maybe hit 1080p, but if you're expecting a 2x improvement, you're going to be sorely disappointed.

quote:
Doubling several areas may affect actual performance little when pushing high res textures through a narrow DDR3 pipe

To be fair, there are a few API-level optimizations that can be used to reduce memory bandwidth costs that aren't available in DX11.X that could easily be done in DX12 :)
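To make the distinction concrete, here is a minimal back-of-the-envelope model of the point being made; the millisecond figures other than the 5 ms example are assumptions chosen only for illustration:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical frame budget: CPU-side submission cost vs GPU execution cost.
    double cpu_submit_ms = 5.0;   // graphics-stack overhead on the CPU (example from the post)
    double cpu_other_ms  = 8.0;   // AI, physics, audio, game logic (assumed)
    double gpu_ms        = 14.0;  // time the GPU needs to render the frame (assumed)

    auto frame_ms = [&](double submit) {
        // CPU and GPU run in parallel; the frame is gated by the slower of the two.
        return std::max(submit + cpu_other_ms, gpu_ms);
    };

    std::printf("before: %.1f ms/frame\n", frame_ms(cpu_submit_ms));        // 14.0 ms (GPU-bound)
    std::printf("after 2x stack speedup: %.1f ms/frame\n", frame_ms(2.5));  // still 14.0 ms
    return 0;
}
```

With these assumed numbers the game is GPU-bound, so halving the graphics-stack overhead frees up CPU time but does not change the frame rate at all, which is exactly the distinction between "2x stack overhead reduction" and "2x performance."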


RE: huh?
By retrospooty on 5/19/2014 10:03:56 PM , Rating: 3
"think this is just a case of the person in the article completely misinterpreting something from Microsoft and then other people quote it as a source"

Agreed. That is a wildly huge performance claim. There is some heavy misinterpretation going on.

"To be fair, there are a few API-level optimizations that can be used to reduce memory bandwidth costs that aren't available in DX11.X that could easily be done in DX12"

This is always possible, but for the most part it has been done and done again for the past decade. There just isnt that much juice left to squeeze so to speak.


RE: huh?
By Insurgence on 5/21/2014 11:18:04 PM , Rating: 2
quote:
This is always possible, but for the most part it has been done and done again for the past decade. There just isnt that much juice left to squeeze so to speak.


Actually each generation of consoles resets this, as they are then trying to pull as much power out of the current gen consoles as possible, and the new hardware gives them new metrics and code to work with. There's a reason why you can see quality improvements across a franchise that spans from the initial console release to that console's end of life.

It was not until MS stepped into the game that they started really doing this on the hardware side rather than the software side. The game developers used to use it as a way of building hype for a new game. I remember this with the PSX Final Fantasy games. If technology ever got brought up, they would talk about the new things they did to try and eke out more performance.


RE: huh?
By Mitch101 on 5/20/2014 12:39:19 AM , Rating: 2
This will be huge. Listen to the comments from Intel and NVIDIA.

Start at 1:00 in the video in this article
http://bgr.com/2014/04/07/xbox-one-directx-12-upda...
Start at 2:45 for comments from AMD, Intel, Nvidia to back it up.

Is this marketing spin?
http://blogs.msdn.com/b/directx/archive/2014/03/20...
We (the product team) read the comments on twitter and game development/gamer forums and many of you have asked if this is real or if our marketing department suddenly received a budget infusion. Everything you are reading is coming directly from the team who has brought you almost 20 years of DirectX.

It’s our job to create great APIs and we have worked closely with our hardware and software partners to prove the significant performance wins of Direct3D 12. And these aren’t just micro-benchmarks that we hacked up ourselves – these numbers are for commercially released game engines or benchmarks, running on our alpha implementation. The screenshots below are from real Direct3D 12 app code running on a real Direct3D 12 runtime running on a real Direct3D 12 driver.


RE: huh?
By inighthawki on 5/20/2014 3:08:32 AM , Rating: 2
Yes this is just a misinterpretation of information. That, or Brad is an utter moron. DX12's big improvement is with CPU overhead, not GPU performance. There's no amount of software improvements that can make a GPU run faster.

With software improvements you can reduce the overhead costs and improve scheduling to get more work on the GPU at once, but you cannot make the GPU faster than it is. There are some optimizations it CAN do that could make the hardware slightly more efficient, but this will never be 2x.

Later in the video they talk specifically about the Xbox's 11.X API being single threaded. They are referring to the software API and CPU overhead. This is purely CPU performance improvements. This is what they are talking about. And it won't make that many games better. Any modern game engine is already going to ensure the GPU is scheduled with enough tasks to ensure it doesn't go idle. The only thing DX12 will improve here is the amount of CPU time per frame. The rendering work will still take approx the same amount of time to complete. You may just have some extra time now to do more physics or AI.

And yes, I've seen that blog. And you need to know what you're looking at. As I just mentioned, those are CPU numbers, not GPU numbers. That is the overhead inside of the DX runtime, user mode driver, kernel mode driver, and the directx kernel. You could literally reduce those all to zero, but if your game is GPU bound, your game won't run any faster. This is why they've shown games like Starswarm get "10x perf improvements!" It's because the game is EXTREMELY CPU bound. That is, the CPU completely bottlenecks the GPU (The GPU can process work way faster than the CPU can submit work).

I've seen some really dumb comments about the 2x as fast and the fact that it'll even make the xbox one run hotter. This is all BS. These people don't know what they're talking about.

Now I'm not trying to knock DX12. It's going to be pretty slick (I've had the opportunity to play with early access code), but don't let dumb people trick you into thinking it's something it's not.


RE: huh?
By Mitch101 on 5/20/2014 11:55:16 AM , Rating: 2
There are a bunch of sites talking about this, but waiting till E3 in June should put more light onto this. I'm sure it will become a huge talking point, and I would expect several demos of what's to come. In less than a month everyone should know more.

2x is not ideally important to achieve. The XBox One is close to doing 1080p in a lot of games, and some games already do it, so a 20-30% boost will certainly solve the 1080p issue some users find fault with. I did read that it will certainly take all games that render at 720p up to 1080p, putting an end to the argument about 1080p gaming. Not all PS4 games render in 1080p either. Also most people probably sit far enough back from their TV not to notice anyhow, but specs are a big thing to some vs overall game play.

Really we're still in the beginning stages of both consoles, and developers will get better and game engines more optimized for both consoles over time, as will the development tools that surround both consoles.

Very true, scheduling is improved significantly and that will provide a huge boost, as you're no longer waiting for the first core to render the scene, but consider the bare metal access that's going to be added. There is much to be gained from essentially going from a single core holding performance up to equally scheduled quad core performance. That alone may give 15-30% to performance.

When they are using bare metal access there is a massive performance jump, because it doesn't have to be rehashed by the CPU to pass along to the GPU again for processing. You're bypassing a major bottleneck and solving a memory bandwidth limitation.

It's like comparing Visual Basic to Assembly language. VB has to go through an interpreter and there is a large performance hit when doing so, however if the code were written in assembly, which is like bare metal for the CPU, then the performance increases are substantial. Being able to access that bare metal layer will yield some major improvements.

Will the XBox One catch up to the PS4 in raw performance? No, but don't overlook the power of Microsoft's DirectX over OpenGL. Not everyone is rushing to develop on OpenGL over DirectX, which should tell people a lot about how good DirectX really is as a development platform. Microsoft doesn't have to double the performance (it would be amazing if they do), but a 20-30% jump from the API would certainly put the 1080p render arguments to rest.


RE: huh?
By inighthawki on 5/21/2014 2:14:06 AM , Rating: 2
Don't take this personally, but I think you're talking out of your *ss. I'm not really convinced based on your reply you really understand the graphics pipeline.

CPU and GPU work is orthogonal. GPU work is done in parallel to command buffer generation on the CPU. There is really only one way to starve the GPU of work and cause a perf hit - You have to generate enough commands on the CPU at a single time that it takes longer to generate a command buffer than it does for the GPU to finish the first one. This is pretty rare and occurs typically only on slow CPUs with much much faster GPUs, or when the game/app tries to do too much work.

Where do you think AMD's magical 10x perf improvement by using Mantle comes from? Hint: They use a game that's purposely doing a ridiculous number of draw calls to completely bottleneck the GPU by causing the rendering API to take forever to generate commands. The CPU might take 50ms to generate the command buffer for a frame, and the GPU might finish it in 15ms.
Mantle works by reducing the runtime overhead allowing a lot more draw calls per frame. The CPU can submit 10x the number of draw calls per frame, and the GPU is no longer starved for work. If that improvement allows you to do command buffer generation in 14ms, then the GPU is never starved, and you see a huge jump in performance. But it's not because the hardware is more efficient, or programmed any better.
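Plugging the 50 ms / 15 ms / 14 ms figures above into the same sort of toy model shows why a deliberately CPU-bound demo sees a huge jump while a GPU-bound game does not; the last case uses assumed numbers purely for contrast:

```cpp
#include <algorithm>
#include <cstdio>

// Toy model: the CPU records command buffers while the GPU executes the previous
// frame, so steady-state frame time is governed by the slower of the two.
static double fps(double cpu_record_ms, double gpu_execute_ms) {
    return 1000.0 / std::max(cpu_record_ms, gpu_execute_ms);
}

int main() {
    // Numbers from the post: 50 ms to build the command buffer, 15 ms for the GPU to run it.
    std::printf("CPU-bound demo, fat API:  %.0f fps\n", fps(50.0, 15.0)); // ~20 fps
    // A thin API cuts recording to 14 ms; the GPU (15 ms) becomes the limit.
    std::printf("CPU-bound demo, thin API: %.0f fps\n", fps(14.0, 15.0)); // ~66 fps
    // A game that was already GPU-bound sees no change from the same CPU win (assumed numbers).
    std::printf("GPU-bound game, thin API: %.0f fps\n", fps(5.0, 33.0));  // ~30 fps either way
    return 0;
}
```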


RE: huh?
By Mitch101 on 5/21/2014 8:25:52 AM , Rating: 2
Microsoft is saying the workload in DirectX 11 is not very parallel on the Xbox One currently, and the updated DX12 is making it more parallel through a better scheduler. The pictures alone point that out: you can see the stack on core 1 is overloaded while the other 3 cores are nearly empty, with almost one-off jobs. If you're waiting on core 1 you might as well be a single core processor. DX12 makes it much more parallel, freeing up the CPU by utilizing more of the cores, which will increase heat because the CPU/GPU is running much harder, but the Xbox One was designed for this.

E3 is coming; let Microsoft demonstrate.

If the Xbox One was forced to do higher resolutions than 1080p then there might be an issue, but if all anyone is making a deal about is 1080p gaming, DirectX 12 will solve that issue.

Microsoft goes into the rendering and ms generation, but I have to run to work. It's in the links I provided above.


RE: huh?
By inighthawki on 5/21/2014 11:16:23 AM , Rating: 2
They are talking about CPU parallelism. Command buffers can really only be recorded on a single CPU thread. They have deferred contexts, but they must be submitted to the immediate context on the main thread for completion, and the performance of them kinda sucks (In some cases it was worse than single threaded perf :)). DX12 is inherently multithreaded and allows you to record commands on multiple threads.

But again I cannot keep stressing this enough: This is *CPU performance*. Whether your game runs on a single CPU core or 100 CPU cores, it does not change the fact that if the GPU is not starved, it will not see a performance improvement by improving the CPU overhead.

Any half decent game engine will also not make those cores idle. Even if you're inherently single threaded, they can still use those cores for other work (AI, physics, audio, etc). Things that are orthogonal to the rendering engine.
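A minimal, library-free sketch of the recording pattern being described, using std::thread and plain vectors as stand-ins for command allocators and command lists; this is a conceptual analogy, not the actual Direct3D 12 API:

```cpp
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

// Stand-in for a recorded GPU command.
struct Command { std::string name; };
using CommandList = std::vector<Command>;

// Each worker records into its own list -- no shared immediate context,
// which is the key difference from the DX11 single-threaded model.
static void record_chunk(int first, int last, CommandList& out) {
    for (int draw = first; draw < last; ++draw)
        out.push_back({"draw " + std::to_string(draw)});
}

int main() {
    const int num_threads = 4, draws = 1000;
    std::vector<CommandList> lists(num_threads);
    std::vector<std::thread> workers;

    for (int t = 0; t < num_threads; ++t)
        workers.emplace_back(record_chunk, t * draws / num_threads,
                             (t + 1) * draws / num_threads, std::ref(lists[t]));
    for (auto& w : workers) w.join();

    // Single submission point, analogous to executing all command lists on the queue.
    std::size_t total = 0;
    for (const auto& list : lists) total += list.size();
    std::printf("submitted %zu commands from %d lists\n", total, num_threads);
    return 0;
}
```

Again, the win here is on the CPU side: recording is spread over cores instead of serialized, but the GPU still has to execute the same amount of work once it is submitted.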


RE: huh?
By Mitch101 on 5/21/2014 12:11:56 PM , Rating: 2
I think you captured it right there with the addition that they realize the GPU is better suited for certain processes over the CPU (Now that they can access the bare metal processes) so certain functionality will no longer be handled by the CPU but the GPU. (I believe collision detection is one of them, thats a huge thing to move to the PU) Some of those will also be done at bare metal levels - which is like removing an interpreter because you no longer have to calculate it the hardware is determining it. Much more will be placed across the CPU cores and GPU cores as a result heat will increase but the parallelism will also increase significantly because no single core should be holding things back. This is software finally catching up with the hardware being used. If you go into any machine today and look at CPU utilization the cores are almost never equal in load. There is generally one core over and holding everything else back.

In a way this is taking a single CPU and making an 8 core cpu 4 of which will handle items which makes sense to compute in a CPU and 4 where it makes sense to use a GPU. Its much bigger than just sharing CPU cores more effectively.

It should feel like the first 3DFX card arrived when we moved from doing software rendered graphics over to the GPU rendering the scene.

The 2X quote comes from Microsoft had it come from AMD/NVIDIA or even Intel I would question it more as Marketing but there seems to be a lot of sources saying this is a huge step forward and a major performance increase. Where they seem to be utilizing it in a lot of the demo's like those which came from AMD was putting significantly more objects moving around in scenes and that was just bare metal enhancements which on low end chips like this is about a 30% increase. Pair that with the CPU scheduling and the offloading of functionality moving to the GPU.

This could very well be the first real increases everyone was hoping for with embedded GPU's not one off processes being handed to the GPU from specific software.


RE: huh?
By inighthawki on 5/21/2014 1:07:12 PM , Rating: 2
OK, I'll go through this step by step:

quote:
I think you captured it right there with the addition that they realize the GPU is better suited for certain processes over the CPU

That was not my implication at all, and that's also not the goal of DX12.
quote:
Now that they can access the bare metal processes

You keep using the term bare metal, but do you know what this means? I don't think you do.
quote:
so certain functionality will no longer be handled by the CPU but the GPU. (I believe collision detection is one of them, thats a huge thing to move to the PU)

Moving work off the CPU to a GPU that is already saturated won't improve performance, and in fact may make it worse.
quote:
Some of those will also be done at bare metal levels

Again, do you know what that means?
quote:
which is like removing an interpreter because you no longer have to calculate it the hardware is determining it

No, this is nothing at all like an interpreter. I don't know where you got that analogy but it is wrong. Wrong wrong wrong.
quote:
Much more will be placed across the CPU cores and GPU cores as a result heat will increase but the parallelism will also increase significantly because no single core should be holding things back.

Running single threaded is not holding your other cores back. They will do other work at the same time. Most modern games engines have task schedulers that keep all CPU cores saturated.
quote:
If you go into any machine today and look at CPU utilization the cores are almost never equal in load. There is generally one core over and holding everything else back.

Maybe for basic software, but modern game engines typically make full use of all cores.
quote:
In a way this is taking a single CPU and making an 8 core cpu 4 of which will handle items which makes sense to compute in a CPU and 4 where it makes sense to use a GPU. Its much bigger than just sharing CPU cores more effectively.

I don't even comprehend what this means.
quote:
The 2X quote comes from Microsoft had it come from AMD/NVIDIA or even Intel I would question it more as Marketing but there seems to be a lot of sources saying this is a huge step forward and a major performance increase

Microsoft's benchmarks show a 2x *CPU* performance improvement. This IS NOT the same thing as a 2x performance improvement. What the graphs on their blog show is that splitting the rendering work across cores can allow them to generate the command buffers 2x as fast. The work they show in those graphs does NOT contain the cost of running the work on the GPU. This is a fundamental difference between latency of submission, and throughput of work.
quote:
Where they seem to be utilizing it in a lot of the demo's like those which came from AMD was putting significantly more objects moving around in scenes and that was just bare metal enhancements which on low end chips like this is about a 30% increase

Again, this is because of a reduction in CPU overhead on low end CPUs. The improvement in CPU performance no longer makes the CPU the bottleneck of the system, and doesn't starve the GPU of work.
quote:
Pair that with the CPU scheduling and the offloading of functionality moving to the GPU.

I'd like you to define what you believe the "CPU scheduling enhancements" are just so I have a clear picture. Are we talking about multithreaded command buffer recording and submission, or the actual GPU scheduler present in the directx kernel? The two are completely different.
quote:
This could very well be the first real increases everyone was hoping for with embedded GPU's not one off processes being handed to the GPU from specific software.

Can you clarify what you mean here?


RE: huh?
By Insurgence on 5/21/2014 11:20:38 PM , Rating: 2
quote:
Yes this is just a misinterpretation of information. That, or Brad is an utter moron. DX12's big improvement is with CPU overhead, not GPU performance. There's no amount of software improvements that can make a GPU run faster.


My computer has software that makes my GPU run faster. It's kind of nice to not have to get into the hardware or firmware side to do that anymore.


RE: huh?
By inighthawki on 5/23/2014 12:22:50 PM , Rating: 2
That's not what I mean. Obviously there is software to *configure* the GPU to overclock it and whatnot. And we can get technical about optimizations that actually can improve the efficiency of the GPU (reducing bandwidth costs, changing certain specifications, etc), but at the end of the day, improving the software overhead in DX12 will not improve the GPU's performance. That was what I was trying to get across.


RE: huh?
By retrospooty on 5/20/2014 8:12:14 AM , Rating: 2
You really don't understand the info at the links you posted do you?


RE: huh?
By Mitch101 on 5/20/2014 11:57:28 AM , Rating: 2
Are you a game developer? I think not.

How about we let Microsoft explain and demo at E3.


RE: huh?
By retrospooty on 5/20/2014 1:30:01 PM , Rating: 2
How about we wait and see about your 2x performance increase. LOL.


RE: huh?
By Mitch101 on 5/20/2014 3:28:32 PM , Rating: 2
Based on comments from Intel, Nvidia, and AMD, they already say it's well beyond a 2x performance increase. Does that mean 2x framerate? NO, that's not what they are saying.

If they are able to properly schedule, and not have the current DX11 bottleneck a single core while leaving the 3 other cores empty, and leverage bare metal, then 2x performance is certainly possible.


RE: huh?
By retrospooty on 5/20/2014 4:09:51 PM , Rating: 2
I think the point inighthawki is making is that 2x increase in CPU overhead or any other specific area you focus on doesn't mean an actual 2x performance increase. There are too many aspects to look at and people are reading way too much into this...

What I am saying is that no matter what happens, the DDR3 memory will still be a bottleneck. Because of that, it will still use low res textures and still look like crap.


RE: huh?
By inighthawki on 5/21/2014 2:26:54 AM , Rating: 2
Correct. The CPU and GPU operate in parallel. The CPU generates command buffers which are submitted as graphics workloads to the GPU. The GPU executes this in parallel to the CPU as it generates the next workload. If the GPU is able to process the workload faster than the CPU can generate them, then the GPU is starved - it finishes and sits idle waiting for more work. This is what low level APIs like DX12, Mantle, and new OpenGL improvements aim to improve. But if you already submit enough work to the GPU to keep it from going idle, you don't see nearly as huge of an improvement.

Effectively you can improve the CPU performance by 10000% and make your rendering cost effectively nothing at all, but if the GPU is saturated, your framerate isn't going to improve.

Your other subsystems, such as AI, physics, etc, will see a nice improvement in available CPU time, but that's another story and shouldn't impact the GPU's performance. The games that will primarily benefit from these perf improvements are games whose CPU budget doesn't always fit in 16ms.

DX12 can absolutely also bring actual perf improvements to the GPU workloads, but none of what Mitch has pointed out are related to such improvements. I do expect to see an overall improvement in performance, but 1) Not for the reasons he described, and 2) Not nearly as much as I think he suspects.

FYI: This is all coming from someone super psyched about DX12. I really really want to write some apps using it. I'm just trying to be realistic about what the performance improvements are actually going to be, and why.


RE: huh?
By Mitch101 on 5/21/2014 12:19:18 PM , Rating: 2
The GPU is saturated if you aren't using the bare metal command set and Microsoft says this in their Forza demo which partially utilizes bare metal DX12 to achieve 1080p 60fps easily.

DirectX12 is two improvements not just CPU scheduling but graphics API enhancements.


RE: huh?
By inighthawki on 5/21/2014 12:20:58 PM , Rating: 2
You're throwing out a bunch of buzz words but you aren't saying anything.


RE: huh?
By Mitch101 on 5/21/2014 12:52:59 PM , Rating: 2
All I can say is wait for the demos, most likely at E3, then.

It's not for me to prove; it's for Microsoft to prove, they are the ones making the claim.


RE: huh?
By inighthawki on 5/21/2014 1:09:08 PM , Rating: 2
But I know what Microsoft's claims are, I'm not sure you do. I'm not asking you to prove anything to me. In fact my last 10 posts or so have been me trying to explain to you how graphics works so you can understand what that 2x number on their blog means.


RE: huh?
By Reclaimer77 on 5/19/2014 8:44:04 PM , Rating: 1
quote:
DirectX12 will solve the resolution gap between XBox One and PS4.


/spit take's Coke

Ahahahha right. DirectX 12 is coded with magical fairy dust!!!

LMAO!!!


RE: huh?
By TheJian on 5/22/2014 1:46:20 AM , Rating: 2
Hogwash. NV already showed they caught AMD in Star Swarm even with DX11 (and beat them actually). As the OP said, maybe 2x faster in a single metric but in the end nowhere near 2x in a game.

I won't hold my breath waiting for FPS to double on xbox1.


RE: huh?
By Insurgence on 5/21/2014 11:10:32 PM , Rating: 2
Actually, if anything, this version of windows is less bloated than the normal version of windows. Still not optimal for consoles, they really need to just build an OS from scratch vs just taking out the things they never needed in windows to begin with.


RE: huh?
By Camikazi on 5/20/2014 5:55:18 PM , Rating: 3
Nintendo did it long ago with the N64, the expansion pack was adding more RAM so you could play games at higher resolution or other options. It really wasn't that bad and was rather simple to do since you just open a cover and insert the pack and done.


RE: huh?
By MrBlastman on 5/19/2014 4:37:07 PM , Rating: 1
I think everyone is missing the most important nugget from this story--the one where Crytek is complaining about RAM available on the console.

The truth is--CryEngine stinks! It is cheaper than alternatives out there so you find developers building product on it--but the hoops they have to jump through from what I gather are large. Oh, sure, it gave us sparkly nice graphics a few years ago but it is full of limitations that developers spend ridiculous amounts of time on trying to work around.

Take Mechwarrior Online--or even Mechwarrior Living Legends... tons of problems they have had to work through. Living Legends managed to deal with it whereas MWO is floundering, barely able to tread water with all the technical issues they keep creating or trying to fix.

Then there is Star Citizen--with all the untold millions of dollars they have, their progress is painstakingly slow. Look across the street at Elite: Dangerous, NOT using the Cryengine... it is looking very impressive as the weeks go by. The rate of progression is significantly faster which is astonishing given their dramatically smaller budget.

CryEngine is terrible. The guys who make the engine--complaining about available RAM, is indication enough that the engine has quite a bit of bunk under the hood.


RE: huh?
By ats2 on 5/19/2014 5:09:41 PM , Rating: 1
Um, MWO problems are pretty much MWO's problems. It has little to nothing to do with cryengine.

As far as E:D vs SC, SC looks 10x better, and its delays are not related to the graphics engine, but rather the integration of their server back end and their rather impressive physics simulation.

TL;DR you haven't a clue of what you are talking about.


RE: huh?
By MrBlastman on 5/19/14, Rating: 0
RE: huh?
By FITCamaro on 5/20/2014 7:52:04 AM , Rating: 2
Everything I've seen from Star Citizen is that it's going slow because of the crazy amounts of detail they're putting into the game. They're trying to make it work like the real thing might.


RE: huh?
By lothar98 on 5/19/2014 10:08:14 PM , Rating: 2
N64 already did this years ago for very minor gains in games like Starfox64. It was a "neat" thing to have but brought very little value to the table and probably had a very low take rate.


RE: huh?
By retrospooty on 5/19/2014 2:46:36 PM , Rating: 3
"One day when hell freezes over, consoles will have motherboards where you can add RAM. The more complaining about limitations, especially late in the console cycle, and comparisons to PC hardware might force console makers to allow upgrades."

That won't happen. The point of consoles and their strength is to have a solid stable platform for developers to make games on. They address totally different markets. Most people that are looking for a console aren't going to build a PC, nor buy one for that purpose. They are in the market for a console and they are going to buy a console. Comparing consoles to PCs is pointless. It's like comparing Oranges to Bananas.


RE: huh?
By SeaLurk on 5/19/2014 4:05:49 PM , Rating: 3
It already happened. Nintendo 64 had expandable RAM (from 4 MB to 8 MB).


RE: huh?
By retrospooty on 5/19/2014 4:14:23 PM , Rating: 3
And how did that work out for them?

Today, it wont... Having the same console with a different spec is the opposite of what they are made for. It's a single consistent platform with an exact spec to develop to.


RE: huh?
By hpglow on 5/19/2014 4:47:29 PM , Rating: 2
As did the saturn and tg-16. Very few games utilized the n64 memory cart. The Saturn's was only released in japan but it was worth picking up for Xmen vs Street fighter. The TG-16 had two but only one was released in NA but if you had the CD rom it was worth having as well.

What do these consoles all have in common? They were all bottom of the heap in their gen. Consumers do not like console add-ons. These are often tactics attempted by the underdog to play catch up. None have ever worked.

Ram is so cheap why not put in 16GB? I guess console players must either sit too far away to notice or just love low detail textures.


RE: huh?
By ritualm on 5/19/2014 5:06:48 PM , Rating: 2
quote:
Ram is so cheap why not put in 16GB?

Because DRAM spot prices have risen since the heydays of $50 per 16GB or something ridiculous, and because most studios design games for consoles first. Folks like Crytek are few and far between.


RE: huh?
By retrospooty on 5/19/2014 5:12:21 PM , Rating: 4
"Ram is so cheap why not put in 16GB? I guess console players must either sit too far away to notice or just love low detail textures."

I am with you there, I HATE low detail textures. But the bottleneck there with 8gb RAM isn't the amount of RAM, it's the speed of the RAM. 16gb filled up with uber-high res textures isn't going to be much benefit, especially with the slow RAM on the XBO. Now, if it were a perfect world, they would use 16gb of faster RAM and better GPUs... But a line has to be drawn somewhere and production started at some point.


RE: huh?
By BZDTemp on 5/20/2014 3:08:33 AM , Rating: 1
Considering the vast majority of PC buyers treat their PCs as consoles with regard to never upgrading their hardware, most PCs are really just as stuck as consoles, so your wish is really more of a dream than a possible reality.

With consoles there is some benefit to them all* having the same hardware; this means a possibility for more optimized use of the hardware and therefore getting more out of what is there. Of course that only goes so far, and as a current gaming PC is easily ahead of consoles in most respects, PC games can be more advanced and sometimes are.

I dream of a GTA title made to target the upper end of PCs only; such a game could be so much more than what we already see today.

*Of course we, again, see Microsoft fucking this up just as they did with the 360. Not to mention the One is essentially running Windows with a shell on top, which is why there is so much memory wasted on the OS.


RE: huh?
By FITCamaro on 5/20/2014 9:47:03 AM , Rating: 2
It actually would have just been GTA V with much higher resolution textures and maybe some better physics and effects. GTA V pushed the boundaries of what you can do in 512MB of RAM. And it was nothing short of amazing.


RE: huh?
By AssBall on 5/20/2014 11:47:52 AM , Rating: 2
^^ This

The resolution of your textures is not half as important as what your artists can do with the textures.


RE: huh?
By jasonb on 5/20/2014 7:32:25 PM , Rating: 2
Maybe this console cycle will be shorter. I can't imagine the current xbox one being kept on life support for 8 years like the xbox 360. It is just not nearly as beefy compared to today's PC's as the xbox 360 was on release.


RE: huh?
By Bateluer on 5/20/2014 10:19:03 PM , Rating: 2
> And NOBODY sells a gaming PC worth a damn at anything close to $400.

I'd like to point out that building a PC is not some monumentally difficult task. There's plenty of online resources and guides, and if you've got the brains to put the round peg into the round hole and the square peg into the square hole, you can build a PC.

Secondly, while I'll admit it's hard to build a PC comparable to the PS4 for a 400 dollar price tag, building a PC that exceeds the far weaker Xbox One for a 500 dollar price tag is fairly easy.


RE: huh?
By hanznfranzen on 5/26/2014 7:42:09 PM , Rating: 2
Yes, exactly. It seems there is always that one person that has to throw the "build a pc" comment into every console discussion and it has gotten old. I too game 99% of the time on a PC and you make a good point about the price and knowledge entry barrier into PC gaming for the masses, but my thing is that they don't sell Gran Turismo (and a few other exclusives) on PC, therefore consoles are relevant. So there is my reason. Some people need to think outside of their basement every once in a while.


RE: huh?
By WldAntc on 5/19/2014 1:33:13 PM , Rating: 2
If this means they won't do another lazy console-PC port then maybe a lesson was learned. If you primarily program for what has greater controls and better hardware you can dumb it down for consoles.
Crysis 2 was just the opposite and as a result left many PC gamers (who they should have shown a bit of loyalty to) thoroughly bored if not outright furious.


RE: huh?
By RjBass on 5/19/2014 5:36:46 PM , Rating: 2
They down voted you but what you said can't be said enough. Pay $500 for a limited and outdated console system or get an entry level gaming PC and upgrade it over time. That is what I have been doing for the last 8 years and it is truly a much better gaming experience.


RE: huh?
By Bateluer on 5/20/2014 10:16:22 PM , Rating: 2
The XB1 would have been state of the art had it been released in 2010 or 2011. In 2014, coming armed with a netbook CPU and a 2 year old low end GPU is downright bad. Akin to bringing a knife to a nuclear war.


RE: huh?
By FITCamaro on 5/19/2014 1:03:29 PM , Rating: 4
No. The PS4 also uses quite a bit of memory for other tasks.

http://www.dualshockers.com/2014/03/11/naughty-dog...


RE: huh?
By FITCamaro on 5/19/2014 1:04:59 PM , Rating: 3
There's been plenty of articles about how both consoles use 2.5-3GB of RAM for non-game purposes. The ability to instantly switch back to the main UI means both are constantly running the system OS. And then you add in ability to stream which takes up system resources. Ability to record gameplay. And more.


RE: huh?
By JasonMick (blog) on 5/19/2014 2:10:23 PM , Rating: 5
Good point on your previous post.

The situation is admittedly confusing.

What we know for sure is that for most memory (e.g. texture memory) Sony has the faster memory (GDDR5) while Microsoft has the cheaper memory (DDR3) so is probably more profitable.

As for how much Sony devs get, that's been a much debated and misunderstood point...

This piece for example, suggests that developers get up to 6 GiB on the PS4. But if the author is right, this must be some special case (as you'll see from later links).

http://gearnuke.com/memory-biggest-bottleneck-play...

This article gives some more possible insight....

http://www.lazygamer.net/general-news/ps4-gives-de...

Allegedly Sony is telling developers to expect 4.5 GiB of memory....

BUT there was some misinformation/inaccuracies in the source of that piece (a Eurogamer interview). Here's the clarification from Sony...

http://www.vg247.com/2013/07/26/ps4-has-up-to-5-5g...

Current games can have 5 GiB of GDDR5 on the PS4 and a 0.5 GiB page file (stored on HDD). The game controls 4.5 GiB of the GDDR5 memory, the OS controls 0.5 GiB although that chunk is guaranteed and fully owned by the game other than creation and destruction which are handled via proxy from the OS.

Sony in the past, though has bumped the memory size via firmware updates over the life of its console, gradually pushing the boundaries as its firmware matures...

http://www.joystiq.com/2010/02/23/70mb-of-addition...

It's entirely possible that Microsoft is doing the same thing, although I'm not sure.

I guess the long story short is Microsoft and Sony both offer developers 5 GiB of physical RAM, but in Sony's case, at least, that total is expected to possibly go up as some memory may be left idle for precautionary reasons.
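Tallying the figures from that clarification in one place (a rough sketch; the labels are paraphrased from the quoted statement, not an official breakdown):

```cpp
#include <cstdio>

int main() {
    // Figures as quoted above, in GiB.
    const double direct_gib   = 4.5;  // GDDR5 the game manages directly
    const double flexible_gib = 0.5;  // guaranteed to the game, allocated via OS proxy
    const double pagefile_gib = 0.5;  // backed by the HDD page file
    std::printf("PS4 game-visible memory: %.1f GiB physical + %.1f GiB paged = %.1f GiB\n",
                direct_gib + flexible_gib, pagefile_gib,
                direct_gib + flexible_gib + pagefile_gib);
    return 0;
}
```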


RE: huh?
By FITCamaro on 5/19/2014 2:16:50 PM , Rating: 2
I'd expect both consoles to improve over time. For better or worse, Microsoft seems very involved in trying to give developers as much resources as possible. Likely because they're at a hardware disadvantage by default.


RE: huh?
By StevoLincolnite on 5/19/2014 7:18:09 PM , Rating: 2
I think the reasoning of WHY so much Ram has been reserved is simply because... Sony and Microsoft can always give Ram back in years time, but they can't taketh away.

Being new consoles things aren't super tight and optimized in the code department, new features probably will still need to be implemented and they would have a long-term goal in mind.

Over time, much like the Xbox 360 and PlayStation 3, their respective Operating Systems will get smaller, leaner, faster.


RE: huh?
By nikon133 on 5/19/2014 5:26:55 PM , Rating: 2
256MB is for background processes, in my understanding... downloads, uploads and such. PS4 can receive firmware (maybe games too, not sure) updates while in standby mode.

My understanding is that developers have around 6GB free by default, though there were stories that an additional .5 - 1GB can be made available if required (presumably by offloading parts of the OS from RAM). I'm kind of surprised by this complaint, at least at present... considering that Crytek managed to make all their current and past engines run on the PS3's 256MB of RAM (of which at least 50MB was used by the OS after later optimizations, down from 120MB on release - as much as I could find), they should be on cloud 7 with 6GB available. Even if they spare 2GB for graphics, they still have 4GB for game code. If they could make Crysis 3 run on 200MB or less, I can't see why the next 2 - 3 Crytek engines will not be comfortable with 4GB - that is what? 20x more?


RE: huh?
By FITCamaro on 5/20/2014 8:01:48 AM , Rating: 2
PS3 had 256MB just for graphics and then 256MB for other tasks. But the GPU could indirectly access the other half and vice versa for the CPU/SPUs.


RE: huh?
By nikon133 on 5/20/2014 5:00:18 PM , Rating: 2
I know, I'm talking about RAM available to CPU in PS3.

Out of 256MB, early versions of PS3 OS were taking around 120MB for system needs, leaving around 125MB for games. Sony has optimized their OS down to 50MB of system RAM, leaving a bit over 200MB for games.

PS4 can spare 2GB for graphics needs and still have 4GB for game code, which is 20x more than what PS3 has available for games.


RE: huh?
By random2 on 5/19/2014 9:09:41 PM , Rating: 2
I have a sneaking suspicion consoles are going to be upgraded a little more frequently in the future, or left behind. Two or three years down the road it will be time to get a new console or move into a PC for gaming. Actually, that's probably a pretty good idea now if you like a higher level of detail and physics.


RE: huh?
By althaz on 5/20/2014 1:13:04 AM , Rating: 3
Sony's OS uses 3.5GB of RAM (yes, that's more than MS reserves in the XBox).

The difference is that on the PS4 game devs can request that RAM (although 2GB is the most it will give up). Also the PS4 doesn't explicitly "reserve" this memory, it does so implicitly - that means you (the developer) have to choose how to get access to that extra memory. On the plus side, if you need some of it for a short time, you can simply suspend things that definitely won't be running anyway and access that memory without any ill effects.

The XB1 explicitly reserves this RAM and there's an API to request the release of resources to your game. Initially the XB1 didn't allow devs to take that RAM off the OS.

The XB1 setup (with regards to memory management) is clearly better... but it doesn't matter too much, because both systems have plenty of memory.

The distinguishing hardware isn't memory available (which is the same between the two consoles, aside from with launch titles), but the bandwidth and rendering power available (heavily favouring the PS4).


RE: huh?
By piroroadkill on 5/21/2014 4:54:19 AM , Rating: 2
Uh, the difference is already massively apparent.

Xbox One games are shipping at 1280×720 or 1408×792 (a resolution pulled from Microsoft's ass for the purpose of looking better than '720p').

PS4 games are shipping at 1600×900 or 1920×1080.

The difference, as you said, will only get larger as assets fill the space.


RE: huh?
By darkpuppet on 5/22/2014 11:18:02 AM , Rating: 2
PC gaming sucks. Deal with it.

It's expensive, loud, power hungry, OSes take up too much memory, Windows catches colds, Apple has no games, and Linux is a pain in the ass.

Console gaming sucks. Deal with it.

It's static, can't be upgraded, the games don't look as fancy as PC games, they're closed systems, and poor people can afford them.


What he is really saying is
By Rage187 on 5/19/2014 1:03:22 PM , Rating: 2
I'll paraphrase what he is saying:

"We shouldn't have to optimize for consoles. It should just work like PCs."




RE: What he is really saying is
By FITCamaro on 5/19/2014 1:06:06 PM , Rating: 2
Never before though has that been so close to the case. System architecture is nearly identical to an actual PC. The only difference now is the unified memory instead of separate system and graphics memory.


RE: What he is really saying is
By Monkey's Uncle on 5/19/2014 1:49:29 PM , Rating: 1
+1 to this.

Game developers are more interested in getting their products out the door than making them as efficient and low-impact on hardware as possible.

Gone are the days when developers would drop into assembly code to create code that runs as fast as possible and uses next to no memory, or run their code through profilers to find bottlenecks, inefficiencies and blatant failures.

These days it is all about caching and throwing hardware at problems to work around inefficient coding techniques and slow hardware.

My advice to Crytek: 5GB of space is more than enough to run even the highest graphic quality game. Stop whining and hire smarter programmers that know their way around a profiler.


RE: What he is really saying is
By inighthawki on 5/19/2014 2:02:10 PM , Rating: 2
It's no longer feasible to sit there and hand optimize every little piece of code. Game engines (and the games themselves) are likely an order of magnitude more complex than you think they are. The sheer size of most of today's games requires optimizing for development time and code readability over raw performance. It's not 1995 anymore. Games are quite complex.

quote:
My advice to Crytek: 5GB of space is more than enough to run even the highest graphic quality game. Stop whining and hire smarter programmers that know their way around a profiler.

This is so ignorant. Crytek probably already employs some of the most brilliant graphics programmers on the planet. I strongly suspect that they know way more about their needs than you do. I'm sure they're more than capable of fitting into their budgeted memory constraints, but that doesn't mean they can't do better with more.


RE: What he is really saying is
By bsim50 on 5/19/2014 2:11:13 PM , Rating: 2
quote:
"It's no longer feasible to sit there and hand optimize every little piece of code."

I'm pretty sure it's possible to strike a sensible balance between the two extremes of writing assembly code and doing nothing. 32-bit games typically have a 2GB limit no matter how much RAM you've got installed and Crysis 3 ran just fine within that. If Crytek can't make a decent half-railroad FPS with more than double that, then they do indeed need better devs...


RE: What he is really saying is
By inighthawki on 5/19/2014 3:22:21 PM , Rating: 2
quote:
I'm pretty sure it's possible to strike a sensible balance between the two extremes of writing assembly code and doing nothing

Sure, and I would bet money that they do have optimized codepaths for plenty of things. ASM... maybe. Given the number of architectures that need to be supported these days, and compiler differences, they might only use optimized C/C++.

quote:
32-bit games typically have a 2GB limit no matter how much RAM you've got installed and Crysis 3 ran just fine within that. If Crytek can't make a decent half-railroad FPS with more than double that, then they do indeed need better devs...

2GB of system memory, sure. Remember that the consoles have 5GB of SHARED memory. That means their budget for CPU + GPU memory is 5GB, the equivalent of adding system memory + GPU memory on the PC. It's not uncommon for newer games to use upwards of 1GB of system memory, and newer games with lots of high res textures can easily use the existing 3GB of memory on high end video cards. That's already over 4GB, and 5GB will be right around the corner.


RE: What he is really saying is
By Monkey's Uncle on 5/19/2014 4:45:56 PM , Rating: 1
With the above I think it would be worthwhile to Crytek and any other game developer to entice Microsoft into marketing memory expansion add-ons for their consoles. Either that or start thinking of creative ways around the limitations of the hardware platforms on which they choose to develop their games.

XB1 is by far not the only gaming platform out there -- just sayin'.


RE: What he is really saying is
By inighthawki on 5/19/2014 5:07:20 PM , Rating: 3
quote:
Either that or start thinking of creative ways around the limitations of the hardware platforms on which they choose to develop their games.

I think you might be missing the point. It's not like they aren't capable of doing it. I'm sure their engine is more than capable of scaling to a large range of limitations. They just want to be able to do more.


RE: What he is really saying is
By Reclaimer77 on 5/20/2014 9:45:54 AM , Rating: 4
I'm not saying they are sloppy coders or anything like the OP is, but these are the same guys who made the now infamous "Crysis". A game that brought even top end gaming PCs to their knees at the time.

So I'm not sure that efficiency and working within a given performance envelope are ideals they necessarily subscribe to.

They need to adopt a different philosophy for consoles, obviously.


RE: What he is really saying is
By inighthawki on 5/20/2014 11:14:41 AM , Rating: 2
True, but in fairness, Crysis is still today one of the best looking games out there, and it's almost 7 years old. Although I will admit the engine used for Crysis could definitely have been optimized a tad. I haven't played it, but I remember hearing Crysis Warhead was a lot better, but maybe the visual quality is a lot lower?


RE: What he is really saying is
By FITCamaro on 5/19/2014 2:18:16 PM , Rating: 2
Actually this new generation is allowing developers to go back closer to the hardware. That's the whole idea behind Mantle on the PC. Taking away layers of abstraction to allow for better performance. That comes at the cost of more complex code.


RE: What he is really saying is
By ats2 on 5/19/2014 5:15:04 PM , Rating: 2
5GB of space is not more than enough. Esp not on a 64b machine with an expected lifetime of 5-8 years. Textures alone can take upwards of 2-3 GB for high quality textures. Game states etc can easily take 2+ GB for currently released games.


RE: What he is really saying is
By nikon133 on 5/19/2014 5:35:43 PM , Rating: 2
They did texture streaming before; they will just have to do it again, at some point.


RE: What he is really saying is
By nikon133 on 5/19/2014 4:55:36 PM , Rating: 2
I will disagree with that.

PCs come in a large variety of CPU and GPU power, available RAM etc. Optimizing for every possible combination would be real hell... so the best developers can go for is to make a game that will scale reasonably well across the hardware range. Sort of a jack of all trades - good at everything, excels at nothing.

But on console, they have only one, set in stone hardware configuration. They can analyse each part of the game and fine-tune effects, textures, geometry... to keep the game running smooth be it only one character on the screen, or big set-piece fight going on.

I do believe developers don't optimize much at present, which is why we are getting sub 1080p resolution or sub-60fps framerate - or both... but as consoles age, developers will simply have to advance their optimization in order to keep pace, or at least reduce lag behind PCs.

That is the only thing that kept previous generation afloat for so long. Without sharp optimization, we would never get visuals from Forza, GT, Halo, Uncharted, GoW (both franchises)... on hardware limited as it is in PS3 or X360.


RE: What he is really saying is
By Bateluer on 5/20/2014 10:22:20 PM , Rating: 2
Keep in mind, optimization for the new console generation amounts to lowering the resolution and lowering the frame rate targets. They are common PC hardware, following mostly standard PC APIs, that most developers go to school to learn. The X360 and PS3 were utterly unique hardware that took a good year or two for developers to really learn; before they also resorted to lowering resolutions, lowering frame rates, and LODing the hell out of everything.


This generation is retarded
By coburn_c on 5/19/2014 4:15:19 PM , Rating: 2
Consider that on the last generation they ran the entire system and some pretty complex games on 512MB of RAM. Now it takes 3GB just to run the firmware? You can run Windows 8 and a state of the art game on 4GB of combined RAM and never page out.

Not to mention the firmware is tying up what, 3 of the 8 cores? The 8 netbook-level cores from the second-tier CPU company?

When the last generation of consoles came out they were state of the art. The Xbox 360 had graphics features not yet available in PC cards. They were 2:1 subsidized and had custom-built processors with more cores than most servers of the time. This generation of consoles is an absolute embarrassment and you'd be a fool to buy either one.




RE: This generation is retarded
By CraigslistDad on 5/19/2014 4:23:47 PM , Rating: 2
Don't both consoles have to leave headroom so that they're able to stream/record without hitching whatever game is playing?


RE: This generation is retarded
By retrospooty on 5/19/2014 4:27:54 PM , Rating: 2
"When the last generation of consoles came out they were state of the art."

You are way over-dramatizing that... The bottleneck for both the PS3 and X360 was the GPU. The GPUs were horribly slow compared to high-end PCs of the day, same as with the PS4 and XBO. The CPU was irrelevant because the GPU, and most specifically the video RAM (or should I say the lack of it), was the issue. At least today one of them has some decent VRAM in it. The worst bottleneck is addressed, and that is a lot.

"his generation of consoles is an absolute embarrassment and you'd be a fool to buy either one."

You are a PC guy, I get it, I am too, but these consoles aren't made for us. They are made for people that just want to play the best games and not have to mess with a PC and all the hassle that goes with it.

You guys need to understand that it isn't a PC-or-console decision that people are making. People who want a PC will get a PC, people who want a console will get a console, and people who want both will get both, but there are basically next to zero people on the fence deciding between the two. The PS4 has one primary competitor, the XBO, and vice versa.


RE: This generation is retarded
By inighthawki on 5/19/2014 4:35:32 PM , Rating: 2
quote:
The PS4 has one primary competitor, the XBO, and vice versa.

B-b-but but... Wii U...


RE: This generation is retarded
By retrospooty on 5/19/14, Rating: 0
RE: This generation is retarded
By Reclaimer77 on 5/19/2014 6:15:56 PM , Rating: 2
Who started this nonsense about the last generation of consoles being on par with a top gaming PC? That's just ridiculous!

This generation has the most bang for the buck by far. No other possible conclusion can be made.


RE: This generation is retarded
By retrospooty on 5/19/2014 6:56:03 PM , Rating: 1
"Who started this nonsense about the last generation of consoles being on par with a top gaming PC?"

Damn Liberals. ;)


RE: This generation is retarded
By coburn_c on 5/20/2014 4:02:10 AM , Rating: 2
quote:
The GPUs were horribly slow compared to high-end PCs of the day,


That's not true; the 360's graphics chip was the first unified-shader card ever produced. You couldn't get that on the PC when the console launched, and while GTA5 looks unacceptable now, in the context of when the console came out it was mind-blowing. Gears of War added Unreal Engine features that hadn't been seen on PC.

This gen of consoles adds nothing to the field, and it will be surpassed by mobile chipsets before it hits 3 years old, let alone 8.


RE: This generation is retarded
By piroroadkill on 5/21/2014 8:50:22 AM , Rating: 2
Actually, no, the Xbox 360 represented the first consumer GPU with fully unified shaders.

That was interesting, novel and impressive.

The Xbox One merely has a slightly old middling AMD GPU.


RE: This generation is retarded
By inighthawki on 5/19/2014 4:33:17 PM , Rating: 2
quote:
Consider that on the last generation they ran the entire system and some pretty complex games on 512MB of RAM

And they also look like garbage. Have you seen GTA5? I can't even tell if the game has shadow maps, and the texturing looks like blobs of blurred color.

quote:
Now it takes 3GB just to run the firmware?

The "firmware" is a full blown OS. The Xbox One is running a modified version of Windows 8, and the PS4 has it's own OS based on FreeBSD.

quote:
You can run Windows 8 and a state of the art game on 4GB of combined RAM and never page out.

That depends a lot on how you define the terms in that sentence:
-Define "paged out"
-Define "combined": 4-0, 3-1, 2-2, 1-3?
-Define "state of the art": an awesome game on low settings, or maxed out?


RE: This generation is retarded
By nikon133 on 5/19/2014 5:56:47 PM , Rating: 2
Well, GTA5 is a large open-world game. Sacrifices have to be made to make it run on 200 MB of RAM (and 256 MB of VRAM). How do you think this game would run on a PC with the same resources?

That being said, some exclusives were, and still are, among the best-looking games in their respective genres. Uncharted 2 and 3 come to mind. God of War. Killzone 2 and 3. Some of them (at least) would not be ashamed to run on a modern PC.


RE: This generation is retarded
By FITCamaro on 5/20/2014 11:21:08 AM , Rating: 2
GTA 5 looks amazing for what it is. Yes, I'd still love to see it on PC, but given the limited resources of the PS3/360, what Rockstar pulled off is impressive.


RE: This generation is retarded
By FITCamaro on 5/20/2014 9:53:36 AM , Rating: 2
quote:
They were 2:1 subsidized


And the economy was quite different in 2005/2006 than it is today.

If the economy were good we might have seen better machines that were more heavily subsidized. But it's terrible around the world. Look at game prices: brand-new games are marked down $10-20 within a month of release because nothing is selling. Sure, the major hits sell a lot, but for more ordinary titles they're cutting prices almost immediately just to try and move them. When I got my PS4 in February, Killzone was the best game for the PS4 and it was already down to $42.


RE: This generation is retarded
By jamescox on 5/20/2014 9:00:46 PM , Rating: 2

You are missing a big factor here. The high end has shifted significantly from when the Xbox 360 and PS3 came out. If you look at what video cards were available at the time, even the high end was limited to 75 to 100 W; they did not have the extra power connectors, so the PCIe limit of 75 W applied. That is doable in a console form factor. The current high end has gotten increasingly ridiculous: a single high-end video card is in the 250 to 300 W range now, and we have completely ridiculous systems going up to 1500 W. That is a space heater, and I can't put a space heater in the entertainment center under my television.

Everything in engineering is a matter of trade-offs. Microsoft could have gone up to 16 GB, since they used cheaper, more available DDR3, but I don't think there is much point if their power-limited GPU/CPU is mostly limited to 720p, or at best 1080p. There is no reason to support more than 1080p for something meant to connect to a television; 4K will not represent a significant market segment for a long time. The power limitations are why you have 8 low-power cores and a mid-range GPU. PCs are only higher-powered due to their increased power budgets.

Most games are not CPU-limited, so 4 or more CPU cores is probably sufficient. I know people still gaming on old i7-920s, i7-870s, and even a few Core 2 Duos; those CPUs can still do quite well if paired with a newer video card. Once Microsoft gets DX12 properly multi-threaded, that should remove any CPU bottlenecks, if they are even happening. The main bottlenecks will be memory bandwidth and GPU compute resources.

If you look at the Steam statistics, most people are playing with 4 or 8 GB of system memory:

2 GB 10.85%
3 GB 15.82%
4 GB 21.27%
5 GB 1.44%
6 GB 6.54%
7 GB 2.09%
8 GB 27.43%

This implies that game developers still need to target systems with 4 GB or less, since that is still about 50% of users; nearly 90% have 8 GB or less.
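Summing those buckets gives roughly 48% at 4 GB or less and about 85% at 8 GB or less, broadly in line with those figures; a quick check:

```python
# Sum the Steam survey buckets quoted above to check the rough percentages.
system_ram_pct = {2: 10.85, 3: 15.82, 4: 21.27, 5: 1.44, 6: 6.54, 7: 2.09, 8: 27.43}

four_or_less = sum(p for gb, p in system_ram_pct.items() if gb <= 4)
eight_or_less = sum(system_ram_pct.values())

print(f"<= 4 GB: {four_or_less:.1f}%")    # ~47.9%
print(f"<= 8 GB: {eight_or_less:.1f}%")   # ~85.4%
```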

For video cards it is even more strongly skewed, with most people using 1 GB cards, about 20% using 2 GB cards, and around 13% still on 512 MB cards.

512 MB 12.96%
1024 MB 36.35%
2047 MB 16.53%
2048 MB 3.95%
4095 MB 1.32%
Other 13.77%

The Steam data is here: http://store.steampowered.com/hwsurvey/

To target current machines, the target should be between 4 and 8 GB of system memory and only 1 to 2 GB of VRAM. The UMA architecture of the consoles should require less total memory overall: with the PC architecture you have to load textures and other data into system memory and then copy them to VRAM, whereas with a single, shared pool of memory no extra copy should be needed.

I would suspect any issues right now are due to immature implementations of GPU/CPU shared memory. I would hope they are making a version for the consoles that does not do unnecessary memory copying; the PC version still needs the copy because of separate VRAM. APU-style shared memory is where things are headed, so the code needs to change to take advantage of it.
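A loose analogy for that extra copy, in Python rather than any real graphics API: on a discrete-GPU PC the asset effectively lives twice, once in system memory and once in VRAM, while a unified pool lets both sides reference the same allocation.

```python
# Analogy only: bytes()/bytearray() make copies (like staging system RAM -> VRAM),
# while memoryview() shares the same underlying storage (like a unified pool).

texture_on_disk = bytes(64 * 1024 * 1024)        # pretend 64 MB texture

# Discrete-GPU style: load into "system memory", then copy into "VRAM".
system_copy = bytearray(texture_on_disk)          # first allocation
vram_copy = bytes(system_copy)                    # second allocation (the extra copy)

# Unified-memory style: CPU and GPU views alias the same allocation.
unified_pool = bytearray(texture_on_disk)         # single allocation
gpu_view = memoryview(unified_pool)               # no additional copy made

print((len(system_copy) + len(vram_copy)) // 2**20, "MiB resident (two copies)")  # 128
print(len(unified_pool) // 2**20, "MiB resident (one copy)")                      # 64
```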


Truth
By Ammohunt on 5/19/2014 1:47:08 PM , Rating: 3
Yet another reason that PC gaming is the better option. The reason these consoles only have 8 GB of RAM is the per-unit cost that more would add. You eventually get into low-end gaming-rig price territory, where consoles make less and less sense for gaming.




RE: Truth
By FITCamaro on 5/19/2014 2:15:43 PM , Rating: 3
Be real. 8 GB of RAM is plenty for a console, especially one targeting 1080p gaming. It's only been about a year since higher-end gaming rigs started coming with 16 GB of RAM standard. Granted, that's just system RAM, versus the console using its 8 GB for both game code and graphics, but still.


RE: Truth
By HostileEffect on 5/19/2014 2:47:21 PM , Rating: 2
Laptops came with 6 GB of RAM over three years ago, and I believe 8 GB is still the bare minimum, with 16 being standard. RAM is so cheap now that it was pocket change for me to throw in another 16 GB for 24 GB total.

Can't upgrade consoles, though. Funny enough, my original Crysis box from 2007, no less, is still running a P35 chipset, originally with an E4400/8800GTS 640. When the FX died I slapped in a Q6600 @ 3.2, a GTX 670, and around 8 GB of RAM, all parts that can still be upgraded/overclocked today. Not bad for a seven-year-old computer that I recently turned into a backup/file server.

Still got my PS2 on my desk; if I ever find the time I'll get around to playing that library, unless I opt for an emulator on my laptop instead. /facepalm


RE: Truth
By CraigslistDad on 5/19/2014 4:49:01 PM , Rating: 2
Pretty sure 4 is the standard, 8 is for enthusiasts, and 16+ is for people who work with VMs or video creation. Where are you pulling your numbers from, exactly?


RE: Truth
By retrospooty on 5/19/2014 5:20:08 PM , Rating: 2
Yeah, that is way outlandish. Today the standard is definitely 4, with many higher-end systems offering 8 GB as an option.


RE: Truth
By HostileEffect on 5/19/2014 5:29:15 PM , Rating: 2
From the last two gaming laptops I purchased, along with the last four media and netbook computers that were either given to me or that I skimmed off the top of trash bins.


RE: Truth
By p05esto on 5/19/2014 9:46:37 PM , Rating: 2
4 GB standard? What are you smoking? 8 GB is the minimum, and that's if you're on a tight budget... 16 GB is standard on any halfway decent system. Nothing to brag about, either: if you start opening a lot at once, editing video, doing high-resolution image editing, etc., it will get eaten up fast.


RE: Truth
By inighthawki on 5/19/2014 10:17:42 PM , Rating: 2
Are you talking about laptops? Because that's what the above poster was talking about. Most entry-level laptops ship with 4 GB of RAM, and mid-to-high-tier laptops ship with 8 GB. The only ones I've ever seen with 16 GB are gaming laptops, and I've never seen one with 32 GB.

I also wouldn't go as far as to call memory "dirt cheap" or anything. $150 for 16GB is still well within the price range where most sane people would realize they'd rather have the money. You guys are making it sound like it's an impulse buy at the checkout in a store.

"16GB for $150, eh, might as well. I'll just slap that in my computer, no biggie"


RE: Truth
By retrospooty on 5/19/2014 11:54:05 PM , Rating: 2
What are you talking about? Your average standard laptop today, in 2014, comes with 4 GB of RAM. That is the standard starting point. It has nothing to do with whether you're poor or not; we're talking about product specs, and 4 GB is the starting point. The question was never what you should get for video editing, it was about the average laptop starting point. Furthermore, it has only just become four gigabytes; last year it was two.


RE: Truth
By FITCamaro on 5/20/2014 7:54:14 AM , Rating: 2
RAM is definitely not as cheap now as it was a few years ago. I paid like $40-50 for the 8GB of Corsair XMS3 1600MHz memory I have in my gaming box. Same thing now costs $73 on Amazon.


typo in article
By Dr K on 5/19/2014 2:13:15 PM , Rating: 2
"It also packs 16 4-Gb GDDR5 modules (K4G41325FC-HC03) from Samsung Electronics Comp., Ltd. (KRX:005930) (KRX:005935), for a grand total of 8 GB of GDDR5."

Note that this should say "16 512-MB GDDR5 modules" which adds up to 8 GB, per the quoted source.




RE: typo in article
By shabby on 5/19/2014 2:38:58 PM , Rating: 3
Memory module densities are always quoted in bits, not bytes, so 16 x 4 Gbit is correct.


RE: typo in article
By NICOXIS on 5/19/2014 3:07:21 PM , Rating: 2
Correct.

Gb = gigabits
GB = gigabytes

Gb is different from GB. The individual modules in a memory arrangement are usually specified in bits, and the total capacity in bytes.
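A quick sanity check of the arithmetic both the article and the correction are pointing at: sixteen 4-gigabit chips come out to 8 gigabytes, or 512 megabytes per chip, either way you phrase it.

```python
# Convert module density in bits to total capacity in bytes.
modules = 16
bits_per_module = 4 * 2**30          # 4 Gbit per chip
total_bytes = modules * bits_per_module // 8

print(total_bytes / 2**30, "GiB total")            # 8.0 GiB
print(bits_per_module // 8 / 2**20, "MiB per chip")  # 512.0 MiB
```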


RE: typo in article
By AssBall on 5/20/2014 11:59:47 AM , Rating: 2
So is a Gb counted in powers of two (1024 steps), the way GB usually is for bytes, or is it really a billion bits?


RE: typo in article
By purerice on 5/20/2014 1:16:59 PM , Rating: 2
It depends on which kind of "giga" you're talking about, binary or decimal:
http://en.wikipedia.org/wiki/Mebibyte
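For what it's worth, the difference the link is pointing at, worked out: the binary prefix (GiB, 2^30) and the decimal prefix (GB, 10^9) disagree by about 7% at this scale.

```python
gib = 2**30          # gibibyte: what RAM capacities effectively use
gb = 10**9           # gigabyte: the strict SI decimal unit

print(gib)                       # 1073741824
print(gb)                        # 1000000000
print(f"{gib / gb - 1:.1%}")     # 7.4% larger
```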


Glory days
By SteelRing on 5/19/2014 2:42:41 PM , Rating: 2
How quickly we have forgotten: 20+ years ago, a 486DX with 4 MB of expanded memory (EMS) was the BOMB!

And now 8 GB is not enough... kids today... tsk tsk tsk.




RE: Glory days
By retrospooty on 5/19/2014 2:49:17 PM , Rating: 2
LOL... When I was your age, I paid $150 for a 4mb upgrade and I liked it!


RE: Glory days
By Proxes on 5/19/2014 4:01:58 PM , Rating: 2
Guess you didn't read the forums when the specs for the XBOX One and PS4 were released. People were all about how 8GB is overkill for 1920x1080 gaming.

There's too much "it's good enough" going on in the hardware and gaming communities; it has driven me to stop reading the drivel. Way too many 20-somethings out there who have only ever known Windows XP.

My PC has had 16GB of RAM for three years now, and my video card has 6GB. I welcome growth.


RE: Glory days
By CraigslistDad on 5/19/2014 4:19:51 PM , Rating: 2
"My PC has had 16GB of RAM for three years now, and my video card has 6GB. I welcome growth."

Could you expand on this?


RE: Glory days
By Proxes on 5/19/2014 6:44:16 PM , Rating: 1
i7-2600k on an ASUS P67 Deluxe, 4 x 4GB DIMMs.

GTX TITAN.


"I want people to see my movies in the best formats possible. For [Paramount] to deny people who have Blu-ray sucks!" -- Movie Director Michael Bay














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki