
Benchmarks or GTFO!

Yesterday when Apple unveiled the new iPad, the crew from Cupertino took some time to brag about its new A5X processor in comparison to NVIDIA's Tegra 3. Apple certainly isn't widely known for offering up benchmarks of its own, so for real numbers we'll likely have to wait until iPads land in the hands of reviewers and geeks around the web.
Apple used the iPad unveiling to boast that the A5X chip inside the new iPad is two times faster than A5, and four times more powerful in graphics performance than the Tegra 3.
NVIDIA isn't buying those claims without proof. The graphics company wants to know how Apple came by that number. Ken Brown, a spokesman for NVIDIA, stated, "[It was] certainly flattering" for Apple to compare its newest chip to their part.
Brown continued, “We don’t have the benchmark information. We have to understand what the application was that was used. Was it one or a variety of applications? What drivers were used? There are so many issues to get into with benchmarks.”
Anyone who follows tech knows benchmarks are often handpicked to favor one particular brand over another when it comes to claims such as these. So it should be interesting to see whether the new iPad’s performance lives up to the claims.

Source: ZDNet



RE: Apple Picked the Wrong @!**@fight
By testerguy on 3/9/2012 11:49:13 AM , Rating: 2
It's only irrelevant if the iPad3 will run at a lower resolution, which it won't.

You don't understand how GPUs are benchmarked, then.

This is similar to how the iPhone4 GPU benchmarked lower than the iPhone3GS. This was due to the screen resolution doubling (sound familiar?) while using the same exact GPU.


So while the MP4 is certainly faster, the only thing that matters is real-world use. And you have to take that higher resolution into account. You don't test at lower resolution to try to compare, you test exactly how the devices are meant to be used.

This is not true in the context of Apple's claim. For their claim to be true, the GPU simply has to be 4x faster in the same scenario, which means the same resolution. The first part of your sentence ("the MP4 is certainly faster") is the part Apple claimed. The rest is a rant about how resolution impacts FPS - that doesn't change the GPU's performance. Furthermore, performance arguably has to include resolution, since as I've said in another post, 100 FPS at a resolution of 10 x 10 isn't good performance. To confuse FPS with graphics performance is your next failing.

And in the end, with the iPad3, you'll get a screen that is much higher resolution, looks better, and still has a graphics advantage over currently released Android tablets. I just think it's not going to be as rosy as Apple claims.

I think what Apple claimed and what you heard are two different things.

Screen resolution doubled (4x more pixels). If the new improved GPU system (including memory, speed, etc.) can't push all those pixels fast enough, the user experience is going to be compromised (e.g., laggy operation). It doesn't matter if it's twice as fast at the same lower resolution; it only matters whether it can handle the new higher resolution and provide the same or better experience. So in a nutshell, we absolutely want to see whether the new GPU can keep up. Users don't care about theoretical performance at lower resolution, they only care whether their shiny new toy works better than their old one.
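As a back-of-the-envelope sketch of that concern (assuming a purely fill-rate-limited workload where rendering cost scales linearly with pixel count, and taking Apple's "2x the A5" figure at face value — both assumptions, not measurements):

```python
# Rough check: can a GPU that is N times faster sustain the old frame
# rate after a resolution jump? Assumes rendering cost scales linearly
# with pixel count (fill-rate limited), which real workloads only
# approximate.

def relative_fps(gpu_speedup, old_res, new_res):
    """New device's FPS relative to the old one (1.0 = identical)."""
    old_pixels = old_res[0] * old_res[1]
    new_pixels = new_res[0] * new_res[1]
    return gpu_speedup * old_pixels / new_pixels

# iPad 2 (1024x768) -> new iPad (2048x1536), with a 2x-faster GPU:
print(relative_fps(2.0, (1024, 768), (2048, 1536)))  # 0.5
```

Under these assumptions, a 2x GPU driving 4x the pixels lands at half the frame rate at native resolution; it would take a full 4x speedup just to break even.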

All of the above is analysing the performance of the iPad as a whole. It has nothing to do with benchmarking the GPU; it's a measure of user experience. Those are not the same thing.

Whether the GPU is connected to a 50-mile monitor with 200 trillion pixels, or a 10x10 display, it is still the same GPU, and it still has the same performance relative to other GPUs.

And then you say 'Get with the program.'? Talk about clueless.

RE: Apple Picked the Wrong @!**@fight
By testerguy on 3/9/2012 12:00:20 PM , Rating: 2
I've just seen you compared the iPhone 3GS to iPhone 4 (rather than 4 vs 4S) and said they benchmarked differently.

They have the same GPU - it benchmarks exactly the same.

The links do still serve a purpose in proving that benchmarks are at a constant resolution.

RE: Apple Picked the Wrong @!**@fight
By theapparition on 3/9/2012 1:42:14 PM , Rating: 2
Glad you finally realized that.

Here's the phone benchmark on Anandtech. Since you seem hung up on only caring about individual component performance, try taking a look at how the phone performs.

iPhone3GS: 14.5
iPhone 4: 5.9

iPhone3GS: 24.5
iPhone 4: 16.2

From Anandtech's own commentary:
Keep in mind that with GLBenchmark 2.0 we still cannot run at any resolution other than native – in this case 800x480 (WVGA) – and the same applies for other devices in the suite; they're all at respective native resolutions.

The reason for the iPhone 4 lagging iPhone 3GS is display resolution, which unfortunately right now we can't test at anything other than native.
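The quoted numbers can be sanity-checked with simple pixel arithmetic. The native resolutions below (480x320 for the 3GS, 960x640 for the iPhone 4) come from the devices' spec sheets, not from the quote, and frames-times-pixels is only a crude throughput proxy:

```python
# Does the iPhone 4's lower FPS reflect a slower GPU, or just 4x as
# many pixels to fill? Normalize each score by pixels per frame.
# Native resolutions: iPhone 3GS 480x320, iPhone 4 960x640 (Retina).

def pixels_per_second(fps, width, height):
    # Crude throughput proxy: frames/sec times pixels per frame.
    return fps * width * height

gs3 = pixels_per_second(14.5, 480, 320)  # iPhone 3GS, first quoted score
ip4 = pixels_per_second(5.9, 960, 640)   # iPhone 4, first quoted score

print(ip4 / gs3)  # ~1.63
```

The FPS drop (14.5 to 5.9, about 2.5x) is smaller than the 4x jump in pixel count, so per-pixel throughput actually rose on the iPhone 4 — the raw gap in the chart is a resolution effect, not a GPU regression.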

By testerguy on 3/14/2012 4:54:18 PM , Rating: 2
The comment I made about comparisons of the iPhone 4 to the 4S still proves my point every bit as much. All of them, and benchmarks against other phone GPUs, are at the same resolutions, not native resolutions.

I can't believe you posted that quote from Anandtech, which actually further proves my point.

'Unfortunately right now' they can't test on 'anything other than native'. That's an APOLOGY. Because, unlike you, Anand knows how proper benchmarks work.

Here's another quote from him:

they're all at respective native resolutions. GLBenchmark 3.0 will fix this somewhat with the ability to render into an off-screen buffer of arbitrary size.

Can you read that? Keyword - FIX. In all subsequent reviews where there has been an option to keep the resolution constant, he has.

As for why it's wrong, I will address that in my reply to your other failed argument on this topic.

RE: Apple Picked the Wrong @!**@fight
By theapparition on 3/9/2012 1:28:29 PM , Rating: 2
You don't understand how GPUs are benchmarked, then.

I understand completely. You don't understand that people don't care about benchmarks on equal footing, only how a device performs in its configured state. Only a complete moron would believe a 4-core part of the same series would perform worse than its two-core counterpart on equal settings. I've never said anything to the contrary. The only thing that matters is whether the new part works better at the higher resolution.

What part of that don't you get?

I can't possibly believe you're arguing this. It's pointless, give it up.

Your comment below about testing an engine on equal ground shows your complete lack of knowledge of the real world. The whole package matters, not trying to interpret the sum of its components.

I can see you in a product design meeting right now.

You: "Yes Mr. Product Manager, the overall system is slower, it's quite laggy. But, but.....the GPU is faster. It's twice as fast as the old product tested on the same settings."

PM: "But we don't have the same settings, do we? Didn't you design to the human factors requirements definition?"

You: "But I can produce a benchmark that shows it's twice as fast at the old settings"

PM: "And that leaves us with an unusable product"

You: "I don't care, it's twice as fast"

Boss: "Clear out your desk. Security!"

And for the record, I don't trust either Apple's or Nvidia's claims, which have been wildly inflated in the past. No amount of your doublespeak will justify those claims.

By testerguy on 3/14/2012 5:07:32 PM , Rating: 2
I understand completely.

No, you don't. You seem to think that GPU benchmarks are a 'human factors' metric. Guess what, they aren't. They are what they are described as - a way of comparing the relative capabilities of GPUs. No more, no less.

You don't understand that people don't care about benchmarks on equal footing

My point doesn't depend on what people 'care' about. It depends on the reality of how benchmarks work. Again, you're making a fundamental mistake in not realising what GPU benchmarks are. Your point is essentially that you don't care much about GPU benchmarks. That doesn't mean that the way benchmarks are carried out changes. Testing at native resolution is another, different, and equally flawed metric in other ways. It is not the same thing to test at native resolutions as it is to benchmark a GPU while keeping all other variables constant.

The only thing that matters is if the new part works better at the higher resolution.

Again, you refer to what 'matters' instead of what constitutes a GPU benchmark. You are, again, confusing your flawed belief that GPU benchmarks don't matter with the idea that the fundamental method of calculating benchmarks is different.

Your comment below about testing an engine on equal ground shows your complete lack of knowledge of the real world. The whole package matters, not trying to interpret the sum of its components.

Actually, your ignorance of what I'm saying shows a complete lack of ability to read. When you benchmark GPUs, you want to do precisely that: benchmark a GPU. Benchmarking a DEVICE, as opposed to a GPU, at its native resolution is a VERY DIFFERENT METRIC. It is also just as flawed. As in the other examples I've given, is 100 FPS at a native resolution of 10 x 10 better than 60 FPS at 2000 x 1000? Of course not. Clearly, then, native FPS isn't the only relevant metric. More relevantly, in this case, Apple's claim specifically refers to the GPU, and not to 'native performance'. So you're wrong on multiple levels.

Here's a further reason why you're wrong. Just like the iPhone 4 could, the new iPad can actually render games at the resolution of the iPad 2 and scale them up. So its 'native' resolution can actually change, depending on what the game designer builds. This gives developers the choice of either 4x the resolution, or 4x the performance, or some mix of the two. What the GPU benchmarks tell us is that at any given graphics quality, the new iPad's GPU will be up to 4x faster. This is clearly backed up by the benchmarks which already show the iPad 2 as significantly faster than Tegra 3, and the logical deduction that the new GPU is 2x faster. Again, something we learn through GPU BENCHMARKS. If we took your flawed method, we would conclude that the GPU in the iPhone 4 is worse than the iPhone 3GS's. Clearly nonsense.
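That render-low-and-upscale tradeoff can be sketched with the same fill-rate arithmetic as before. The 30 FPS baseline and the fixed pixels-per-second budget are hypothetical illustration values, not measured figures:

```python
# Developer's tradeoff on a higher-resolution device: spend the GPU
# headroom on pixels or on frame rate. Assumes a fixed fill-rate
# budget (pixels/sec) and a hypothetical baseline of 30 FPS at
# iPad 2 resolution, doubled by a 2x-faster GPU.

IPAD2_RES = (1024, 768)
IPAD3_RES = (2048, 1536)

def fps_at(budget_px_per_s, res):
    # Frame rate achievable if the whole budget goes into filling pixels.
    return budget_px_per_s / (res[0] * res[1])

# Hypothetical budget: 2x a GPU that managed 30 FPS at 1024x768.
budget = 2 * 30 * IPAD2_RES[0] * IPAD2_RES[1]

print(fps_at(budget, IPAD2_RES))  # 60.0 -> render low and upscale
print(fps_at(budget, IPAD3_RES))  # 15.0 -> render at native resolution
```

Same GPU, same budget: the developer chooses between doubling the frame rate at the old resolution or taking the resolution bump at a quarter of that frame rate — which is exactly why a native-resolution score alone says little about the GPU itself.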

It's beyond belief that you are in complete denial about the established and accepted method of benchmarking GPUs; you're absolutely delusional. Your whole argument about the experience of the device as a whole has nothing to do with GPU benchmarks, and that you confuse the two is the whole foundation of your failure. Your failed 'conversation' also fails to factor in that you can't simply compare speed, even when looking at the device experience, because the higher resolution can make it worth sacrificing FPS in certain situations.

