

88 comments - last by testerguy.. on Mar 15 at 10:46 AM

Benchmarks or GTFO!

Yesterday, when Apple unveiled the new iPad, the crew from Cupertino took some time to brag about its new A5X processor in comparison to NVIDIA's Tegra 3. Apple certainly isn't known for publishing benchmarks of its own, so we'll likely have to wait until the new iPad lands in the hands of reviewers and geeks around the web.
 
Apple used the iPad unveiling to boast that the A5X chip inside the new iPad is twice as fast as the A5, and four times more powerful in graphics performance than the Tegra 3.
 
 
NVIDIA isn't buying those claims without proof. The graphics company wants to know how Apple came by that number. Ken Brown, a spokesman for NVIDIA, said it was "certainly flattering" for Apple to compare its newest chip to NVIDIA's part.
 
Brown continued, “We don’t have the benchmark information. We have to understand what the application was that was used. Was it one or a variety of applications? What drivers were used? There are so many issues to get into with benchmarks.”
 
Anyone who follows tech knows benchmarks are often handpicked to favor one particular brand over another when it comes to claims such as these. So it should be interesting to see whether the new iPad’s performance lives up to the claims.

Source: ZDNet



Comments



Proof
By nafhan on 3/8/2012 11:08:54 AM , Rating: 4
...Well sort of. GLBenchmark as run by Anandtech shows iPad 2/A5 as twice as fast as Tegra 3. The iPad "3" has literally twice the graphics execution units. So, I feel like it's a reasonable claim, with caveats.
http://www.anandtech.com/show/5163/asus-eee-pad-tr...

Caveats:
--Higher res. is great, but it effectively slows down the GPU. With 2.5X as many pixels to render, it seems possible that graphics performance on the iPad 3 will be effectively the same as the iPad 2 (rendering off screen, of course, it will be much faster).
--Different graphics benches have Tegra 3 and A5 much closer together. A5X will likely have more like a 2X or 3X advantage in real world situations, and if we take native resolution into account, probably even less than that.
--Applications dependent on CPU power, especially well-threaded ones, will likely be MUCH less in favor of the A5X.
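Spelling out the pixel arithmetic behind the resolution caveat above (the resolutions are the published display specs; the 2x GPU figure is the doubled-execution-unit estimate from the post, and the 1280x800 comparison tablet is an assumed example, since the exact ratio depends on which Tegra 3 device you compare against):

```python
# Pixel counts from published display resolutions.
ipad2 = 1024 * 768          # 786,432 pixels
new_ipad = 2048 * 1536      # 3,145,728 pixels
tegra3_tablet = 1280 * 800  # assumed 1280x800 Tegra 3 tablet (illustrative)

ratio_vs_ipad2 = new_ipad / ipad2          # 4.0 -- vs the iPad 2's own screen
ratio_vs_tegra = new_ipad / tegra3_tablet  # ~3.07 -- vs a 1280x800 tablet

# If the A5X GPU is roughly 2x an A5 (doubled execution units), per-pixel
# throughput at the new native resolution actually drops relative to iPad 2:
per_pixel_vs_ipad2 = 2.0 / ratio_vs_ipad2  # 0.5

print(ratio_vs_ipad2, round(ratio_vs_tegra, 2), per_pixel_vs_ipad2)
```

This is why on-screen (native-resolution) results can look flat or worse even when off-screen results double.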




RE: Proof
By theapparition on 3/8/2012 12:22:02 PM , Rating: 2
And here is a balanced and thought out reply, folks.

Although any CPU-bound operations will favor the Tegra 3. Doubt there will be any scenario where the A5X will out-muscle it on the CPU side.


RE: Proof
By nafhan on 3/8/2012 3:57:00 PM , Rating: 2
quote:
any CPU operations will be in favor of the Tegra 3
Eh... maybe. I think the A5 has better memory throughput than Tegra, and I haven't seen confirmation on the CPU clocks. We can be fairly certain that Tegra 3 will run computationally intensive heavily threaded code better than the A5X. Beyond that, we'll find out for sure in about a week!

As an aside, I think that fast dual core plus a fast GPU is the way to go for this generation, and it seems like SOC vendors other than nVidia agree with me.


RE: Proof
By theapparition on 3/8/2012 4:17:03 PM , Rating: 3
quote:
As an aside, I think that fast dual core plus a fast GPU is the way to go for this generation, and it seems like SOC vendors other than nVidia agree with me.

But you forgot Qualcomm's imminent release of a quad-core part. Oh, and TI and Samsung have quad-core products in the pipeline as well.

Hmmmmmm.


RE: Proof
By nafhan on 3/9/2012 10:18:37 AM , Rating: 2
You mean the APQ8064? The only dates I can find for that indicate products shipping in 1Q 2013. And of course they're all developing quad-core parts; they'd be stupid not to. I said quad-core doesn't make sense now, not that it will never happen...

Right now, the products being shipped and the products that are about to start shipping will mostly have fast dual cores and beefed-up GPUs (even the ones using Qualcomm SoCs), like I said. 2x A15 at 1.5GHz with an Adreno 320 would be my ideal device at this point.


RE: Proof
By testerguy on 3/9/2012 8:30:03 AM , Rating: 2
Why do you come to the conclusion that the new iPad has 2.5x as many pixels as the iPad 2?

The CPU-dependent applications aren't really relevant to any claim made by Apple.


RE: Proof
By testerguy on 3/9/2012 8:39:47 AM , Rating: 2
Edit: I assume you were comparing to some nondescript Tegra 3 tablet in your sentence comparing iPad 2 to the new iPad? Just wasn't clear.


RE: Proof
By nafhan on 3/9/2012 3:54:14 PM , Rating: 2
quote:
Why do you come to the conclusion that the new iPad has 2.5x as many pixels as the iPad 2?
Uhm... Math?
It's not really a conclusion. Number of pixels = horizontal res. X vertical res.
quote:
The CPU-dependent applications aren't really relevant to any claim made by Apple.
I can certainly come up with situations where rendering will be held back by CPU speed. Those situations might not be any more relevant than Apple's mysterious benchmark situation, but they exist.


RE: Proof
By testerguy on 3/14/2012 5:13:48 PM , Rating: 2
Uhm... Math?

'Math' would give you a figure of 4x the number of pixels, since there are twice as many pixels both horizontally and vertically. Because number of pixels is horizontal res x vertical res, remember?
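The axis-doubling arithmetic both posters are invoking, spelled out (both resolutions are the published display specs):

```python
# Doubling both axes of a display quadruples the pixel count.
ipad2_w, ipad2_h = 1024, 768
new_w, new_h = 2 * ipad2_w, 2 * ipad2_h  # 2048 x 1536

ratio = (new_w * new_h) / (ipad2_w * ipad2_h)
print(ratio)  # 4.0 -- (2w * 2h) / (w * h) = 4
```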

quote:
I can certainly come up with situations where rendering will be held back by CPU speed. Those situations might not be any more relevant than Apple's mysterious benchmark situation, but they exist.


Those situations wouldn't be relevant. The claim Apple made is about the GPU being 4x faster. Even if it were held back by the CPU in EVERY scenario, the claim about the GPU would still be true.


RE: Proof
By kyp275 on 3/15/2012 1:57:42 AM , Rating: 2
You realize you're just arguing semantics right?

right?

While Apple did not specifically claim a 4x real world performance improvement, it's pretty obvious that's what they're implying to their potential customers, and that's what irks most people who have issues with Apple's claim.


RE: Proof
By testerguy on 3/15/2012 10:46:32 AM , Rating: 2
For me to be arguing semantics means you are too, if you're arguing against me.

Honestly, to me, they didn't imply 4x faster real-world performance. To me, they said: OK, our screen is awesome, so we had to shove in a GPU that's 4x faster.

To dumb people, clearly this may be misleading, but all companies are guilty of the same 'white lies'.

And we're not average customers - we're analysing the claim at a technical level.

Finally - arguably, even setting the semantics aside, they are still correct, since the GPU can render at the lower resolution and upscale. Which WOULD mean up to 4x faster real-life performance.


Copyright 2014 DailyTech LLC.