Benchmarks are programs, or apps, that repeat a single task or set of tasks to stress-test a system – often just one complex calculation run over and over, nothing like what you'd actually do with the device. They're meant to highlight key performance differences between similar devices. But in fact, they're not useful.
How could that be? Commendably, Ars Technica dug into the Note 3's system files and discovered one called "DVFSHelper.java" – DVFS being dynamic voltage and frequency scaling, the mechanism that governs how fast the processors run – which contains a list of benchmarking apps.
If the system detects that one of a hard-coded list of apps is running, it turbo-charges the graphics processing unit (GPU), yielding that 20% boost. When the test is over, it scales right back down, because running flat out drains the battery.
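Ars didn't publish the file's contents in full, but the mechanism it describes is easy to picture. Here's a hypothetical Java sketch of that kind of check – every name below is invented for illustration, not Samsung's actual code:

```java
// Hypothetical sketch of a benchmark-detection boost, in the spirit of
// what Ars described. All names here are illustrative assumptions.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class BenchmarkBoostSketch {
    // A hard-coded list of benchmark package names (illustrative).
    private static final Set<String> BENCHMARK_PACKAGES = new HashSet<>(Arrays.asList(
            "com.primatelabs.geekbench",
            "com.antutu.ABenchMark",
            "com.quicinc.vellamo"
    ));

    /** Returns true if the foreground app is on the benchmark list. */
    static boolean isBenchmarkRunning(String foregroundPackage) {
        return BENCHMARK_PACKAGES.contains(foregroundPackage);
    }

    static void onForegroundAppChanged(String foregroundPackage) {
        if (isBenchmarkRunning(foregroundPackage)) {
            // Pin the GPU at maximum frequency for the duration of the test.
            setGpuFrequencyToMax();
        } else {
            // Back to normal scaling: running flat out drains the battery.
            restoreNormalFrequencyScaling();
        }
    }

    static void setGpuFrequencyToMax() { /* vendor-specific, omitted */ }
    static void restoreNormalFrequencyScaling() { /* vendor-specific, omitted */ }
}
```

A lookup like this keys on nothing but the app's name – which is exactly why renaming the app defeats it, as Ars went on to demonstrate.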
To prove this, the Ars team took the code from a benchmark called Geekbench and tweaked how it would present itself to the system. "Stealthbench", as they christened it, was the same benchmarking code, yet the Note 3's GPU didn't turbo-charge. No 20% boost – just the same performance as the LG.
Samsung has pulled this trick before: its Galaxy S4 shipped with a similar built-in list of benchmark apps, and showed similar 20% speedups.
When queried, the company insisted: "It is not true that Samsung did benchmark boost. The Galaxy Note 3 maximises its CPU/GPU frequencies when running features that demand substantial performance and it was not an attempt to exaggerate particular benchmarking results."
So how, I asked, did it explain the difference in performance between Geekbench and Stealthbench? This provoked a very long silence (so long it's still unbroken, days later, as I write).
Unsurprising, really. The obvious interpretation is that Samsung is gaming benchmarks to rank highly on sites that use them.
Not that Samsung is the only company doing this. Anandtech – a site where you could get the (wrong) impression that benchmarks are the only reason to own a gadget – showed that plenty of Android manufacturers play the same game; Samsung just happens to have a bigger list of benchmarks for which it ramps up the GPU or central processor to give a "better" result.
What a lot of pointless effort. For the average user – most of us – benchmarks are essentially useless, because they don't tell you about the actual experience of using the device. Android handset CPUs used to benchmark ahead of Apple's iPhones. Yet reviewers kept reporting that scrolling and screen operations on Android were "laggy" or "jerky" – and smooth on the iPhone.
Why? Because Apple's software prioritised reacting to user input (the finger "pushing" a list). The processor might run slower, but the software prioritised the user – not other processes. And aren't we usually the most important process around a gadget?
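The principle is easy to express in code. Here's an illustrative, Android-flavoured Java sketch – the class and worker methods are assumed, not Apple's or Google's actual internals, though the thread-priority calls are Android's real API. The thread answering the user's finger gets top display priority; housekeeping yields to it:

```java
import android.os.Process;

// Illustrative only: "prioritise the user", expressed with Android's
// thread-priority API. The worker methods are assumed placeholders.
public class InputFirstScheduler {
    void startWork() {
        Thread inputThread = new Thread(() -> {
            // The thread tracking the user's finger gets top display priority...
            Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_DISPLAY);
            trackTouchAndScroll();
        });

        Thread housekeepingThread = new Thread(() -> {
            // ...while everything else is pushed to the back of the queue.
            Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
            syncFeedsAndIndexes();
        });

        inputThread.start();
        housekeepingThread.start();
    }

    void trackTouchAndScroll() { /* assumed: respond to touch events */ }
    void syncFeedsAndIndexes() { /* assumed: deferrable background work */ }
}
```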
Eventually, Google began Project Butter to deal with the problem of jerky list scrolling – though it took until June 2012 to release it (and more than half of Android devices in use still don't run the Android version that introduced Butter).
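Part of what that release (Android 4.1) added was the Choreographer API, which lets an app run its drawing work in lockstep with the display's refresh pulse instead of whenever the CPU gets around to it. A minimal sketch, assuming a doScrollStep helper that advances a list animation:

```java
import android.view.Choreographer;

// Minimal sketch: drawing work driven off the display's vsync pulse via
// the Choreographer added in Android 4.1. Must run on a Looper thread
// (e.g. the main thread); doScrollStep is an assumed helper.
public class VsyncDrivenScroller implements Choreographer.FrameCallback {
    void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        doScrollStep(frameTimeNanos); // advance the animation for this frame
        Choreographer.getInstance().postFrameCallback(this); // re-register for the next frame
    }

    void doScrollStep(long frameTimeNanos) { /* assumed: update list position */ }
}
```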
Benchmarks are easily fooled – and unreliable. Don't trust them. Ask what the device is like to use instead.