Professional Performance: Windows

Agisoft PhotoScan – 2D to 3D Image Manipulation: link

Agisoft PhotoScan creates 3D models from 2D images, a process that is very computationally expensive. The algorithm is split into four distinct phases, and the different phases of model reconstruction call for fast memory, fast IPC, more cores, or even OpenCL compute devices as appropriate. Agisoft supplied us with a special version of the software that scripts the process: we take 50 images of a stately home and convert them into a medium-quality model. On the CPU alone, this benchmark typically takes around 15-20 minutes on a high-end PC, with GPUs reducing that time.
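Since the benchmark reports total time across phases with very different resource demands, a per-phase timing harness is the natural way to see where a given CPU falls down. The sketch below is a toy stand-in (the phase functions are placeholders, not PhotoScan's actual API), just to illustrate how a scripted multi-phase run can be timed:

```python
import time

def run_pipeline(phases):
    """Time each phase of a multi-stage pipeline.

    Returns (total_seconds, per_phase_seconds). The phases here are
    stand-ins; a real PhotoScan run would do alignment, dense cloud
    generation, meshing, and texturing.
    """
    timings = {}
    start = time.perf_counter()
    for name, fn in phases:
        t0 = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - t0
    return time.perf_counter() - start, timings

# Toy stand-in phases (sum a range to burn a little CPU).
phases = [
    ("align", lambda: sum(range(10_000))),
    ("dense_cloud", lambda: sum(range(20_000))),
    ("mesh", lambda: sum(range(5_000))),
    ("texture", lambda: sum(range(1_000))),
]
total, per_phase = run_pipeline(phases)
```

Breaking out per-phase times like this makes it clear whether a chip loses ground in the memory-bound or the core-count-bound stages.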

Agisoft PhotoScan Benchmark - Total Time

Cinebench R15

Cinebench R15 - Single Threaded

Cinebench R15 - Multi-Threaded

Professional Performance: Linux

Built around several freely available benchmarks for Linux, Linux-Bench is a project spearheaded by Patrick at ServeTheHome to streamline about a dozen of these tests into a single neat package, run via a set of three commands on an Ubuntu 14.04 LiveCD. These tests include fluid dynamics used by NASA, ray-tracing, molecular modeling, and a scalable data structure server for web deployments. We run Linux-Bench and have chosen to report a select few of the tests that rely on CPU and DRAM speed.

C-Ray: link

C-Ray is a simple ray-tracing program that focuses almost exclusively on processor performance rather than DRAM access. The test in Linux-Bench renders a heavily complex scene, offering a large, scalable workload.
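The reason c-ray barely touches DRAM is that its inner loop is pure floating-point arithmetic on a handful of values that stay in registers and cache. A minimal sketch of that kind of kernel, a ray-sphere intersection test (not c-ray's actual code, just an illustration of the workload shape):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance of a ray against a
    sphere, or None on a miss. Solves the quadratic |o + t*d - c|^2
    = r^2 -- a few dozen float ops, no large memory traffic, which
    is why this style of workload scales with core clocks rather
    than DRAM speed."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Ray from the origin along +z toward a unit sphere centered at z=5;
# it should hit the near surface at distance 4.
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A full tracer repeats this test millions of times per frame, so per-core throughput dominates the result.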

Linux-Bench c-ray 1.1 (Hard)

NAMD, Scalable Molecular Dynamics: link

Developed by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, NAMD is a set of parallel molecular dynamics codes for extreme parallelization up to and beyond 200,000 cores. The reference paper detailing NAMD has over 4000 citations, and our testing runs a small simulation where the calculation steps per unit time is the output vector.
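The "calculation steps per unit time" metric reflects how fast the integrator can advance the simulation. As a toy illustration of the per-step work being counted (a single-particle velocity-Verlet update, nothing like NAMD's real parallel force evaluation):

```python
def verlet_step(pos, vel, force, mass, dt):
    """One velocity-Verlet update for a single particle under a
    constant force -- a toy stand-in for the per-timestep
    integration that NAMD performs for millions of atoms in
    parallel."""
    acc = force / mass
    new_pos = pos + vel * dt + 0.5 * acc * dt * dt
    new_vel = vel + acc * dt  # constant force: same acc before/after
    return new_pos, new_vel

# Free fall from rest: after n steps of size dt (1 second total),
# position should match the analytic 0.5 * g * t^2 = -4.905 m.
pos, vel = 0.0, 0.0
g, dt, n = -9.81, 0.001, 1000
for _ in range(n):
    pos, vel = verlet_step(pos, vel, force=g, mass=1.0, dt=dt)
```

The benchmark's output vector is essentially how many such steps (over a vastly larger system) complete per second.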

Linux-Bench NAMD Molecular Dynamics

NPB, Fluid Dynamics: link

Aside from LINPACK, there are many other ways to benchmark supercomputers in terms of how effective they are for various types of mathematical processes. The NAS Parallel Benchmarks (NPB) are a set of small programs originally designed for NASA to test their supercomputers in terms of fluid dynamics simulations, useful for airflow reactions and design.

Linux-Bench NPB Fluid Dynamics

Redis: link

Many online applications rely on key-value caches and data structure servers to operate. Redis is an open-source, scalable web technology with a broad developer base, but it also relies heavily on memory bandwidth as well as CPU performance.
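To make "key-value cache" concrete, here is a toy in-memory store with lazy expiry, loosely illustrating the data model a server like Redis provides (the real thing adds rich data types, persistence, and a network protocol, all absent here):

```python
import time

class MiniKV:
    """A toy in-memory key-value store with optional per-key expiry.
    Purely illustrative: it mimics the get/set-with-TTL usage pattern
    of a cache server, not Redis's actual implementation."""

    def __init__(self):
        self._data = {}

    def set(self, key, value, ttl=None):
        # Store the value with an absolute expiry time, or None for no expiry.
        expires = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires = item
        if expires is not None and time.monotonic() >= expires:
            del self._data[key]  # lazily drop the expired key on access
            return None
        return value

kv = MiniKV()
kv.set("session:42", "alice")
kv.set("flash", "gone", ttl=0.0)  # expires immediately
```

Every operation is a hash lookup plus a small copy, which is why throughput at high request rates leans on memory bandwidth as much as on the CPU.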

Linux-Bench Redis Memory-Key Store, 1x

Linux-Bench Redis Memory-Key Store, 10x

Linux-Bench Redis Memory-Key Store, 100x

Comments

  • nathanddrews - Thursday, December 11, 2014 - link

  • Khenglish - Thursday, December 11, 2014 - link

    So you think these CPUs really are better binned? An undervolted K series cannot always pull off the same voltages at the same clocks as an S series?

    If so do you think these better binned chips finally at least match Ivy Bridge in terms of performance per watt?
  • casteve - Thursday, December 11, 2014 - link

    No, they aren't better binned. Another site looked at the voltage vs. freq curve and found that the std TDP, S, and T parts all followed the same curve. That i7 S part looks like an oddball.
  • Samus - Thursday, December 11, 2014 - link

    I work with HP EliteDesk 800s all the time with i5-4570S CPUs. They're incredibly small and quiet, much more so than the identically sized USFF dc7900 Core 2 Duos they replaced.
  • name99 - Thursday, December 11, 2014 - link

    We constantly hear about how aggressively Intel bins parts, how each model is a special snowflake that's exactly optimized for its role, etc etc. I've yet to see any evidence that this is actually true (as opposed to "Intel engages in very aggressive market segmentation --- by product name").

    The primary reason I'm not convinced is that no-one else bins nearly as aggressively. Apple, never a company to miss the opportunity for a dollar, doesn't engage in any obvious binning (eg ship the iPhone6+ at 100MHz faster; or even give you a 100MHz speed boost in each model as you go from 16GB to 32GB to 64GB storage). Qualcomm offers a fairly limited palette of Snapdragon speeds. Samsung, the master of slicing and dicing phone models if there ever was one, doesn't offer the same phone at speeds of 1, 1.5, and 2GHz; etc etc.

    We have to assume that
    - everyone else's processes are crazy uniform compared to Intel OR
    - Intel is MUCH smarter than anyone in how they are able to bin OR
    - binning (at the micro segmentation Intel offers) just is not a real thing
    and the third option seems the most plausible to me.
  • Samus - Thursday, December 11, 2014 - link

    Almost nobody pays attention to GHz numbers in mobile devices. Nobody really cares. And the scaling with ARM really means nothing. Apple consistently has among the highest performance ARM CPUs yet they're lower clocked and lower core count than everyone else. Binning ARM CPUs would require two things in order to be profitable: real-world benefits to a slightly higher clock speed, and marketing the higher clock speed as worth the premium. Currently there is neither. I'd guess 99/100 people don't even know the clock speed of the phone they own, because that's how irrelevant it is. For many applications (such as gaming, where performance is not consistent across the majority of devices) the GPU matters more than the CPU because of how heavily optimized these apps are for the GPU.

    The PC landscape is totally different. You still have PCs sold that have 1/10th the performance of a Core i7.

    Now, where your idea could be interesting is if they sell an "eco" chip that runs at a lower voltage due to binning. People MIGHT be willing to pay extra for a phone with +20% battery life.
  • Kjella - Friday, December 12, 2014 - link

    Or perhaps the simplest and most obvious explanation - Apple feels they're more in the console game than the PC game. Offer one consistent level of performance across all iPhones of the same generation and that's the spec all developers need to relate to.
  • Hrel - Friday, December 12, 2014 - link

    - everyone else's processes are crazy uniform compared to Intel OR
    - Intel is MUCH smarter than anyone in how they are able to bin OR

    Those are both true.
  • wumpus - Friday, December 12, 2014 - link

    So Intel chips can't be overclocked and produce more watts than different lettered processors under identical conditions? That isn't what was tested and would be a rather shocking development.

    Chips take a considerable time to fab. Markets change fast and somehow Intel manages to produce what the market needs in the face of negligible competition? Yea, I really believe that they are really binning and not simply segmenting to what marketing wants.
  • BSMonitor - Friday, December 12, 2014 - link

    Use case. A PC's use case is an entirely different world from a mobile phone's. Your anti-Apple bias aside, what applications would users engage in on their smartphone where CPU performance could be noticeably segregated by clock speed? In this space, the only indication of CPU performance is the "snappiness" of the response from whatever app you are in.

    In a PC sense, I could launch an application or task that takes minutes, hours, etc.. 200-300 MHz would be noticeable over the course of an hour of video compression.

    Apples and oranges.
