44 Comments

  • qasdfdsaq - Friday, March 25, 2016 - link

    This is why I love Anandtech.
  • ImSpartacus - Friday, March 25, 2016 - link

    I know, right? It's ridiculous.
  • BurntMyBacon - Monday, March 28, 2016 - link

    Woot!
  • danjw - Monday, March 28, 2016 - link

    I couldn't agree more.
  • PrinceGaz - Friday, March 25, 2016 - link

    Excellent article, very informative, and going forward I can see this device being a great tool for comparing the WiFi performance of mobile devices.

    One minor error I think I've spotted in para 7 of the final page: "up to 6 bits per “slice”, which means that there are 256 potential combinations of phase and amplitude". That should be 8 bits per slice (4+4 bits, 16x16 combinations = 256QAM). Assuming I understand how QAM works.
  • extide - Friday, March 25, 2016 - link

    I think you are right, 2^6 = 64, 2^8 = 256, 64QAM is 6 bits, and 256 is 8.
  • JoshHo - Friday, March 25, 2016 - link

    I'm not sure how I convinced myself 2^8 = 64, but I've fixed the issue.
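
For reference, the bits-per-symbol arithmetic discussed in this thread can be sketched in a few lines of Python (illustrative only; the function name is mine):

```python
import math

# Bits per symbol for an M-QAM constellation: log2(M).
# 64-QAM -> 6 bits, 256-QAM -> 8 bits (the correction discussed above).
def bits_per_symbol(qam_order: int) -> int:
    return int(math.log2(qam_order))

for m in (16, 64, 256):
    print(f"{m}-QAM carries {bits_per_symbol(m)} bits per symbol")
```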
  • dananski - Tuesday, March 29, 2016 - link

    In base 42, you're fiiine!
  • drpatterson - Tuesday, May 10, 2016 - link

    Marry me!
  • zepi - Friday, March 25, 2016 - link

    I suppose this is the "old" huge iPad Pro, not the new one? I understand the article was started before the small Pro was introduced, but especially considering the picture with two iPads, it would be good to specify.

    Amazing article, I will dive deeper into it when I have more time, to refresh my radiocommunications knowledge. Maybe next time we should dive even deeper and start with Maxwell's equations ;)

    Where can I pay money for such gorgeous content?
  • Ryan Smith - Friday, March 25, 2016 - link

    Correct, it's the 12.9" iPad Pro.
  • plext0r - Friday, March 25, 2016 - link

    Excellent article! Thanks for the write-up using professional-level WiFi testing.
  • jardows2 - Friday, March 25, 2016 - link

    Very interested to see this in use. In my work, I daily have to deal with people's wi-fi problems, and seeing some of the insights this tool can provide will be very enlightening. Trying to fix people's wi-fi over blind phone support is an exercise in frustration!
  • Ravn - Friday, March 25, 2016 - link

    Excellent stuff Anandtech. WiFi is usually described in specifications as: WiFi: Yes, Bands: 2.4/5GHz a/b/g/n/ac, Maximum throughput: xxx Mbps. And that says just about nothing about the quality of the WiFi unit. Finally some relevant RF data that describes how the WiFi performs in real life. Thank you!
    An additional test that could broaden the relevance of the WiFi testing could be how the WiFi unit performs with a lot of Bluetooth units in the same area. BT's nasty frequency-hopping nature across the whole 2.4GHz band results in a lot of problems in WiFi setups. How the WiFi unit handles this could be very interesting to include.
  • zodiacfml - Friday, March 25, 2016 - link

    Awesome and powerful testing machine you have there. One "Small" Wi-Fi testing website that I read regularly would be interested in it too. Yet it's so powerful that only electrical engineers would use most of its functions.

    If I'm correct, you posted before that this device can also test MU-MIMO performance without too much difficulty. Wi-Fi AP and router reviews on AnandTech in the future wouldn't hurt? :)

    On second thought, I think there is a brighter future for 802.11ad than, say, MU-MIMO. As long as it is line of sight with no obstruction, 1 Gbps is easy for this standard.
  • name99 - Friday, March 25, 2016 - link

    You've left out some interesting aspects of the physical layer.
    An essential part of this layer is the Forward Error Correction (FEC), which augments the transmitted data with additional data in such a way that if a few bits in the stream are in error, they can be recreated from the remaining bits (think parity on steroids).

    These error-correcting codes have been improved over the years in successive specs as it's become feasible to throw more computation at them, with the current state of the art being so-called LDPC (low-density parity-check) codes. [These same codes are currently used by a lot of flash vendors, but have a theoretical problem (trapping sets) that limits their effectiveness above certain noise levels, so better alternatives have been proposed for flash (but as far as I know are not yet in production), and will likely follow in the next big WiFi spec.]

    The specifically interesting thing about these codes, in the context of this article, is that it's not THAT useful to simply say that a chipset implements LDPC (or some other FEC). Implementing the encoding is a fixed algorithm that you can't really get wrong, but there are many ways of implementing a decoder (in other words, ways of attempting to construct the correct data stream from a corrupted data stream). These methods, of course, differ in the power they require, how much computation they utilize, how long they take to correct errors, and how complicated they are to implement.
    The real difference in performance of different chipsets (at the L1 level) is in how well their FEC decoders work. That's where the magic lives.

    At the next level up (the MAC level) it is crazy how much performance is lost because of the decentralized/unco-ordinated nature of the media access protocol. (This is the CSMA/CA that the article mentions.)
    Even in the simplest real world case of one base station and one device, you're losing 35% or so of your goodput to the MAC protocol, and it rapidly drops to 50% and then worse as you add just a few devices. The successive specs have tried various schemes (primarily using the logical equivalent of very long packets) to limit the damage, but all this has done is really keep things standing still so that the situation in each successive spec is not worse than in the previous spec. LTE can be vastly more efficient because it provides for a central intelligence that co-ordinates all devices and so does not have to waste time on guard intervals where everyone is looking around making sure that no-one else is talking or getting ready to talk.

    I don't understand why 802.11 has been so slow to adopt this model; putting the controlling intelligence in the base station (or base station equivalent in a peer network) and having every other device act as a slave. They're going to HAVE to go there at some point anyway --- they've pretty much run out of every other performance option --- and avoiding doing so in 802.11ac just means five more years of sub-optimal performance.

    [You can see this in the iPad Pro numbers. The BCM4355 supports 80MHz channels, and so a maximum PHY rate of 866Mbps, but the best you see is just under 600Mbps (and that performance is only available when transmitting extremely large packets in only one direction); the gap between this and the PHY rate is because of time wasted doing nothing but sitting around following the MAC protocol. This disappointing goodput compared to PHY rate is not due to the mythical "interference" that is blamed for every random radio issue; it is due to the design of the MAC protocol.

    You also see performance fall as the received signal strength falls. This performance drop is ALSO not due to interference, unless you want to make that word meaningless. The correct term for this performance drop is that it is the result of noise, or more precisely a falling SNR (signal to noise ratio). As the SNR falls, you can pack fewer bits into each fragment of time (i.e. you have to switch from using QAM256 down to QAM64 down to QAM16) and you have to use more of those bits as error correction bits rather than as actual data bits.]
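
name99's goodput observation is easy to put into numbers; here is a back-of-envelope sketch using the iPad Pro figures quoted in the comment (the function name is mine):

```python
# Back-of-envelope MAC efficiency using the iPad Pro figures quoted above:
# an 866 Mbps PHY rate but roughly 600 Mbps of best-case goodput.
def mac_efficiency(goodput_mbps: float, phy_rate_mbps: float) -> float:
    return goodput_mbps / phy_rate_mbps

eff = mac_efficiency(600, 866)
print(f"{eff:.0%} of the PHY rate survives the MAC protocol")
```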
  • Denbo1991 - Friday, March 25, 2016 - link

    802.11ad and 802.11ax will have some centralized scheduling features to cut down on the overhead you talk about, especially in the context of many devices on one AP or many overlapping APs.
  • zodiacfml - Saturday, March 26, 2016 - link

    There's way more to talk about here than is possible.
  • alanore - Saturday, March 26, 2016 - link

    Definitely some good points that should be covered. It might be worth covering how older low-speed devices can consume a large proportion of the airtime and thus drag down performance.

    Also, in the article it might be worth calling out spatial streams and how they affect performance. In the article it was an apples-to-apples comparison (2x2 vs 2x2), but I guess soon we might see a poorly performing 3x3 laptop getting similar results to the iPad Pro.
  • Ratman6161 - Monday, March 28, 2016 - link

    Interesting, but... for most of us, does it really mean anything? So an iPad can achieve 600 Mbps throughput. How does this help me when wireless is used nearly exclusively for accessing the internet and my ISP provides 60 Mbps? For home use, I'm more interested in how well things work when my Android TV is streaming an HD Netflix movie while in the other room my wife is doing the same on Amazon and we are also both web surfing on a tablet or a laptop... and that's more about the router than the individual devices, isn't it?

    Even at the office, no one is doing anything that requires 600 Mbps or even the 300 of the Pixel C (and the connection in/out of our building is only 20 Mbps). It's more a question of how many devices we can get connected simultaneously at a reasonable/usable speed.
  • JoshHo - Tuesday, March 29, 2016 - link

    The important part with all half-duplex technologies is to understand that while maximum throughput is a nice figure to have, it's more a statement of spectral efficiency. Your connection past the LAN may only be 20 Mbps, but traffic within the network can exceed 20 Mbps and higher throughput means that the spectrum occupied is available more often for other users.
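
The airtime argument above can be illustrated with a toy calculation (the payload figure is made up for illustration):

```python
# Half-duplex airtime: time the shared channel is busy moving a payload.
def airtime_seconds(payload_mbit: float, throughput_mbps: float) -> float:
    return payload_mbit / throughput_mbps

# The same 600 Mbit of local traffic occupies the spectrum for very
# different lengths of time depending on the client's throughput; the
# faster client frees the channel sooner for everyone else.
for rate in (100, 300, 600):
    print(f"at {rate} Mbps the channel is busy for {airtime_seconds(600, rate):.1f} s")
```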
  • will1956 - Tuesday, March 29, 2016 - link

    I like to think I know some things about electronics, etc., then I read an article like this or a comment like the one above, and realise I really don't know much about the technicalities of computers.
  • evancox10 - Friday, March 25, 2016 - link

    I have no doubt that Apple is running these types of tests, looking at how they affect the user experience, and then improving the areas that are weak. Whereas the rest of the consumer electronics industry thinks it's sufficient to throw in the latest chipset from the vendor, run a synthetic benchmark showing it's faster, and then slap a large number on the specs sheet. Without ever just trying out the stupid things to see how they work.

    I often wonder if Samsung, Motorola, etc. ever give their devices to their executives before releasing them to just *use* for a week or two, and have them report any major issues. Judging by the large number of problems I discover in so-called flagship devices, I seriously doubt it. Or, if they do discover these issues, maybe the engineers are just clueless as to how to fix it.

    Rumblings/rumors from people in the industry suggest that smartphone design is driven by measurable/controlled user experience tests (e.g. mimic a finger swipe and objectively measure the response) at Apple, but by synthetic server benchmarks (SPECmark) from the 1980s at other companies.

    In other words, not surprised at the excellent performance here by the iPad, especially in the handoff test. The difference is astounding.

    And this comes from someone who doesn't currently own or use ANY Apple products, the exact opposite of an Apple fanboy.
  • Daniel Egger - Friday, March 25, 2016 - link

    > I have no doubt that Apple is running these types of tests, looking at how they affect the user experience, and then improving the areas that are weak. Whereas the rest of the consumer electronics industry thinks it's sufficient to throw in the latest chipset from the vendor, run a synthetic benchmark showing it's faster, and then slap a large number on the specs sheet. Without ever just trying out the stupid things to see how they work.

    Absolutely correct. Where pretty much all other companies try to impress with raw performance data, and every now and then throw in an oddball like a partially perfect design or remarkable attention to detail here or there, Apple always tries to cover as many bases as possible at once. The desperation and despair this causes at the competition can easily be seen in out-of-proportion "scandals" like the death grip, bend-gate and others...

    The truth is: If you don't even aim for perfection you're definitely not going to reach it. Apple is one of the few companies I know who at least try -- VERY hard.
  • zodiacfml - Saturday, March 26, 2016 - link

    Impressive, it is. Yet I feel Samsung also does this, as their flagship devices often show good performance, at least in terms of Wi-Fi.
  • skavi - Sunday, March 27, 2016 - link

    My S6 has abysmal roaming performance, but good WiFi speed. Sometimes I feel like Samsung only cares about the things it can put on a spec sheet (octa core, four gigs, quad HD, etc.). This is what's driving me away from Samsung towards Apple, who seem to pay an obsessive level of attention to every part of the experience.
  • will1956 - Tuesday, March 29, 2016 - link

    Yeah, that's what I always think of Apple: attention to the small details.
    Details may be small, but a lot of small problems add up to big problems.
    Another thing I like about Apple is that they prefer quality over quantity, something the Android manufacturers are finally realising, e.g. Qualcomm with its quad-core 820, which is about twice as good as the octa-core 810 and better than the deca-core MediaTeks.
  • DarkXale - Tuesday, March 29, 2016 - link

    The iOS division perhaps - the Mac division sure as hell isn't, though, considering the subpar performance and stability of Macs on a lot of networks. (Networks which iPads don't struggle with.)
  • will1956 - Tuesday, March 29, 2016 - link

    That's interesting to know. Thanks!
  • Oubadah - Friday, March 25, 2016 - link

    The Pixel C's Wi-Fi is broken: https://productforums.google.com/forum/#!topic/nex...
  • JoshHo - Saturday, March 26, 2016 - link

    I was aware before the testing that the Pixel C had something wrong with WiFi, but the goal was to try and separate software issues from hardware ones.

    It was subjectively obvious but difficult to understand what was causing the problems in a reproducible manner.
  • at80eighty - Saturday, March 26, 2016 - link

    Have to concur with everyone. Happy to see anandtech still keeping that edge to their content. Great work Joshua.
  • mannyvel - Sunday, March 27, 2016 - link

    We have one of these systems at work, and you have to be a wifi genius to use it.

    That said, what our guy said was this: test your throughput with multiple clients at multiple distances. There are also some subtleties about wifi that are odd. One is that the amount of airtime that a given adapter uses is sometimes more important than its actual speed. There is only so much airtime, and a badly behaving adapter that's further away will prevent faster devices with better signal from using the AP because of how PHY rates are negotiated (not sure what the term is). Basically, an adapter further away will scream louder and more slowly to see if the AP can hear it - but the slow screaming takes airtime away from the faster devices that have better signal.

    Also, since it's RF, more clients means more interference. I suspect you will find, as we did, that all your wifi testing has been completely wrong for pretty much forever. With the Ixia stuff you can tell how badly your AP really performs in, say, an apartment-like configuration.

    One more confounding factor is that not all chips and drivers behave consistently. An AP that works great with Windows 10 and an Atheros chipset may work crappily with the same chip in Win7... as you've seen in the iPad Pro tests.
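
The airtime point mannyvel describes can be sketched with made-up numbers (nothing here is measured; it just shows the mechanism):

```python
# Two clients each move the same payload; the slow, distant client ends up
# owning almost all of the shared airtime, starving the fast client.
payload_mbit = 100
fast_air = payload_mbit / 400   # fast client near the AP: 0.25 s of airtime
slow_air = payload_mbit / 20    # slow client far away: 5.0 s of airtime
slow_share = slow_air / (fast_air + slow_air)
print(f"slow client consumes {slow_share:.0%} of the airtime")
```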
  • bman212121 - Monday, March 28, 2016 - link

    I agree 100% with everything mannyvel said. One of the biggest hurdles people face is range vs density. Everyone thinks that throwing on a bigger antenna will give you better wifi and that having an AP with increased range is a good thing. In reality, most APs these days are turned down in power so they match more closely to the clients' capabilities, and in the case of roaming a minimum RSSI is enforced. Only having "1 bar" hurts overall performance more than having a second AP that can provide higher signal strength. We used to have one high-powered Aironet that could cover a huge area. In order to increase performance you might take down that one AP and put up 3 in its place to get the proper coverage while getting the performance you need.

    I'd really be interested in some minimum RSSI tests. A great performing antenna might not have the highest power, but you can easily make up for power by having better sensitivity. Even APs of the same power output will have different receiving abilities, and higher-end APs can pick up and work with much weaker signals and still maintain solid connections.
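
For readers following the minimum-RSSI discussion, a quick sketch of the dBm scale involved (standard conversion, nothing device-specific):

```python
# dBm is a logarithmic power scale: every 10 dB is a factor of 10 in power.
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

# A -70 dBm "minimum RSSI" cutoff vs a faint -90 dBm signal:
# a 20 dB gap means 100x more received power.
ratio = dbm_to_mw(-70) / dbm_to_mw(-90)
print(f"power ratio: {ratio:.0f}x")
```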
  • skavi - Sunday, March 27, 2016 - link

    Holy shit, I can't wait for these WiFi performance and roaming tests for more devices. This is so important, but there is rarely any information specific to devices. My S6 is never able to roam correctly and forces me to reconnect each time I move somewhere; maybe these kinds of tests will pressure OEMs into paying more attention to these issues. It seems Apple at least has gotten it down.
  • Powervano - Monday, March 28, 2016 - link

    Very nice and informative article!
    Could you please test Surface Book and Surface Pro 4 using the WaveDevice? Would be nice to shed some light around the Wi-Fi issues on these devices.
  • Conficio - Monday, March 28, 2016 - link

    Thanks Josh,
    you are on your way to becoming a hero for mobile device users. I appreciate the hard work that went into this article.

    I wonder which parts of this stack are software-updatable? As there is talk of the Pixel C having broken WiFi, is there hope it gets fixed by a software update? In the same vein, did I miss the specification of the exact relevant software versions on the devices?

    P.S.: Does the baseband package on my device influence WiFi, or is it only for the WAN portion of my Android device?
  • cuyapo - Monday, March 28, 2016 - link

    Great article, thanks! One thing only:
    CSMA/CA = Carrier Sense Multiple Access with Collision Avoidance
  • James5mith - Tuesday, March 29, 2016 - link

    Still reading through the article, but it's worth noting that SmallNetBuilder has also switched to this test setup for wireless testing going forward. Interestingly, the one thing this doesn't take into account is antenna design. While it's great to test throughput vs. attenuation/power/etc., none of that takes into account whether the antenna is poorly designed, oriented incorrectly, etc. It does give a good baseline to work from, though. And it should help people track down wifi problems if they are using a router that tests solidly before factors like antenna design are taken into account.
  • James5mith - Tuesday, March 29, 2016 - link

    http://www.smallnetbuilder.com/wireless/wireless-f...
  • profquatermass - Tuesday, March 29, 2016 - link

    So, no setting up a typical office or home space to test in?
  • LostWander - Wednesday, March 30, 2016 - link

    The tool allows for a more consistent simulation of what happens in those environments; if a device does well in testing with this tool, that's actually a more reliable indicator of performance across multiple environments than testing in a "typical" office setup. Consistent, reproducible results. Science!
  • renjithis - Wednesday, March 30, 2016 - link

    Last page 3rd last paragraph expansion of CSMA/CA should be Collision Avoidance instead of Carrier Avoidance?
    Sorry it took so long.

    Please delete this comment after correction, or if I was wrong.
  • Rollo Thomasi - Monday, April 4, 2016 - link

    When testing range/throughput you only talk about varying the transmission power. Could it not be worth it to vary the latency as well, to simulate distance to the access point?

    Also, introducing a frequency shift to simulate the Doppler effect of a receiver moving relative to the transmitter could be interesting.

    These are just thoughts of mine. Maybe the effects of latency and frequency shifts are so small that it doesn't matter.
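
For scale on the Doppler suggestion, a back-of-envelope sketch using the textbook first-order formula (numbers are illustrative):

```python
# First-order Doppler shift: f_shift = (v / c) * f_carrier.
def doppler_shift_hz(speed_mps: float, carrier_hz: float) -> float:
    c = 3.0e8  # speed of light, m/s
    return speed_mps / c * carrier_hz

# A person walking (~1.4 m/s) with a 5 GHz carrier: only ~23 Hz of shift,
# tiny compared to the channel bandwidth but relevant to OFDM subcarriers.
print(f"{doppler_shift_hz(1.4, 5e9):.1f} Hz")
```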
