Testing Overview

We used three different testing locations: an “ideal” setup with the wireless router less than ten feet from the test laptop(s), and two “distant” locations where the laptop sits several rooms away, with closed doors and several sheetrock walls in between. We also tested with two different routers, a Netgear WNR3500L and a Linksys E4200. At each location, we ran four different networking-related tasks. For all tests, the server PC was connected directly to the Gigabit router via a six-foot CAT6 Ethernet cable, and the test PCs were otherwise idle (no extra processes, services, etc. were running).

The first task consists of copying files from the server to the test laptop, in two different scenarios. The first is a single large file, a 2.2GiB HD movie; we time how long it takes to copy the file and then calculate the throughput in Mbps (1Mb = 1,000,000 bits). The second scenario consists of numerous small files: the contents of the Cinebench R10 and Cinebench R11.5 folders, which mix a few larger files with many small ones. In total, this test has 8780 files and 511 folders, with a total size of 440MiB. Again, we time the copy process and then calculate Mbps. Copying files between PCs is a completely real-world scenario, and with SSDs present in all of the test systems we should be bottlenecked by networking performance (even a moderate HDD should be able to outpace WiFi speeds, but for Gigabit Ethernet you’ll want SSDs).
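
For reference, the time-to-Mbps conversion is straightforward; the sketch below shows the math we use (the 120-second copy time is illustrative, not a measured result):

```python
def throughput_mbps(size_bytes: float, seconds: float) -> float:
    """Convert a timed file copy into megabits per second (1 Mb = 1,000,000 bits)."""
    return size_bytes * 8 / seconds / 1_000_000

# Hypothetical example: the 2.2GiB movie copied in 120 seconds
movie_bytes = 2.2 * 1024**3  # GiB -> bytes
print(f"{throughput_mbps(movie_bytes, 120.0):.1f} Mbps")
```

Note the mixed units: file sizes are binary (GiB/MiB), while throughput uses decimal megabits, so the factor of 8 and the 1,000,000 divisor both matter.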

The remaining three tests are more theoretical than real-world. First up is the NTttcp utility. Unlike file copying, this test transfers data between PCs from memory, so the storage subsystem is out of the loop. We run two different scenarios, one with the laptop acting as “client” and the desktop as “server”, and a second with the roles reversed. This tests the maximum theoretical throughput of the laptop in both transmit and receive modes; most wireless devices do better at receiving than sending, but in a few instances the reverse holds. We used the following commands on the client/server:

ntttcpr.exe -m 6,0,[Server IP] -a 4 -l 64000 -n 4000
ntttcps.exe -m 6,0,[Client IP] -a 4 -l 64000 -n 4000

Our third test is very similar to the above, only we use Netperf instead of NTttcp and we conduct UDP as well as TCP testing. We use a precompiled version of Netperf that Bigfoot provided, though a publicly available Windows build from Chris Wolf shows essentially the same results. One thing to be aware of is that Netperf has a variety of options; we tested with the “stock” options, though Bigfoot has a second set of recommended command line parameters. Throughput can vary considerably depending on what options you specify and what wireless card you’re using. Several of the systems we tested performed horribly on the UDP test with the Bigfoot parameters, so we decided to stick with the stock Netperf command (though we’ll show the performance with the Bigfoot settings on the E4200 Ideal page as a reference point). The default options simply specify the IP address of the host, along with “-t UDP_STREAM” for UDP testing; the Bigfoot recommended commands for TCP and UDP are:

netperf.exe -l 20 -t TCP_STREAM -f m -H [Server IP] -- -m 1460 -M 1460 -s 4000000,4000000 -S 4000000,4000000
netperf.exe -l 20 -t UDP_STREAM -f m -H [Server IP] -- -m 1472 -M 1472 -s 4000000,4000000 -S 4000000,4000000

The key difference between the stock options and the Bigfoot parameters is the selection of a 1472-byte packet. The default UDP packet size is 1000 bytes, so that’s what most manufacturers optimize for, but larger packets are possible. Bigfoot states that video streaming often uses larger packets to improve throughput, so they’ve worked to ensure their drivers perform well across many packet sizes. Skip to the “ideal” testing with the Linksys router for more details of our Netperf UDP testing with the Bigfoot parameters.
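
The 1472-byte figure isn’t arbitrary: it’s the largest UDP payload that fits in a single standard Ethernet frame without IP fragmentation, once the IPv4 and UDP headers are subtracted from the 1500-byte MTU. A quick sanity check:

```python
ETHERNET_MTU = 1500   # standard Ethernet payload limit (MTU) in bytes
IPV4_HEADER = 20      # minimum IPv4 header, no options
UDP_HEADER = 8        # fixed-size UDP header

# Largest UDP payload that avoids fragmentation on a standard Ethernet link
max_udp_payload = ETHERNET_MTU - IPV4_HEADER - UDP_HEADER
print(max_udp_payload)  # 1472, matching the Bigfoot netperf command
```

The TCP command’s 1460-byte value follows the same logic with TCP’s 20-byte header in place of UDP’s 8 bytes.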

Our final test is a measure of latency, and you might be tempted to take the results with a grain of salt, as we’re using a utility provided by Bigfoot called GaNE (Gaming Network Efficiency). The purpose of this utility is to measure real latency between a client and a server, down to the microsecond, while simulating a network gaming workload. Most games don’t support any logging of network latency, so GaNE is an easy-to-use substitute. It sends a 100-byte UDP packet every 200ms, with a timestamp included in the packet. The client receives the packet and sends it back, and by looking at the timestamp the server can determine roundtrip latency. According to Bigfoot, the majority of network games send ~100 byte packets at intervals ranging from every 50ms to a few seconds apart, so they chose a value that would represent a large number of titles. While not all games are the same, there are long-established “best practice” coding standards for network gaming, so a lot of titles should behave similarly to GaNE. For the test, two laptops connect to the server at the same time, and GaNE reports average latency along with the average latency of the worst 10% of packets; the latter is a better measurement of jitter on your network connection.
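
GaNE itself is closed, but the measurement loop it describes is simple enough to sketch. The code below is a hypothetical stand-in based on the description above, not Bigfoot’s actual implementation: timestamped ~100-byte UDP packets sent every 200ms to an echo server, with the average and worst-10% figures computed from the round-trip samples.

```python
import socket
import struct
import time

PROBE_SIZE = 100   # ~100-byte packets, per Bigfoot's description of game traffic
INTERVAL = 0.2     # one probe every 200ms

def probe(server_addr, count=50):
    """Send timestamped UDP probes to an echo server; return round trips in ms."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    samples = []
    for _ in range(count):
        # Embed a send timestamp, then pad the packet out to PROBE_SIZE bytes
        payload = struct.pack("!d", time.perf_counter()).ljust(PROBE_SIZE, b"\0")
        sock.sendto(payload, server_addr)
        echo, _ = sock.recvfrom(PROBE_SIZE)
        sent, = struct.unpack("!d", echo[:8])
        samples.append((time.perf_counter() - sent) * 1000.0)  # roundtrip in ms
        time.sleep(INTERVAL)
    return samples

def summarize(samples_ms):
    """Average latency plus the average of the worst 10% (the jitter indicator)."""
    ordered = sorted(samples_ms)
    worst = ordered[-max(1, len(ordered) // 10):]
    return sum(ordered) / len(ordered), sum(worst) / len(worst)
```

The worst-10% average is what makes this more useful than a plain mean: a connection that idles at 2ms but spikes to 150ms every few seconds looks fine on average yet plays terribly.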

Besides GaNE, we conducted some informal latency testing with other utilities. First up is the Windows ping command; we saw latency spikes similar to those in GaNE, so the results of both utilities appear valid. Ping doesn’t work like most games, however: it sends smaller packets less frequently, and it uses the ICMP protocol instead of UDP. That brings us to the third latency test. About the only games to include support for latency logging are Source Engine titles, so we tested latency with a local server running Counter-Strike: Source using the “net_graph 3” console command. We’re unable to capture raw latency numbers this way, but we could clearly see periods of higher latency on all of the Intel WiFi cards. If anything, the latency blips tended to occur more frequently in CS:S than in GaNE, though interestingly enough the Atheros controller basically tied the Bigfoot in our experience (1ms ping times most of the match, with no discernible spikes). The Realtek card had a higher average latency of around 6ms, but at least there wasn’t a lot of jitter.

In short, latency measurements with GaNE, ping, and CS:S all show similar results, at least when measured over the same local network. Despite concerns about using a Bigfoot-provided utility, it doesn’t appear that Bigfoot is doing anything unusual. The bigger concern is what the results mean in the world of Internet gaming. Since we’re running on a local network, latency is already an order of magnitude (or two) lower than what you’d generally experience over the Internet. For online gaming, the latency we’re reporting ends up being additive, on top of the latency from your router to the Internet server. In other words, if you’re playing a game where it takes 60ms for your data packets to get from the server to your router, what GaNE reports is how long it takes those packets to get from the router to your PC. The bigger issue isn’t the 3-5ms average latency you’ll get over the local network; it’s the jitter, where 100+ms spikes occur, and as noted, GaNE provides an indication of jitter with the “worst 10% average latency” result.

We ran each of the above tests at least five times at each test location on each laptop, and most of the tests were run dozens of times. The reason is that we were trying to measure best-case performance at each location, so we didn’t bother trying to average all of the results. We did notice a distinct lack of consistency across all wireless cards, especially on 2.4GHz connections, but as noted in the introduction, interference from other sources is nearly impossible to avoid. We report the best result for each test, which was never completely out of line with other “good” results. If we averaged the best five runs on each device, we’d be within 3% of our reported result, but we skipped that step in order to keep things manageable.

With the test parameters out of the way, let’s move to our first testing location and see how the various wireless devices perform.

Comments

  • neothe0ne - Sunday, August 14, 2011 - link

    "And Dell, Asus, Acer, and Sony all do the same thing."

    Are you sure about that? I was under the impression HP and Lenovo were alone in the industry with the WLAN whitelist. And anyway, Dell does offer the Intel Centrino 6230 on the XPS 15 now, unlike HP's dv6 which is stuck in budget-tier Intel WiFi Link 1000 land.
  • cjl - Tuesday, August 16, 2011 - link

    Dell, at least in their Alienware products, definitely does not whitelist. After reading this article, I got one of the Killer 1102 cards for my M11xR2 (which comes with a rather terrible card by default, and there were no upgrade options offered), and it works just fine. I popped it in, installed the drivers, and everything has been working great since.
  • Musafir_86 - Thursday, August 11, 2011 - link


    -Thanks for the article, but did you test those adapters with or without any security/encryption/password protection scheme? I mean WEP or WPA/WPA2; I think encryption puts some overhead on the throughput.

  • JarredWalton - Thursday, August 11, 2011 - link

    All testing was done with WPA2 AES. Most modern cards do fine with that, though a few years back it was sometimes slower IIRC.
  • Musafir_86 - Thursday, August 11, 2011 - link

    -Okay, thanks for the clarification. :)
  • Yummer72 - Thursday, August 11, 2011 - link

    Thanks for the informative review.

    I wonder if Bigfoot will continue to have an advantage if the "WLAN Optimizer" program was used with the other WiFi cards?


    I have personally seen significantly improved performance and the elimination of "lag spikes" (QuakeLive) with this software tweak.

    Any comments?
  • JarredWalton - Thursday, August 11, 2011 - link

    I'll give that a try; it could very well remove the spikes, leaving the primary advantage as the lower base latency.
  • bhima - Thursday, August 11, 2011 - link

    You should review that 95% color gamut matte screen in that Mythlogic ;)
  • loopingz - Thursday, August 11, 2011 - link

    First of all, thanks for highlighting that I can change the WiFi adapter in my laptop. Mine is always freezing during transfers in Windows (Linux is fine).

    Second, thanks for helping me choose a good one.

    I'm now hesitating between the Intel 6300 for range, decent performance, and price, and the 1102/3 for pure performance.
    Maybe the best of both worlds: the Intel 6300 in the Eee PC that travels a lot and the Bigfoot in the main home laptop.

    Can I recycle my old WiFi card (or a new one) with an antenna and put it in my desktop computer?

    I will give WLAN Optimizer a try too, because watching movies from the RAID5 NAS is still not perfect (Linksys E3K router).

    Thanks for the good job.
  • name99 - Thursday, August 11, 2011 - link

    "Wireless networking also tends to need more overhead for error checking and interference losses, and there’s a question of whether the streams are linearly independent enough to get higher throughput, orientation, directionality of signal, etc. Even though you might connect at 450Mbps or 300Mbps, you’ll never actually reach anywhere near that level of throughput. In our testing, the highest throughput we ever saw was around 75% utilization of the available bandwidth, and that was on a 300Mbps connection."

    This is not a useful description of the situation. The nominal speed of a connection (ie the MCS index) already includes error correction overhead --- that's why you see a range of bit-rates, with the same parameters (modulation, number of streams, bandwidth) --- these different bit-rates correspond to different levels of error correction, from the strongest (1/2 coding rate) to the weakest (5/6).

    It is also unlikely that corrupt packets and the retransmission (what you are calling "interference losses", though in your environment noise is likely more relevant than interference) are substantial --- both ends aggressively modify the MCS index to get the best throughput, and try to keep the number of corrupt packets low.

    The real issue is the MAC --- the negotiations over who next gets airtime. This used to be a big deal with wired ethernet as well, of course, but it went away with switches around the time we all moved to 100TX. The basic 802.11n MAC does not rely on any real co-ordination, just on timing windows and retries, and it wastes a phenomenal amount of time. 802.11e improves the situation somewhat (I expect all the systems that get 75% efficiency are using 802.11e, otherwise they'd see around 50% efficiency), but it's still not perfect.
    What one really wants is a central arbiter (like in a cell system) that hands out time slots, with very specific rules about who can talk when. For reasons I don't understand, 802.11 has been very resistant to adding such a MAC protocol (802.11e has elements of this, but does not go full-hog), but I would not be surprised if we finally see such as part of the 802.11n successor --- it's just such an obvious place to pick up some improvement. The real problem is that to do it right you have to give up backward compatibility, and no-one wants to do that. At least if we'd had it in 802.11n, then we'd be part way to a better world (people could switch it on once all their g equipment died, eg at home).
