At the moment we are all spoilt for choice when it comes to Access Points and Routers. From the low tens of dollars to $1000+, there are access points, gateways, routers and extenders addressing every niche of capability and price. Ultimately, though, it comes down to the applications, scale, stability and fairness of the network they establish. From our experience helping customers test Access Points (using our product SWAT WiCheck and with real devices in our Wireless Experience Lab), we have identified 9 tests that we believe give a fair benchmark of how well an Access Point or Router will perform in the real world. You can decide how much weight to give each parameter depending on your use case.
Disclaimer 1: I have bunched Access Points and Routers together as Access Points, or APs, for ease of use. Please reach out to me if this is confusing.
Disclaimer 2: These are the tests we, at Alethea, believe best characterize an AP or Router. There are many more potential candidates for test scenarios, including feature support – for example beamforming, band steering, MU-MIMO, etc. If you feel those make a material impact based on your experience, please comment below or contact us; we will be happy to discuss.
1. Throughput vs Load
WiFi Access Points are expected to serve a number of clients at the same time. With a single client, most access points will deliver the maximum throughput possible for that chipset / technology. However, as the number of users increases, the drop in throughput becomes noticeable, and the difference between APs is apparent when throughput is measured at different load levels. The load levels should be chosen based on the expected scale. You would want to measure the degradation from the single-client throughput to:

- 10 clients (common in homes)
- 30 clients (the common design planning number for offices)
- 40 clients (design number for larger offices)
- 60 clients (for public spaces)

| # of clients | UL UDP | DL UDP | UL TCP | DL TCP |
|---|---|---|---|---|
| 1 | | | | |
| 10 | | | | |
| 30 | | | | |
| 40 | | | | |
| 60 | | | | |

You need to ensure that the throughput does not drop precipitously as soon as the 5th or 10th client is added. A good Access Point will have taken into account the challenges of delivering predictable service to clients at different load levels.
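Once the table is filled in, a quick way to compare APs is to express each load level as a percentage drop from the single-client baseline. A minimal sketch (the function name and sample figures below are illustrative, not measured results):

```python
def degradation_table(results):
    """results: dict mapping client count -> aggregate throughput in Mbps.
    Returns dict mapping client count -> % drop from the single-client figure."""
    baseline = results[1]
    return {n: round(100.0 * (baseline - tput) / baseline, 1)
            for n, tput in results.items()}

# Hypothetical throughput figures (Mbps) at each load level; in practice
# these come from your traffic generator runs.
measured = {1: 450.0, 10: 400.0, 30: 310.0, 40: 260.0, 60: 180.0}
print(degradation_table(measured))
```

An AP whose degradation curve stays shallow up to the load level you care about is the one to pick.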
2. Fairness vs Load
Airtime Fairness is when an Access Point allocates equal air time to clients instead of an equal number of frames. This ensures that older, slower clients do not impede the overall performance of the network, and that newer, faster clients can deliver a better experience. Clients connected at the edge of network coverage (very low RSSI, translating to lower MCS rates) would otherwise impact network performance the same way as clients in legacy mode.

Fairness defines how equitably clients are handled by the AP. For many applications, throughput is not the concern – fairness is. This is our recommendation for Test #2: verify fairness of throughput with different numbers of clients on air. Test with clients in the same 802.11 mode as well as clients in mixed mode.
| Number of clients (11ac only) | Number of clients (mixed mode) |
|---|---|
| 2 x ac | 1 x ac + 1 x n |
| 4 x ac | 3 x ac + 1 x n |
| 16 x ac | 12 x ac + 4 x n |
| 40 x ac | 30 x ac + 10 x n |
| | 30 x ac + 9 x n + 1 x a/b/g |
Test with clients at different RSSI levels (the numbers below are client counts at each level):

| Clients @ -40 dBm | DL | Clients @ -60 dBm | DL | Clients @ -75 dBm | DL |
|---|---|---|---|---|---|
| 1 | | 0 | | 0 | |
| 1 | | 1 | | 0 | |
| 1 | | 1 | | 1 | |
| 1 | | 0 | | 1 | |
| 8 | | 0 | | 0 | |
| 8 | | 1 | | 0 | |
| 8 | | 0 | | 1 | |
| 16 | | 0 | | 0 | |
| 16 | | 1 | | 0 | |
| 16 | | 0 | | 1 | |
Based on the results from these tests, prepare a standard deviation chart for throughput across clients running in the same mode. The tighter the deviation, the better the fairness.
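The standard deviation can be computed directly from the per-client throughput samples; alongside it, Jain's fairness index (a common fairness metric, ranging from 1/n for a totally unfair share to 1.0 for a perfectly equal one) gives a scale-independent comparison. A minimal sketch:

```python
import statistics

def fairness_metrics(throughputs):
    """throughputs: per-client throughput samples (Mbps) for same-mode clients.
    Returns (population standard deviation, Jain's fairness index)."""
    stdev = statistics.pstdev(throughputs)
    jain = sum(throughputs) ** 2 / (len(throughputs) * sum(t * t for t in throughputs))
    return stdev, jain

# A perfectly fair share -> stdev 0.0, Jain index 1.0
print(fairness_metrics([50.0, 50.0, 50.0, 50.0]))
```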
3. Voice calls over WiFi
In any office it is now standard for voice calls to run over WiFi. Even in homes, it is not unusual for the bulk of voice conversations to use WiFi. Whatever the app – FaceTime, WhatsApp, Google Duo – the final arbiter of quality is WiFi. So it is necessary to understand how well an Access Point can handle voice calls.

The easiest way to compare APs is via comparative MOS (Mean Opinion Score) values at different loads. MOS values range from 1 to 5: a value of 1 indicates very bad voice quality, while 5 means perfect call quality. Any MOS value below 3 indicates an unacceptable quality level.
| Number of calls | Avg. MOS score | Number of calls with MOS > 3 |
|---|---|---|
| 2 | | |
| 4 | | |
| 8 | | |
| 16 | | |
| 40 | | |
For a consumer or home AP, supporting even 4 simultaneous calls would be sufficient. For an enterprise AP, the number of calls supported should be between 10 and 30+, depending on the target market.
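If your test tool reports delay and loss rather than MOS directly, a MOS estimate can be derived with the ITU-T E-model. The sketch below is a deliberately simplified version for a G.711 call (default R-factor 93.2, G.711 impairment parameters); it is an approximation, not a substitute for a full G.107 implementation:

```python
def mos_from_r(r):
    """Convert an E-model R-factor to a MOS estimate (ITU-T G.107 mapping)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def estimate_mos(one_way_delay_ms, loss_pct):
    """Simplified E-model for a G.711 call: start from the default R0 of
    93.2, subtract the delay impairment (Id) and the effective equipment
    impairment for packet loss (Ie-eff, with Bpl = 4.3 for G.711)."""
    d = one_way_delay_ms
    i_delay = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    i_loss = 95 * loss_pct / (loss_pct + 4.3)
    return round(mos_from_r(93.2 - i_delay - i_loss), 2)

print(estimate_mos(20.0, 0.0))   # low delay, no loss -> well above 4
print(estimate_mos(20.0, 10.0))  # 10% loss -> unusable, well below 3
```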
4. Video Streaming Support
Video streaming is another scenario where the fairness of the Access Point comes into play – especially for high-quality test vectors. Good video streaming performance requires two capabilities of an AP or Router to be well tuned:

(i) the ability to provide clients fair access to the network
(ii) the ability to prioritize video traffic over less delay-sensitive traffic

We can verify the video performance of access points by streaming videos of different quality / resolution / bitrate at scale and seeing how many streams complete without interruptions or buffering. For this test, we will use the following bitrate vectors for each resolution:
- HD (1080p) video: 4 Mbps
- 2K (1440p) video: 6 Mbps
- 4K (2160p) video: 10 Mbps

Each test vector will run for 10 minutes.
| # of clients | HD interrupts | HD buffering | 2K interrupts | 2K buffering | 4K interrupts | 4K buffering |
|---|---|---|---|---|---|---|
| 1 | | | | | | |
| 10 | | | | | | |
| 30 | | | | | | |
| 40 | | | | | | |
| 60 | | | | | | |
Ideally there should not be any buffering or interruptions, but at larger client counts that will not be the case. So we recommend the following pass/fail criteria:

- An interruption is considered material if the break in video is more than 1 second
- A video vector is considered FAILED if any of the following happen:
  - More than 3 interruptions occur within any given 1-minute period
  - More than 6 interruptions occur within the 10-minute test vector
  - A single interruption lasts more than 3 seconds
  - Total interruption time exceeds 8 seconds during the 10-minute period
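These pass/fail rules can be encoded directly. A minimal sketch, assuming the test tool reports each material interruption as a (start time, duration) pair in seconds:

```python
def video_stream_failed(interruptions):
    """interruptions: list of (start_time_s, duration_s) pairs for breaks
    longer than 1 s during a 10-minute test vector.
    Returns True if any of the four failure criteria is hit."""
    if len(interruptions) > 6:                      # > 6 in the 10-minute vector
        return True
    if any(dur > 3.0 for _, dur in interruptions):  # a single break > 3 s
        return True
    if sum(dur for _, dur in interruptions) > 8.0:  # > 8 s total
        return True
    starts = sorted(t for t, _ in interruptions)
    # sliding 60-second window: more than 3 interruptions inside any window
    for i, t0 in enumerate(starts):
        if sum(1 for t in starts[i:] if t - t0 <= 60.0) > 3:
            return True
    return False
```

Only windows that begin at an interruption need checking: any 1-minute window with more than 3 interruptions can be shifted right until it starts at one.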
5. Connection time vs Load
When a new user tries to connect to an Access Point, the expectation is that the onboarding process is quick. The ability to connect the clients with minimal delay is a key measure of the user experience.
The connection process involves 3 phases in which the AP can influence user experience:
- Initial authentication and association messages with capabilities
- 4-way handshake including key exchange
- IP assignment from DHCP server
Together these elements determine the user experience of joining the network. The network's ability to process simultaneous requests is easily tested by measuring the connection time at different loads:
| Number of clients | Connection time | IP assignment time |
|---|---|---|
| 1 | | |
| 8 | | |
| 32 | | |
| 64 | | |
| 128 | | |
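Averages hide stragglers: with 128 clients connecting at once, the user who waits longest defines the perceived experience. A small sketch for summarizing per-client timings (the function name and statistics chosen are our suggestion, not a standard):

```python
def connection_summary(samples_ms):
    """samples_ms: per-client total connection times (association +
    4-way handshake + DHCP) in milliseconds at one load level.
    Returns (average, worst case, approximate 95th percentile) in ms."""
    ordered = sorted(samples_ms)
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    avg = sum(ordered) / len(ordered)
    return round(avg, 1), ordered[-1], p95

print(connection_summary([100, 200, 300, 400]))
```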
6. Stability
The most obvious way customers perceive the quality of a network is through the stability of the connections and service it maintains. Whatever else is happening, clients should not experience unusual drops in throughput or performance.

Unlike throughput or voice quality, there is no standard, universally accepted measure of the stability of an access point. We can adapt Mean Time Between Failures (MTBF) to measure stability under different loads:
| Number of clients | Traffic definition | MTBF |
|---|---|---|
| 8 | 2 clients x Video (60 fps / 8000 kbps / UHD), 2 clients x Voice calls, 4 clients x low-throughput traffic (100 streams per client) | |
| 16 | 4 clients x Video (60 fps / 8000 kbps / UHD), 4 clients x Voice calls, 8 clients x low-throughput traffic (100 streams per client) | |
| 30 | 5 clients x Video (60 fps / 8000 kbps / UHD), 5 clients x Voice calls, 15 clients x low-throughput traffic (100 streams per client), 5 clients x connect/disconnect cycle | |
| 40 | 8 clients x Video (60 fps / 8000 kbps / UHD), 6 clients x Voice calls, 16 clients x low-throughput traffic (100 streams per client), 5 clients x connect/disconnect cycle, 5 clients x negative connection scenario | |
The AP with the highest MTBF is the best. Again, choose the number of clients that best matches the scenarios you are trying to validate.
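The MTBF computation itself is simple; the real decision is what you count as a "failure" (a client drop, a stalled stream, a throughput collapse past a threshold of your choosing). A minimal sketch:

```python
def mtbf_hours(test_duration_h, failure_count):
    """MTBF = total observed run time / number of failures.
    What counts as a failure (drop, stall, throughput collapse) is
    up to your test definition."""
    if failure_count == 0:
        return float('inf')  # no failures observed in this window
    return test_duration_h / failure_count

print(mtbf_hours(48.0, 4))  # 4 failures over a 48-hour soak test
```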
7. Multi traffic – prioritization
Access Points are supposed to prioritize traffic depending on content: voice has the highest priority, followed by video, then best-effort tasks such as browsing, and finally background tasks like HTTP downloads.

A good access point will follow this order of priority either by using its own Deep Packet Inspection algorithms, by checking the QoS bits embedded in each frame, or via WMM QoS support in the network. To validate this capability, we check whether higher-QoS traffic is impacted when lower-QoS traffic starts up on the network.
| Number of clients | Traffic definition | Throughput per traffic type |
|---|---|---|
| 4 | 1 client x Video (60 fps / 8000 kbps / UHD / 5 minutes) – T0; 3 clients x data (FTP DL / 10 GB file) – T0 + 1 minute | |
| 8 | 2 clients x Voice call / 10 minutes – T0; 2 clients x Video (60 fps / 8000 kbps / UHD / 6 minutes) – T0 + 2 minutes; 4 clients x data (FTP DL / 10 GB file) – T0 + 4 minutes | |
| 18 | 3 clients x Voice call / 10 minutes – T0; 3 clients x Video (60 fps / 8000 kbps / UHD / 6 minutes) – T0 + 2 minutes; 6 clients x data (FTP DL / 10 GB file) – T0 + 4 minutes; 6 clients x browsing (top 20 video streaming sites) – T0 + 3 minutes | |
| 40 | 5 clients x Voice call / 10 minutes – T0; 5 clients x Video (60 fps / 8000 kbps / UHD / 6 minutes) – T0 + 2 minutes; 15 clients x data (FTP DL / 10 GB file) – T0 + 4 minutes; 15 clients x browsing (top 20 video streaming sites) – T0 + 3 minutes | |
The comparative throughput of each type of traffic should be recorded to check whether the start or end of a lower-QoS traffic type affects the performance of a higher-QoS traffic type.
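For reference, the WMM prioritization mentioned above sorts frames into four access categories (AC_VO, AC_VI, AC_BE, AC_BK). A minimal sketch of the naive mapping, which takes the top three bits of the IP DSCP field as the 802.1D user priority; note that stacks following RFC 8325 special-case some code points (e.g. EF is usually lifted into AC_VO), so treat this as an illustration rather than what every AP does:

```python
# 802.1D user priority -> WMM access category
ACCESS_CATEGORY = {7: "AC_VO", 6: "AC_VO",   # voice
                   5: "AC_VI", 4: "AC_VI",   # video
                   3: "AC_BE", 0: "AC_BE",   # best effort
                   2: "AC_BK", 1: "AC_BK"}   # background

def wmm_ac(dscp):
    """Map a DSCP value (0-63) to a WMM access category via the
    naive top-three-bits rule."""
    return ACCESS_CATEGORY[dscp >> 3]

print(wmm_ac(56))  # CS7 -> AC_VO
print(wmm_ac(40))  # CS5 -> AC_VI
print(wmm_ac(0))   # default -> AC_BE
print(wmm_ac(8))   # CS1 -> AC_BK
```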
8. Rate vs Range
In a house, unlike in an office, the coverage of a single Access Point has a critical impact on the user experience. Any location in an office will typically be covered by multiple Access Points; that is not the case in a house, where a single AP is still the default. To measure coverage, we see how throughput varies with RSSI.
| RSSI | Number of clients | Throughput |
|---|---|---|
| -40 dBm | 1 | |
| -40 dBm | 16 | |
| -40 dBm | 32 | |
| -50 dBm | 1 | |
| -50 dBm | 16 | |
| -50 dBm | 32 | |
| -60 dBm | 1 | |
| -60 dBm | 16 | |
| -60 dBm | 32 | |
| -70 dBm | 1 | |
| -70 dBm | 16 | |
| -70 dBm | 32 | |
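To relate these RSSI levels to physical distance in a test plan, the log-distance path-loss model is a common rule of thumb. The reference RSSI at 1 m and the path-loss exponent below are illustrative assumptions (roughly 2 in free space, 3 to 4 indoors); real values depend on band, antennas and walls:

```python
import math

def expected_rssi(d_m, rssi_1m=-40.0, path_loss_exp=3.0):
    """Log-distance path-loss model: predicted RSSI (dBm) at distance
    d_m metres, given the RSSI at 1 m and a path-loss exponent."""
    return rssi_1m - 10.0 * path_loss_exp * math.log10(d_m)

print(round(expected_rssi(10.0), 1))  # 10 m with the defaults -> -70.0 dBm
```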
9. NAT throughput
For all the talk of Fiber to the Home (FTTH), the bandwidth actually available is bounded by the NAT throughput – i.e. the part of the total available bandwidth that clients inside the routed network can access. If the use case also involves a significant amount of upload traffic (for example, streaming gameplay over Twitch), the router has to be tested for WAN-to-LAN as well as LAN-to-WAN throughput.

The approach is very similar to Test #1 – Throughput vs. Load. We add one additional scenario by running the connection over a VPN to see how much throughput drops.
| # of clients | UL UDP | DL UDP | UL TCP | DL TCP |
|---|---|---|---|---|
| 1 | | | | |
| 16 | | | | |
| 32 | | | | |
For the next test, run the connections over a VPN (most routers support OpenVPN):
| # of clients | UL UDP | DL UDP | UL TCP | DL TCP |
|---|---|---|---|---|
| 1 | | | | |
| 16 | | | | |
| 32 | | | | |
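A useful summary figure from the two tables is the percentage of plain NAT throughput lost to VPN encapsulation and encryption. A minimal sketch (the sample figures are illustrative):

```python
def vpn_overhead_pct(baseline_mbps, vpn_mbps):
    """Throughput lost to the VPN, as a percentage of the plain
    NAT throughput measured for the same client count."""
    return round(100.0 * (baseline_mbps - vpn_mbps) / baseline_mbps, 1)

print(vpn_overhead_pct(900.0, 540.0))  # hypothetical router -> 40.0
```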
Conclusion
At Alethea, we believe these tests help you analyze the overall performance of an access point. Many of them require a large test setup with tens of real devices and a good automation framework, or access to client simulators. The test cases are easiest to run with a WiFi client emulator: it provides repeatability, comprehensive control, ease of automation and excellent stability – capabilities difficult to achieve with real clients.
If you want us to help in your testing process, please contact us or mail us your queries at info@alethea.in
P.S: Alethea Blog has been selected as one of the Top 25 Wireless Technology Blogs on the web