
speedtest-netperf: new package to measure network performance

The speedtest-netperf.sh script measures the network throughput while
monitoring latency under load and capturing key CPU usage and frequency
statistics. The script can emulate a web-based speed test by downloading
from and then uploading to an internet server, or perform simultaneous
download and upload to mimic the stress of the FLENT test program.

It simplifies tasks such as validating ISP-provisioned speeds or setting
up and fine-tuning SQM, directly on the router. The CPU usage details
can also help determine if the demands of SQM, routing and other tasks
such as the test itself are exhausting the device's CPUs.

This script leverages earlier scripts from the CeroWrt project used for
bufferbloat mitigation, betterspeedtest.sh and netperfrunner.sh. They are
used with the permission of the author, Rich Brown.

Signed-off-by: Tony Ambardar <itugrok@yahoo.com>
Branch: lilik-openwrt-22.03
Tony Ambardar authored 6 years ago, committed by guidosarducci
commit 463590e2bc
3 changed files with 584 additions and 0 deletions:
  1. net/speedtest-netperf/Makefile (+45, -0)
  2. net/speedtest-netperf/files/README.md (+131, -0)
  3. net/speedtest-netperf/files/speedtest-netperf.sh (+408, -0)

net/speedtest-netperf/Makefile (+45, -0)

@@ -0,0 +1,45 @@
#
# Copyright (c) 2018 Tony Ambardar
# This is free software, licensed under the GNU General Public License v2.
#
include $(TOPDIR)/rules.mk
PKG_NAME:=speedtest-netperf
PKG_VERSION:=1.0.0
PKG_RELEASE:=1
PKG_LICENSE:=GPL-2.0
PKG_MAINTAINER:=Tony Ambardar <itugrok@yahoo.com>
include $(INCLUDE_DIR)/package.mk
define Package/speedtest-netperf
SECTION:=net
CATEGORY:=Network
TITLE:=Script to measure the performance of your network and router
DEPENDS:=+netperf
CONFLICTS:=speedtest
PKGARCH:=all
endef
define Package/speedtest-netperf/description
Script to measure the performance of your network and router.
Please see https://github.com/openwrt/packages/blob/master/net/speedtest-netperf/files/README.md for further information.
endef
define Build/Prepare
endef
define Build/Configure
endef
define Build/Compile
endef
define Package/speedtest-netperf/install
$(INSTALL_DIR) $(1)/usr/bin
$(INSTALL_BIN) ./files/speedtest-netperf.sh $(1)/usr/bin/
endef
$(eval $(call BuildPackage,speedtest-netperf))

net/speedtest-netperf/files/README.md (+131, -0)

@@ -0,0 +1,131 @@
Network Performance Testing
===========================
## Introduction
The `speedtest-netperf` package provides a convenient means of on-device network performance testing for OpenWrt routers. Such performance testing primarily includes characterizing the network throughput and latency, but CPU usage can also be an important secondary measurement. These aspects of network testing are motivated chiefly by the following:
1. **Throughput:** Network speed measurements can help troubleshoot transfer problems and verify an ISP's advertised speeds. Accurate throughput numbers also provide guidance for configuring other software, such as SQM ingress/egress rates, or bandwidth limits for BitTorrent.
2. **Latency:** Network latency is a key factor in high-quality experiences with real-time or interactive applications such as VOIP, gaming, or video conferencing, and excessive latency can lead to undesirable dropouts, freezes and lag. Such latency problems are endemic on the Internet and often the result of [bufferbloat](https://www.bufferbloat.net/projects/). Systematic latency measurements are an important part of identifying and mitigating this bufferbloat.
3. **CPU Usage:** Observing CPU usage under network load gives insight into whether the router is CPU-bound, or if there is CPU "headroom" to support even higher network throughput. In addition to managing network traffic, a router actively running a speed test will also use CPU cycles to generate network load, and measuring this distinct CPU usage also helps gauge its impact.
**Note:** _The `speedtest-netperf.sh` script uses servers and network bandwidth that are provided by generous volunteers (not some wealthy "big company"). Feel free to use the script to test your SQM configuration or troubleshoot network and latency problems. Continuous or high rate use of this script may result in denied access. Happy testing!_
## Theory of Operation
When launched, `speedtest-netperf.sh` uses the local `netperf` application to run several upload and download streams (files) to a server on the Internet. This places a heavy load on the bottleneck link of your network (probably your connection to the Internet) while measuring the total bandwidth of the link during the transfers. Under this network load, the script simultaneously measures the latency of pings to see whether the file transfers affect the responsiveness of your network. Additionally, the script tracks the per-CPU processor usage, as well as the CPU usage of the `netperf` instances used for the test. On systems that report CPU frequency scaling, the script can also report per-CPU frequencies.
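In outline, the measurement looks something like the following stripped-down sketch of a download-only run with the defaults described above (5 streams, 60 seconds, netperf.bufferbloat.net); this is only an illustration of the pattern, not the real implementation, which adds CPU and frequency sampling, option parsing, error checks and cleanup:
```
# Simplified sketch only -- see speedtest-netperf.sh for the real logic.
ping gstatic.com > /tmp/ping.$$ &                 # measure latency under load
PING_PID=$!
for i in $(seq 5); do                             # five parallel download streams
    netperf -H netperf.bufferbloat.net -t TCP_MAERTS -l 60 -v 0 -P 0 >> /tmp/speed.$$ &
done
wait $(pgrep -P $$ netperf)                       # block until the transfers finish
kill $PING_PID
awk '{sum+=$1} END {printf "Download: %.2f Mbps\n", sum}' /tmp/speed.$$
```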
The script operates in two distinct modes of network loading: *sequential* and *concurrent*. In the default sequential mode, the script emulates a web-based speed test by first downloading and then uploading network streams. In concurrent mode, the script mimics the stress test of the [FLENT](https://github.com/tohojo/flent) program by downloading and uploading streams simultaneously.
Sequential mode is preferred when measuring peak upload and download speeds for SQM configuration or testing ISP speed claims, because the measurements are unimpacted by traffic in the opposite direction.
Concurrent mode places greater stress on the network and can expose additional latency problems. It provides a more realistic estimate of expected bidirectional throughput. However, the download and upload speeds reported may be considerably lower than your line's rated speed. This is not a bug, nor is it a problem with your internet connection. It occurs because the ACK (acknowledge) messages sent back to the sender can consume a significant fraction of a link's capacity (as much as 50% with highly asymmetric links, e.g. 15:1 or 20:1).
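As a rough, back-of-envelope illustration (assuming ~1500-byte data segments and one ~60-byte ACK for every two segments, so ACK traffic is roughly 2% of the data rate): a 20 Mbps download generates about 0.4 Mbps of ACK traffic, which on a 1 Mbps uplink (a 20:1 link) already consumes around 40% of the upload capacity.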
After running `speedtest-netperf.sh`, if latency increases substantially during the data transfers, then other network activity, such as voice or video chat, gaming, and general interactive usage, will likely suffer. Gamers will see this as frustrating lag when someone else uses the network, Skype and FaceTime users will see dropouts or freezes, and VOIP service may be unusable.
## Installation
This package and its dependencies should be installed from the official OpenWrt software repository with the command:
`opkg install speedtest-netperf`
If it is not yet available for your OpenWrt release, you can usually download and install the same package from a newer release, since it is architecture-independent and very portable.
As a last resort, you may download and install the latest version directly from the author's personal repo: e.g.
```
cd /tmp
uclient-fetch https://github.com/guidosarducci/papal-repo/raw/master/speedtest-netperf_1.0.0-1_all.ipk
opkg install speedtest-netperf_1.0.0-1_all.ipk
```
## Usage
The `speedtest-netperf.sh` script measures throughput, latency and CPU usage during file transfers. To invoke it:
speedtest-netperf.sh [-4 | -6] [-H netperf-server] [-t duration] [-p host-to-ping] [-n simultaneous-streams ] [-s | -c]
Options, if present, are:
-4 | -6: Enable ipv4 or ipv6 testing (default - ipv4)
-H | --host: DNS or Address of a netperf server (default - netperf.bufferbloat.net)
Alternate servers are netperf-east (US, east coast),
netperf-west (US, California), and netperf-eu (Denmark).
-t | --time: Duration of each direction's test (default - 60 seconds)
-p | --ping: Host to ping to measure latency (default - gstatic.com)
-n | --number: Number of simultaneous sessions (default - 5 sessions)
-s | --sequential: Sequential download/upload (default - sequential)
-c | --concurrent: Concurrent download/upload
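For example, the following invocation (built only from the options above; the ping target 8.8.8.8 is just an arbitrary reachable host) runs a 30-second concurrent test with 8 streams per direction:
```
speedtest-netperf.sh --concurrent --time 30 --number 8 --ping 8.8.8.8
```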
The primary script output shows download and upload speeds, together with the percent packet loss, and a summary of latencies, including min, max, average, median, and 10th and 90th percentiles so you can get a sense of the distribution.
The tool also summarizes CPU usage statistics during the test, to highlight whether speeds may be CPU-bound during testing, and to provide a better sense of how much CPU "headroom" would be available during normal operation. The data includes per-CPU load and frequency (if supported), and CPU usage of the `netperf` test programs.
### Examples
Below is a comparison of sequential speed testing runs showing the benefits of SQM. On the left is a test without SQM. Note that the latency gets large (greater than half a second), meaning that network performance would be poor for anyone else using the network. On the right is a test using SQM: the latency goes up a little (less than 21 msec under load), and network performance remains good.
Notice also that activating SQM requires more CPU, but in both cases the router is not CPU-bound and is likely capable of supporting higher throughput.
```
[Sequential Test: NO SQM, POOR LATENCY] [Sequential Test: WITH SQM, GOOD LATENCY]
# speedtest-netperf.sh # speedtest-netperf.sh
[date/time] Starting speedtest for 60 seconds per transfer [date/time] Starting speedtest for 60 seconds per transfer
session. Measure speed to netperf.bufferbloat.net (IPv4) session. Measure speed to netperf.bufferbloat.net (IPv4)
while pinging gstatic.com. Download and upload sessions are while pinging gstatic.com. Download and upload sessions are
sequential, each with 5 simultaneous streams. sequential, each with 5 simultaneous streams.
Download: 35.40 Mbps Download: 32.69 Mbps
Latency: (in msec, 61 pings, 0.00% packet loss) Latency: (in msec, 61 pings, 0.00% packet loss)
Min: 10.228 Min: 9.388
10pct: 38.864 10pct: 12.038
Median: 47.027 Median: 14.550
Avg: 45.953 Avg: 14.827
90pct: 51.867 90pct: 17.122
Max: 56.758 Max: 20.558
Processor: (in % busy, avg +/- stddev, 57 samples) Processor: (in % busy, avg +/- stddev, 55 samples)
cpu0: 56 +/- 6 cpu0: 82 +/- 5
Overhead: (in % total CPU used) Overhead: (in % total CPU used)
netperf: 34 netperf: 51
Upload: 5.38 Mbps Upload: 5.16 Mbps
Latency: (in msec, 62 pings, 0.00% packet loss) Latency: (in msec, 62 pings, 0.00% packet loss)
Min: 11.581 Min: 9.153
10pct: 424.616 10pct: 10.401
Median: 504.339 Median: 14.151
Avg: 491.511 Avg: 14.056
90pct: 561.466 90pct: 17.241
Max: 580.896 Max: 20.733
Processor: (in % busy, avg +/- stddev, 60 samples) Processor: (in % busy, avg +/- stddev, 59 samples)
cpu0: 11 +/- 5 cpu0: 16 +/- 5
Overhead: (in % total CPU used) Overhead: (in % total CPU used)
netperf: 1 netperf: 1
```
Below is another comparison of SQM, but now using a concurrent speed test. Notice that without SQM, the total throughput drops nearly 11 Mbps compared to the sequential test without SQM above. This is due both to poorer latencies and to the consumption of upload bandwidth by ACK messages. As before, the use of SQM on the right not only yields a marked improvement in latencies, but also recovers over 6 Mbps of throughput (with SQM using CAKE's ACK filtering).
```
[Concurrent Test: NO SQM, POOR LATENCY] [Concurrent Test: WITH SQM, GOOD LATENCY]
# speedtest-netperf.sh --concurrent # speedtest-netperf.sh --concurrent
[date/time] Starting speedtest for 60 seconds per transfer [date/time] Starting speedtest for 60 seconds per transfer
session. Measure speed to netperf.bufferbloat.net (IPv4) session. Measure speed to netperf.bufferbloat.net (IPv4)
while pinging gstatic.com. Download and upload sessions are while pinging gstatic.com. Download and upload sessions are
concurrent, each with 5 simultaneous streams. concurrent, each with 5 simultaneous streams.
Download: 25.24 Mbps Download: 31.92 Mbps
Upload: 4.75 Mbps Upload: 4.41 Mbps
Latency: (in msec, 59 pings, 0.00% packet loss) Latency: (in msec, 61 pings, 0.00% packet loss)
Min: 9.401 Min: 10.244
10pct: 129.593 10pct: 13.161
Median: 189.312 Median: 16.885
Avg: 195.418 Avg: 17.219
90pct: 226.628 90pct: 21.166
Max: 416.665 Max: 28.224
Processor: (in % busy, avg +/- stddev, 59 samples) Processor: (in % busy, avg +/- stddev, 56 samples)
cpu0: 45 +/- 12 cpu0: 86 +/- 4
Overhead: (in % total CPU used) Overhead: (in % total CPU used)
netperf: 25 netperf: 42
```
## Provenance
The `speedtest-netperf.sh` utility leverages earlier scripts from the CeroWrt project used to measure network throughput and latency: [betterspeedtest.sh](https://github.com/richb-hanover/OpenWrtScripts#betterspeedtestsh) and [netperfrunner.sh](https://github.com/richb-hanover/OpenWrtScripts#netperfrunnersh). Both scripts are gratefully used with the permission of their author, [Rich Brown](https://github.com/richb-hanover/OpenWrtScripts).

net/speedtest-netperf/files/speedtest-netperf.sh (+408, -0)

@@ -0,0 +1,408 @@
#!/bin/sh
# This speed testing script provides a convenient means of on-device network
# performance testing for OpenWrt routers, and subsumes functionality of the
# earlier CeroWrt scripts betterspeedtest.sh and netperfrunner.sh written by
# Rich Brown.
#
# When launched, the script uses netperf to run several upload and download
# streams to an Internet server. This places heavy load on the bottleneck link
# of your network (probably your Internet connection) while measuring the total
# bandwidth of the link during the transfers. Under this network load, the
# script simultaneously measures the latency of pings to see whether the file
# transfers affect the responsiveness of your network. Additionally, the script
# tracks the per-CPU processor usage, as well as the CPU usage of the netperf
# processes used for the test. On systems that report CPU frequency scaling, the script can also
# report per-CPU frequencies.
#
# The script operates in two modes of network loading: sequential and
# concurrent. The default sequential mode emulates a web-based speed test by
# first downloading and then uploading network streams, while concurrent mode
# provides a stress test by downloading and uploading streams simultaneously.
#
# NOTE: The script uses servers and network bandwidth that are provided by
# generous volunteers (not some wealthy "big company"). Feel free to use the
# script to test your SQM configuration or troubleshoot network and latency
# problems. Continuous or high rate use of this script may result in denied
# access. Happy testing!
#
# For more information, consult the online README.md:
# https://github.com/openwrt/packages/blob/master/net/speedtest-netperf/files/README.md
# Usage: speedtest-netperf.sh [-4 | -6] [ -H netperf-server ] [ -t duration ] [ -p host-to-ping ] [ -n simultaneous-streams ] [ -s | -c ]
# Options (if present):
#
# -H | --host: netperf server name or IP (default netperf.bufferbloat.net)
# Alternate servers are netperf-east (east coast US),
# netperf-west (California), and netperf-eu (Denmark)
# -4 | -6: Enable ipv4 or ipv6 testing (ipv4 is the default)
# -t | --time: Duration of each direction's test - (default - 60 seconds)
# -p | --ping: Host to ping to measure latency (default - gstatic.com)
# -n | --number: Number of simultaneous sessions per direction (default - 5,
#                 whether the download/upload runs are concurrent or sequential)
# -s | -c: Sequential or concurrent download/upload (default - sequential)
# Copyright (c) 2014 - Rich Brown <rich.brown@blueberryhillsoftware.com>
# Copyright (c) 2018 - Tony Ambardar <itugrok@yahoo.com>
# GPLv2
# Summarize contents of the ping's output file as min, avg, median, max, etc.
# input parameter ($1) file contains the output of the ping command
summarize_pings() {
# Process the ping times, and summarize the results
# grep to keep lines with "time=", and sed to isolate time stamps and sort them
# awk builds an array of those values, prints first & last (which are min, max)
# and computes average.
# If the number of samples is >= 10, also computes median, and 10th and 90th
# percentile readings.
sed 's/^.*time=\([^ ]*\) ms/\1 pingtime/' < $1 | grep -v "PING" | sort -n | awk '
BEGIN {numdrops=0; numrows=0;}
{
if ( $2 == "pingtime" ) {
numrows += 1;
arr[numrows]=$1; sum+=$1;
} else {
numdrops += 1;
}
}
END {
pc10="-"; pc90="-"; med="-";
if (numrows>=10) {
ix=int(numrows/10); pc10=arr[ix]; ix=int(numrows*9/10);pc90=arr[ix];
if (numrows%2==1) med=arr[(numrows+1)/2]; else med=(arr[numrows/2]);
}
pktloss = numdrops>0 ? numdrops/(numdrops+numrows) * 100 : 0;
printf(" Latency: [in msec, %d pings, %4.2f%% packet loss]\n",numdrops+numrows,pktloss)
if (numrows>0) {
fmt="%9s: %7.3f\n"
printf(fmt fmt fmt fmt fmt fmt, "Min",arr[1],"10pct",pc10,"Median",med,
"Avg",sum/numrows,"90pct",pc90,"Max",arr[numrows])
}
}'
}
# Summarize the contents of the load file, speedtest process stat file, cpuinfo
# file to show mean/stddev CPU utilization, CPU freq, netperf CPU usage.
# input parameter ($1) file contains CPU load/frequency samples
summarize_load() {
cat $1 /proc/$$/stat | awk -v SCRIPT_PID=$$ '
# track CPU frequencies
$1 == "cpufreq" {
sum_freq[$2]+=$3/1000
n_freq_samp[$2]++
}
# total CPU of speedtest processes
$1 == SCRIPT_PID {
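# /proc/<pid>/stat fields 16 (cutime) and 17 (cstime) are the CPU time of
# waited-for children, i.e. the netperf processes, in clock ticks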
tot=$16+$17
if (init_proc_cpu=="") init_proc_cpu=tot
proc_cpu=tot-init_proc_cpu
}
# track aggregate CPU stats
$1 == "cpu" {
tot=0; for (f=2;f<=8;f++) tot+=$f
if (init_cpu=="") init_cpu=tot
tot_cpu=tot-init_cpu
n_load_samp++
}
# track per-CPU stats
$1 ~ /cpu[0-9]+/ {
tot=0; for (f=2;f<=8;f++) tot+=$f
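# busy time excludes idle ($5) and iowait ($6)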
usg=tot-($5+$6)
if (init_tot[$1]=="") {
init_tot[$1]=tot
init_usg[$1]=usg
cpus[n_cpus++]=$1
}
if (last_tot[$1]>0) {
sum_usg_2[$1] += ((usg-last_usg[$1])/(tot-last_tot[$1]))^2
}
last_tot[$1]=tot
last_usg[$1]=usg
}
END {
printf(" CPU Load: [in %% busy (avg +/- std dev)")
for (i in sum_freq) if (sum_freq[i]>0) {printf(" @ avg frequency"); break}
if (n_load_samp>0) n_load_samp--
printf(", %d samples]\n", n_load_samp)
for (i=0;i<n_cpus;i++) {
c=cpus[i]
if (n_load_samp>0) {
avg_usg=(last_tot[c]-init_tot[c])
avg_usg=avg_usg>0 ? (last_usg[c]-init_usg[c])/avg_usg : 0
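# std dev via the identity Var(x) = E[x^2] - (E[x])^2 over the per-second utilization samples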
std_usg=sum_usg_2[c]/n_load_samp-avg_usg^2
std_usg=std_usg>0 ? sqrt(std_usg) : 0
printf("%9s: %5.1f +/- %4.1f", c, avg_usg*100, std_usg*100)
avg_freq=n_freq_samp[c]>0 ? sum_freq[c]/n_freq_samp[c] : 0
if (avg_freq>0) printf(" @ %4d MHz", avg_freq)
printf("\n")
}
}
printf(" Overhead: [in %% used of total CPU available]\n")
printf("%9s: %5.1f\n", "netperf", tot_cpu>0 ? proc_cpu/tot_cpu*100 : 0)
}'
}
# Summarize the contents of the speed file to show formatted transfer rate.
# input parameter ($1) indicates transfer direction
# input parameter ($2) file contains speed info from netperf
summarize_speed() {
printf "%9s: %6.2f Mbps\n" $1 $(awk '{s+=$1} END {print s}' $2)
}
# Capture process load, then per-CPU load/frequency info at 1-second intervals.
sample_load() {
local cpus="$(find /sys/devices/system/cpu -name 'cpu[0-9]*' 2>/dev/null)"
local f="cpufreq/scaling_cur_freq"
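# Emit an initial snapshot of this script's own /proc/<pid>/stat so that
# summarize_load() can baseline the netperf (child) CPU time against the final snapshot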
cat /proc/$$/stat
while : ; do
sleep 1s
egrep "^cpu[0-9]*" /proc/stat
for c in $cpus; do
[ -r $c/$f ] && echo "cpufreq $(basename $c) $(cat $c/$f)"
done
done
}
# Print a line of dots as a progress indicator.
print_dots() {
while : ; do
printf "."
sleep 1s
done
}
# Start $MAXSESSIONS datastreams between netperf client and server
# netperf writes the sole output value (in Mbps) to stdout when completed
start_netperf() {
for i in $( seq $MAXSESSIONS ); do
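# -t: netperf test type, -l: duration in seconds; -v 0 and -P 0 limit output to the single throughput figure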
netperf $TESTPROTO -H $TESTHOST -t $1 -l $TESTDUR -v 0 -P 0 >> $2 &
# echo "Starting PID $! params: $TESTPROTO -H $TESTHOST -t $1 -l $TESTDUR -v 0 -P 0 >> $2"
done
}
# Wait until each of the background netperf processes completes
wait_netperf() {
# gets a list of PIDs for child processes named 'netperf'
# echo "Process is $$"
# echo $(pgrep -P $$ netperf)
local err=0
for i in $(pgrep -P $$ netperf); do
# echo "Waiting for $i"
wait $i || err=1
done
return $err
}
# Stop the background netperf processes
kill_netperf() {
# gets a list of PIDs for child processes named 'netperf'
# echo "Process is $$"
# echo $(pgrep -P $$ netperf)
for i in $(pgrep -P $$ netperf); do
# echo "Stopping $i"
kill -9 $i
wait $i 2>/dev/null
done
}
# Stop the current sample_load() process
kill_load() {
# echo "Load: $LOAD_PID"
kill -9 $LOAD_PID
wait $LOAD_PID 2>/dev/null
LOAD_PID=0
}
# Stop the current print_dots() process
kill_dots() {
# echo "Dots: $DOTS_PID"
kill -9 $DOTS_PID
wait $DOTS_PID 2>/dev/null
DOTS_PID=0
}
# Stop the current ping process
kill_pings() {
# echo "Pings: $PING_PID"
kill -9 $PING_PID
wait $PING_PID 2>/dev/null
PING_PID=0
}
# Stop the current load, pings and dots, and exit
# ping command catches and handles first Ctrl-C, so you have to hit it again...
kill_background_and_exit() {
kill_netperf
kill_load
kill_dots
rm -f $DLFILE
rm -f $ULFILE
rm -f $LOADFILE
rm -f $PINGFILE
echo; echo "Stopped"
exit 1
}
# Measure speed, ping latency and cpu usage of netperf data transfers
# Called with direction parameter: "Download", "Upload", or "Bidirectional"
# The function gets other info from globals and command-line arguments.
measure_direction() {
# Create temp files for netperf up/download results
ULFILE=$(mktemp /tmp/netperfUL.XXXXXX) || exit 1
DLFILE=$(mktemp /tmp/netperfDL.XXXXXX) || exit 1
PINGFILE=$(mktemp /tmp/measurepings.XXXXXX) || exit 1
LOADFILE=$(mktemp /tmp/measureload.XXXXXX) || exit 1
# echo $ULFILE $DLFILE $PINGFILE $LOADFILE
local dir=$1
local spd_test
# Start dots
print_dots &
DOTS_PID=$!
# echo "Dots PID: $DOTS_PID"
# Start Ping
if [ "$TESTPROTO" = "-4" ]; then
ping $PINGHOST > $PINGFILE &
else
ping6 $PINGHOST > $PINGFILE &
fi
PING_PID=$!
# echo "Ping PID: $PING_PID"
# Start CPU load sampling
sample_load > $LOADFILE &
LOAD_PID=$!
# echo "Load PID: $LOAD_PID"
# Start netperf datastreams between client and server
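# TCP_STREAM sends data to the server (upload); TCP_MAERTS (STREAM reversed) receives it (download)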
if [ $dir = "Bidirectional" ]; then
start_netperf TCP_STREAM $ULFILE
start_netperf TCP_MAERTS $DLFILE
else
# Start unidirectional netperf with the proper direction
case $dir in
Download) spd_test="TCP_MAERTS";;
Upload) spd_test="TCP_STREAM";;
esac
start_netperf $spd_test $DLFILE
fi
# Wait until background netperf processes complete, check errors
if ! wait_netperf; then
echo;echo "WARNING: netperf returned errors. Results may be inaccurate!"
fi
# When netperf completes, stop the CPU monitor, dots and pings
kill_load
kill_pings
kill_dots
echo
# Print TCP Download/Upload speed
if [ $dir = "Bidirectional" ]; then
summarize_speed Download $DLFILE
summarize_speed Upload $ULFILE
else
summarize_speed $dir $DLFILE
fi
# Summarize the ping data
summarize_pings $PINGFILE
# Summarize the load data
summarize_load $LOADFILE
# Clean up
rm -f $DLFILE
rm -f $ULFILE
rm -f $PINGFILE
rm -f $LOADFILE
}
# ------- Start of the main routine --------
# Set initial values for defaults
TESTHOST="netperf.bufferbloat.net"
TESTDUR="60"
PINGHOST="gstatic.com"
MAXSESSIONS=5
TESTPROTO="-4"
TESTSEQ=1
# read the options
# extract options and their arguments into variables.
while [ $# -gt 0 ]
do
case "$1" in
-s|--sequential) TESTSEQ=1 ; shift 1 ;;
-c|--concurrent) TESTSEQ=0 ; shift 1 ;;
-4|-6) TESTPROTO=$1 ; shift 1 ;;
-H|--host)
case "$2" in
"") echo "Missing hostname" ; exit 1 ;;
*) TESTHOST=$2 ; shift 2 ;;
esac ;;
-t|--time)
case "$2" in
"") echo "Missing duration" ; exit 1 ;;
*) TESTDUR=$2 ; shift 2 ;;
esac ;;
-p|--ping)
case "$2" in
"") echo "Missing ping host" ; exit 1 ;;
*) PINGHOST=$2 ; shift 2 ;;
esac ;;
-n|--number)
case "$2" in
"") echo "Missing number of simultaneous streams" ; exit 1 ;;
*) MAXSESSIONS=$2 ; shift 2 ;;
esac ;;
--) shift ; break ;;
*) echo "Usage: speedtest-netperf.sh [ -s | -c ] [-4 | -6] [ -H netperf-server ] [ -t duration ] [ -p host-to-ping ] [ -n simultaneous-sessions ]" ; exit 1 ;;
esac
done
# Check dependencies
if ! netperf -V >/dev/null 2>&1; then
echo "Missing netperf program, please install" ; exit 1
fi
# Start the main test
DATE=$(date "+%Y-%m-%d %H:%M:%S")
echo "$DATE Starting speedtest for $TESTDUR seconds per transfer session."
echo "Measure speed to $TESTHOST (IPv${TESTPROTO#-}) while pinging $PINGHOST."
echo -n "Download and upload sessions are "
[ "$TESTSEQ " -eq "1" ] && echo -n "sequential," || echo -n "concurrent,"
echo " each with $MAXSESSIONS simultaneous streams."
# Catch a Ctrl-C and stop background netperf, CPU stats, pinging and print_dots
trap kill_background_and_exit HUP INT TERM
if [ $TESTSEQ -eq "1" ]; then
measure_direction "Download"
measure_direction "Upload"
else
measure_direction "Bidirectional"
fi
