netifd: change RPS/XPS handling to all CPUs and disable by default

The current implementation significantly lowers lantiq
performance [1][2] by using RPS with non-IRQ CPUs and XPS
with alternating CPUs.

The previous netifd implementation simply used all CPUs (by
default, though it could be configured), and this patch
essentially reverts to that behaviour.
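
As a hypothetical sketch (not the actual script), the all-CPUs behaviour boils down to writing a mask with every processor bit set to each queue's rps_cpus/xps_cpus file; the variable names below mirror the smp_tune script, but the snippet itself is illustrative only:

```shell
# Hypothetical sketch: build the all-CPUs hex mask that the all-CPUs
# behaviour writes to each queue's rps_cpus/xps_cpus file.
NPROCS="$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 4)"  # CPU count
PROC_MASK="$(( (1 << NPROCS) - 1 ))"   # e.g. 4 CPUs -> 0xf (all bits set)
printf '%x\n' "$PROC_MASK"
```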

The only documentation suggesting the use of non-interrupt
CPUs is Red Hat's [3], which notes that if the network
interrupt rate is extremely high, excluding the CPU that
handles network interrupts *may* also improve performance.

The original packet steering patches [4] advise that the
optimal settings for the CPU mask seem to depend on
architecture and cache hierarchy, so one size does not fit
all. They also advise that the processing overhead on a
lightly loaded server can cause performance degradation.

Ideally, proper IRQ balancing is a better option, whether
via the irqbalance daemon or manual configuration.
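
As a hypothetical manual example (the IRQ number and mask below are placeholders, not values from this patch), pinning a network interrupt to a single CPU is done through the procfs smp_affinity bitmask:

```shell
# Hypothetical example: pin IRQ 27 (placeholder number) to CPU 1 only.
# smp_affinity takes a hex CPU bitmask; bit 1 set -> mask 2.
IRQ=27
MASK=2
# Guarded so the snippet is safe to run without the IRQ present or root.
[ -w "/proc/irq/$IRQ/smp_affinity" ] \
	&& echo "$MASK" > "/proc/irq/$IRQ/smp_affinity" \
	|| true
```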

The kernel does not enable packet steering by default, so
disable it in OpenWrt by default as well. (Though mvebu,
with its hardware scheduling issues [5], might want to
enable packet steering by default.)

Change the undocumented "default_ps" parameter to the
clearer "packet_steering" parameter. The old parameter was
only ever set in
target/linux/mediatek/base-files/etc/uci-defaults/99-net-ps
and matched the default.
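
For anyone who wants to keep packet steering enabled after this change, a hypothetical opt-in via UCI would look like this (the option path matches the script below; the reload step is an assumption about the deployment):

```shell
# Hypothetical usage: opt back in to packet steering via UCI.
uci set network.@globals[0].packet_steering='1'
uci commit network
/etc/init.d/network reload
```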

[1] https://forum.openwrt.org/t/18-06-4-speed-fix-for-bt-homehub-5a
[2] https://openwrt.ebilan.co.uk/viewtopic.php?f=7&t=1105
[3] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-rps
[4] https://marc.info/?l=linux-netdev&m=125792239522685&w=2
[5] https://git.openwrt.org/?p=openwrt/openwrt.git;a=commitdiff;h=2e1f6f1682d3974d8ea52310e460f1bbe470390f

Fixes: #1852
Fixes: #2573

Signed-off-by: Alan Swanson <reiver@improbability.net>
Branch: master
Author: Alan Swanson, 2019-11-15 14:05:35 +00:00; committed by Mathias Kresin
parent d83150e3fb
commit d3868f15f8
1 changed file with 4 additions and 9 deletions


@@ -34,8 +34,8 @@ set_hex_val() {
 	echo "$val" > "$file"
 }
 
-default_ps="$(uci get "network.@globals[0].default_ps")"
-[ -n "$default_ps" -a "$default_ps" != 1 ] && exit 0
+packet_steering="$(uci get "network.@globals[0].packet_steering")"
+[ "$packet_steering" != 1 ] && exit 0
 
 exec 512>/var/lock/smp_tune.lock
 flock 512 || exit 1
@@ -53,15 +53,10 @@ for dev in /sys/class/net/*; do
 	irq_cpu_mask="$((1 << $irq_cpu))"
 
 	for q in ${dev}/queues/rx-*; do
-		set_hex_val "$q/rps_cpus" "$(($PROC_MASK & ~$irq_cpu_mask))"
+		set_hex_val "$q/rps_cpus" "$PROC_MASK"
 	done
 
-	ntxq="$(ls -d ${dev}/queues/tx-* | wc -l)"
-
-	idx=$(($irq_cpu + 1))
 	for q in ${dev}/queues/tx-*; do
-		set_hex_val "$q/xps_cpus" "$((1 << $idx))"
-		let "idx = idx + 1"
-		[ "$idx" -ge "$NPROCS" ] && idx=0
+		set_hex_val "$q/xps_cpus" "$PROC_MASK"
 	done
 done