Message-ID: <20180821113923.15b5c52e@xeon-e3>
Date: Tue, 21 Aug 2018 11:39:23 -0700
From: Stephen Hemminger <stephen@...workplumber.org>
To: netdev@...r.kernel.org
Subject: Fw: [Bug 200879] New: Poor network performance using CX-5 Mellanox
card
Begin forwarded message:
Date: Tue, 21 Aug 2018 18:37:01 +0000
From: bugzilla-daemon@...zilla.kernel.org
To: stephen@...workplumber.org
Subject: [Bug 200879] New: Poor network performance using CX-5 Mellanox card
https://bugzilla.kernel.org/show_bug.cgi?id=200879
Bug ID: 200879
Summary: Poor network performance using CX-5 Mellanox card
Product: Networking
Version: 2.5
Kernel Version: 4.18
Hardware: Intel
OS: Linux
Tree: Mainline
Status: NEW
Severity: normal
Priority: P1
Component: IPV4
Assignee: stephen@...workplumber.org
Reporter: kolga@...app.com
Regression: No
I'm having issues with the latest kernel (4.18) and Mellanox CX-5 cards.
Tested both with a direct connection between the machines and via a 40G link.
Throughput is asymmetric.
I have tested several kernels between 4.15-rc4 and 4.18. What I notice is
asymmetric flow performance: the "good" direction gets 20+G and the bad
direction gets 2+G. I measure performance over multiple runs (10 per
direction). While 4.15-rc4 did not get symmetric 20+G performance all the
time either (details below), the poor performance becomes more prevalent
starting with the 4.16 kernel.
In 4.15-rc4, 6 out of 10 runs show good performance (20+G) in the bad
direction; the other direction is mostly 28+G (7 out of 10 runs, with the
remaining 3 runs dropping to 15+G).
In 4.16, 3 out of 10 runs show good performance (20+G); the other direction
is mostly 20+G (7 out of 10 runs, with the remaining 3 runs dropping to 15G).
In 4.17, 0 out of 10 runs show good performance; the other direction is
mostly 20+G (7 out of 10 runs, with the remaining 3 runs dropping to 5G).
In 4.18, 0 out of 10 runs show good performance in the "bad direction"; the
"good direction" is now also pretty bad, with 7 out of 10 runs between 7-10G
and 3 runs at 11-17G.
Let me start with a description of the setup. I have 2 machines:
sti-rx200-231 and sti-rx200-232. I have been running iperf3 bandwidth tests,
first with the server on -231 and then with the server on -232. The "bad"
direction is when -232 is the server and -231 is the client.
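For reference, the per-kernel numbers above come from 10 back-to-back runs in
each direction; a minimal sketch of how such a batch can be driven (IPs as in
the single runs shown further below):
# server side (e.g. on -232 for the "bad" direction)
./iperf3 -s
# client side, 10 runs back to back
for i in $(seq 1 10); do ./iperf3 -c 172.20.35.191; done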
> can you share some configuration details:
> CPU numa and affinity details:
[kolga@...-rx200-232 ~]$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 32155 MB
node 0 free: 30394 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 32231 MB
node 1 free: 30486 MB
node distances:
node 0 1
0: 10 20
1: 20 10
[kolga@...-rx200-231 ~]$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 32161 MB
node 0 free: 31100 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 32231 MB
node 1 free: 29774 MB
node distances:
node 0 1
0: 10 20
1: 20 10
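In case affinity is the issue, this is the kind of check and pinning that can
be done next (a sketch; it assumes the PCI device behind enp4s0 reports its
NUMA node in sysfs):
# which NUMA node the NIC is attached to
cat /sys/class/net/enp4s0/device/numa_node
# pin the iperf3 client (or server) to that node, e.g. node 0
numactl --cpunodebind=0 --membind=0 ./iperf3 -c 172.20.35.191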
> ethtool -l
[kolga@...-rx200-231 ~]$ sudo ethtool -l enp4s0
Channel parameters for enp4s0:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: 24
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 24
(same for the other machine)
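If fewer queues are worth trying (e.g. only the cores of one NUMA node), I can
retest with a reduced channel count; a sketch:
# e.g. 12 combined channels, matching one 12-core node
sudo ethtool -L enp4s0 combined 12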
> ethtool -g
[kolga@...-rx200-231 ~]$ sudo ethtool -g enp4s0
Ring parameters for enp4s0:
Pre-set maximums:
RX: 8192
RX Mini: 0
RX Jumbo: 0
TX: 8192
Current hardware settings:
RX: 1024
RX Mini: 0
RX Jumbo: 0
TX: 1024
(same for the other machine)
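The rings are at 1024 out of a possible 8192; bumping them for a retest would
look like this (sketch):
sudo ethtool -G enp4s0 rx 8192 tx 8192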
> ethtool -x
[kolga@...-rx200-231 ~]$ sudo ethtool -x enp4s0
RX flow hash indirection table for enp4s0 with 24 RX ring(s):
0: 0 1 2 3 4 5 6 7
8: 8 9 10 11 12 13 14 15
16: 16 17 18 19 20 21 22 23
24: 0 1 2 3 4 5 6 7
32: 8 9 10 11 12 13 14 15
40: 16 17 18 19 20 21 22 23
48: 0 1 2 3 4 5 6 7
56: 8 9 10 11 12 13 14 15
64: 16 17 18 19 20 21 22 23
72: 0 1 2 3 4 5 6 7
80: 8 9 10 11 12 13 14 15
88: 16 17 18 19 20 21 22 23
96: 0 1 2 3 4 5 6 7
104: 8 9 10 11 12 13 14 15
112: 16 17 18 19 20 21 22 23
120: 0 1 2 3 4 5 6 7
RSS hash key:
5e:1c:93:e2:ec:b6:44:b1:e4:ec:b1:20:57:ab:90:f6:0c:1a:46:13:b8:19:66:c8:56:0c:06:b2:d5:53:a6:4d:89:6b:0b:b1:d4:30:90:31
(same for the other machine but the RSS hash key is different)
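The indirection table spreads flows across all 24 rings, i.e. across both NUMA
nodes; restricting RSS to a subset is also easy to test (a sketch; it assumes
the lower-numbered rings are serviced by node-0 CPUs):
# hash flows over the first 12 rings only
sudo ethtool -X enp4s0 equal 12
# restore the default spread afterwards
sudo ethtool -X enp4s0 equal 24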
> ethtool -k
[kolga@...-rx200-231 ~]$ sudo ethtool -k enp4s0
Features for enp4s0:
Cannot get device udp-fragmentation-offload settings: Operation not supported
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: off
tx-vlan-stag-hw-insert: on
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: on [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: on
tls-hw-tx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
(I think it's the same for the other machine, but just in case here's the full
output; a quick way to confirm the two match is sketched after the listing
below.)
[kolga@...-rx200-232 ~]$ sudo ethtool -k enp4s0
Features for enp4s0:
Cannot get device udp-fragmentation-offload settings: Operation not supported
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
highdma: on [fixed]
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-gso-partial: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off
rx-all: off
tx-vlan-stag-hw-insert: on
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: on [fixed]
l2-fwd-offload: off [fixed]
hw-tc-offload: off
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: on
tls-hw-tx-offload: off [fixed]
rx-gro-hw: off [fixed]
tls-hw-record: off [fixed]
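To confirm the two feature lists really match, the outputs can simply be
diffed (sketch):
sudo ethtool -k enp4s0 > /tmp/features-$(hostname).txt
# copy one file across and compare
diff /tmp/features-sti-rx200-231.txt /tmp/features-sti-rx200-232.txt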
> ethtool --show-priv-flags
[kolga@...-rx200-231 ~]$ sudo ethtool --show-priv-flags enp4s0
Private flags for enp4s0:
rx_cqe_moder : on
tx_cqe_moder : off
rx_cqe_compress: off
rx_striding_rq : on
(same for the other machine)
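If flipping any of these for a test is useful (e.g. striding RQ or CQE
moderation), the runs would look like this (sketch; flag names as reported
above):
sudo ethtool --set-priv-flags enp4s0 rx_striding_rq off
# ... re-run iperf3 ...
sudo ethtool --set-priv-flags enp4s0 rx_striding_rq on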
> ethtool -S // before and after the good and bad runs
> perf report/top while running the test.
This is a run where -232 is the server and -231 is the client.
[kolga@...-rx200-232 src]$ ./iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.20.35.189, port 37302
[ 5] local 172.20.35.191 port 5201 connected to 172.20.35.189 port 37304
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 236 MBytes 1.98 Gbits/sec
[ 5] 1.00-2.00 sec 233 MBytes 1.95 Gbits/sec
[ 5] 2.00-3.00 sec 235 MBytes 1.97 Gbits/sec
[ 5] 3.00-4.00 sec 231 MBytes 1.94 Gbits/sec
[ 5] 4.00-5.00 sec 243 MBytes 2.04 Gbits/sec
[ 5] 5.00-6.00 sec 238 MBytes 1.99 Gbits/sec
[ 5] 6.00-7.00 sec 230 MBytes 1.93 Gbits/sec
[ 5] 7.00-8.00 sec 232 MBytes 1.94 Gbits/sec
[ 5] 8.00-9.00 sec 272 MBytes 2.28 Gbits/sec
[ 5] 9.00-10.00 sec 249 MBytes 2.09 Gbits/sec
[ 5] 10.00-10.05 sec 10.4 MBytes 1.90 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.05 sec 2.35 GBytes 2.01 Gbits/sec receiver
[kolga@...-rx200-231 src]$ sudo ./iperf3 -c 172.20.35.191
Connecting to host 172.20.35.191, port 5201
[ 5] local 172.20.35.189 port 37304 connected to 172.20.35.191 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 249 MBytes 2.09 Gbits/sec 4 1.93 MBytes
[ 5] 1.00-2.00 sec 232 MBytes 1.95 Gbits/sec 1 1.69 MBytes
[ 5] 2.00-3.00 sec 234 MBytes 1.96 Gbits/sec 0 2.24 MBytes
[ 5] 3.00-4.00 sec 232 MBytes 1.95 Gbits/sec 0 2.66 MBytes
[ 5] 4.00-5.00 sec 242 MBytes 2.03 Gbits/sec 18 1.32 MBytes
[ 5] 5.00-6.00 sec 238 MBytes 1.99 Gbits/sec 1 1.63 MBytes
[ 5] 6.00-7.00 sec 230 MBytes 1.93 Gbits/sec 0 2.18 MBytes
[ 5] 7.00-8.00 sec 232 MBytes 1.95 Gbits/sec 8 1.38 MBytes
[ 5] 8.00-9.00 sec 272 MBytes 2.29 Gbits/sec 2 1.45 MBytes
[ 5] 9.00-10.00 sec 248 MBytes 2.08 Gbits/sec 2 1.66 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.35 GBytes 2.02 Gbits/sec 36 sender
[ 5] 0.00-10.05 sec 2.35 GBytes 2.01 Gbits/sec receiver
iperf Done.
Top output from -231
Tasks: 412 total, 1 running, 209 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.3 sy, 0.0 ni, 99.2 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 65938632 total, 63195184 free, 2191948 used, 551500 buff/cache
KiB Swap: 33030140 total, 33030140 free, 0 used. 63105388 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
162 root 20 0 0 0 0 I 3.3 0.0 0:00.51 kworker/12+
2530 root 20 0 0 0 0 I 1.7 0.0 0:00.39 kworker/u4+
2742 root 20 0 0 0 0 I 1.3 0.0 0:00.17 kworker/u4+
2394 root 20 0 0 0 0 I 1.0 0.0 0:00.16 kworker/17+
2774 kolga 20 0 158000 4640 3676 R 0.3 0.0 0:00.03 top
1 root 20 0 191980 6436 3900 S 0.0 0.0 0:26.21 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
5 root 20 0 0 0 0 I 0.0 0.0 0:00.02 kworker/0:+
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:+
8 root 20 0 0 0 0 I 0.0 0.0 0:00.00 kworker/u4+
9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_+
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
11 root 20 0 0 0 0 I 0.0 0.0 0:00.17 rcu_sched
12 root 20 0 0 0 0 I 0.0 0.0 0:00.00 rcu_bh
13 root rt 0 0 0 0 S 0.0 0.0 0:00.06 migration/0
Top output from -232
Tasks: 400 total, 2 running, 202 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 1.5 sy, 0.0 ni, 97.3 id, 0.0 wa, 0.0 hi, 1.0 si, 0.0 st
KiB Mem : 65932576 total, 63205228 free, 2180492 used, 546856 buff/cache
KiB Swap: 28901372 total, 28901372 free, 0 used. 63114220 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2467 kolga 20 0 43716 4144 3608 R 37.9 0.0 0:03.32 lt-iperf3
1199 root 20 0 0 0 0 D 2.7 0.0 0:00.18 kworker/23+
565 root 20 0 0 0 0 I 1.7 0.0 0:00.23 kworker/2:+
2360 root 20 0 0 0 0 I 1.3 0.0 0:00.16 kworker/u4+
153 root 20 0 0 0 0 I 0.7 0.0 0:00.11 kworker/23+
2273 root 20 0 0 0 0 I 0.7 0.0 0:01.13 kworker/u4+
691 root 20 0 0 0 0 S 0.3 0.0 0:00.20 xfsaild/dm+
2448 root 20 0 0 0 0 I 0.3 0.0 0:00.08 kworker/u4+
1 root 20 0 191820 6108 3804 S 0.0 0.0 0:26.12 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:+
9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_+
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
11 root 20 0 0 0 0 I 0.0 0.0 0:00.13 rcu_sched
12 root 20 0 0 0 0 I 0.0 0.0 0:00.00 rcu_bh
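If a per-CPU softirq breakdown or a profile during the run is more useful than
top, I can capture something like (sketch; mpstat is from sysstat):
# per-CPU utilization including %soft while the test runs
mpstat -P ALL 1
# kernel-side profile on the receiver
sudo perf top -g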
Here's a run where -231 is the server and -232 is the client: the "good"
direction.
[kolga@...-rx200-231 src]$ ./iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.20.35.191, port 35060
[ 5] local 172.20.35.189 port 5201 connected to 172.20.35.191 port 35062
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.81 GBytes 15.5 Gbits/sec
[ 5] 1.00-2.00 sec 1.90 GBytes 16.3 Gbits/sec
[ 5] 2.00-3.00 sec 1.25 GBytes 10.8 Gbits/sec
[ 5] 3.00-4.00 sec 826 MBytes 6.93 Gbits/sec
[ 5] 4.00-5.00 sec 819 MBytes 6.87 Gbits/sec
[ 5] 5.00-6.00 sec 1.47 GBytes 12.6 Gbits/sec
[ 5] 6.00-7.00 sec 1.79 GBytes 15.4 Gbits/sec
[ 5] 7.00-8.00 sec 1.14 GBytes 9.75 Gbits/sec
[ 5] 8.00-9.00 sec 1.81 GBytes 15.6 Gbits/sec
[ 5] 9.00-10.00 sec 1.77 GBytes 15.2 Gbits/sec
[ 5] 10.00-10.04 sec 78.2 MBytes 16.3 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.04 sec 14.6 GBytes 12.5 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
./iperf3 -c 172.20.35.189
Connecting to host 172.20.35.189, port 5201
[ 5] local 172.20.35.191 port 35062 connected to 172.20.35.189 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.89 GBytes 16.2 Gbits/sec 100 1.45 MBytes
[ 5] 1.00-2.00 sec 1.90 GBytes 16.4 Gbits/sec 86 638 KBytes
[ 5] 2.00-3.00 sec 1.21 GBytes 10.4 Gbits/sec 57 1.15 MBytes
[ 5] 3.00-4.00 sec 830 MBytes 6.96 Gbits/sec 7 1.54 MBytes
[ 5] 4.00-5.00 sec 815 MBytes 6.84 Gbits/sec 15 1.10 MBytes
[ 5] 5.00-6.00 sec 1.51 GBytes 12.9 Gbits/sec 362 690 KBytes
[ 5] 6.00-7.00 sec 1.72 GBytes 14.8 Gbits/sec 665 690 KBytes
[ 5] 7.00-8.00 sec 1.20 GBytes 10.4 Gbits/sec 639 778 KBytes
[ 5] 8.00-9.00 sec 1.81 GBytes 15.6 Gbits/sec 879 708 KBytes
[ 5] 9.00-10.00 sec 1.77 GBytes 15.2 Gbits/sec 865 577 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 14.6 GBytes 12.6 Gbits/sec 3675 sender
[ 5] 0.00-10.04 sec 14.6 GBytes 12.5 Gbits/sec receiver
iperf Done.
-232 top
Tasks: 391 total, 1 running, 202 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 1.4 sy, 0.0 ni, 97.2 id, 0.0 wa, 0.0 hi, 1.4 si, 0.0 st
KiB Mem : 65932576 total, 63203164 free, 2181920 used, 547492 buff/cache
KiB Swap: 28901372 total, 28901372 free, 0 used. 63111764 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2637 kolga 20 0 43716 4476 3944 S 23.6 0.0 0:02.09 lt-iperf3
2510 root 20 0 0 0 0 I 4.3 0.0 0:00.46 kworker/10+
2653 root 20 0 0 0 0 I 3.0 0.0 0:00.16 kworker/1:+
2630 root 20 0 0 0 0 I 1.7 0.0 0:00.14 kworker/u4+
565 root 20 0 0 0 0 I 1.0 0.0 0:00.32 kworker/2:+
2609 root 20 0 0 0 0 I 1.0 0.0 0:00.48 kworker/u4+
2273 root 20 0 0 0 0 I 0.7 0.0 0:01.67 kworker/u4+
25 root 20 0 0 0 0 S 0.3 0.0 0:00.01 ksoftirqd/2
74 root 20 0 0 0 0 S 0.3 0.0 0:00.03 ksoftirqd/+
2651 kolga 20 0 158000 4568 3640 R 0.3 0.0 0:00.03 top
1 root 20 0 191820 6108 3804 S 0.0 0.0 0:26.12 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:+
9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_+
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
-231 top
Tasks: 413 total, 3 running, 209 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 3.1 sy, 0.0 ni, 94.7 id, 0.0 wa, 0.0 hi, 2.1 si, 0.0 st
KiB Mem : 65938632 total, 63202896 free, 2183572 used, 552164 buff/cache
KiB Swap: 33030140 total, 33030140 free, 0 used. 63113436 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2859 kolga 20 0 43716 4108 3572 R 70.8 0.0 0:05.75 lt-iperf3
2875 root 20 0 0 0 0 I 7.3 0.0 0:00.36 kworker/9:+
185 root 20 0 0 0 0 I 6.6 0.0 0:00.47 kworker/9:+
68 root 20 0 0 0 0 S 5.0 0.0 0:00.25 ksoftirqd/9
2421 root 20 0 0 0 0 I 3.0 0.0 0:00.10 kworker/13+
2832 root 20 0 0 0 0 I 1.7 0.0 0:00.28 kworker/u4+
2742 root 20 0 0 0 0 I 1.0 0.0 0:00.63 kworker/u4+
2530 root 20 0 0 0 0 I 0.7 0.0 0:01.41 kworker/u4+
2877 kolga 20 0 158000 4712 3740 R 0.3 0.0 0:00.02 top
1 root 20 0 191980 6440 3900 S 0.0 0.0 0:26.23 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
5 root 20 0 0 0 0 I 0.0 0.0 0:00.02 kworker/0:+
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:+
8 root 20 0 0 0 0 I 0.0 0.0 0:00.00 kworker/u4+
9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_+
I can attach the ethtool -S output if needed (too long to include inline).
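If only the interesting counters are needed, a filtered view is easy to post
instead (sketch):
sudo ethtool -S enp4s0 | grep -Ei 'drop|discard|err|pause|out_of_buffer'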
From googling about CPU C-states and disabling them:
[kolga@...-rx200-232 ~]$ sudo cat /sys/module/intel_idle/parameters/max_cstate
9
Adding processor.max_cstate=0 and intel_idle.max_cstate=0 to the kernel boot
parameters made max_cstate stay at 0.
I re-did the experiments and it made no difference.
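If it helps to double-check the idle-state situation at runtime, something
like this shows what the CPUs are actually allowed to enter (sketch; cpupower
is from the kernel tools package):
# idle states exposed by the driver
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
# summary, including which states are enabled
sudo cpupower idle-info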
--
You are receiving this mail because:
You are the assignee for the bug.