Date:   Sun, 9 Oct 2016 19:48:33 +0000
From:   "Chopra, Manish" <Manish.Chopra@...ium.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "maorg@...lanox.com" <maorg@...lanox.com>,
        "tom@...bertland.com" <tom@...bertland.com>
Subject: RE: Accelerated receive flow steering (aRFS) for UDP

> -----Original Message-----
> From: Eric Dumazet [mailto:eric.dumazet@...il.com]
> Sent: Sunday, October 09, 2016 10:45 PM
> To: Chopra, Manish <Manish.Chopra@...ium.com>
> Cc: netdev@...r.kernel.org; maorg@...lanox.com; tom@...bertland.com
> Subject: Re: Accelerated receive flow steering (aRFS) for UDP
> 
> On Sat, 2016-10-08 at 12:25 +0000, Chopra, Manish wrote:
> > > -----Original Message-----
> > > From: Eric Dumazet [mailto:eric.dumazet@...il.com]
> > > Sent: Saturday, October 08, 2016 5:08 AM
> > > To: Chopra, Manish <Manish.Chopra@...ium.com>
> > > Cc: netdev@...r.kernel.org; maorg@...lanox.com; tom@...bertland.com
> > > Subject: Re: Accelerated receive flow steering (aRFS) for UDP
> > >
> > > On Fri, 2016-10-07 at 22:55 +0000, Chopra, Manish wrote:
> > > > Hello Folks,
> > > >
> > > > I am experimenting with aRFS on our NIC devices; for that I have
> > > > kernel 4.8.x installed with the config below.
> > > >
> > > > CONFIG_RPS=y
> > > > CONFIG_RFS_ACCEL=y
> > > >
> > > > # cat /proc/cpuinfo  | grep processor
> > > > processor       : 0
> > > > processor       : 1
> > > > processor       : 2
> > > > processor       : 3
> > > > processor       : 4
> > > > processor       : 5
> > > > processor       : 6
> > > > processor       : 7
> > > > processor       : 8
> > > > processor       : 9
> > > > processor       : 10
> > > > processor       : 11
> > > > processor       : 12
> > > > processor       : 13
> > > > processor       : 14
> > > > processor       : 15
> > > >
> > > > I configured rps_sock_flow_entries and our NIC rx queues with the
> > > > values below:
> > > >
> > > > echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
> > > > echo 4096 > /sys/class/net/p4p1/queues/rx-0/rps_flow_cnt
> > > > echo 4096 > /sys/class/net/p4p1/queues/rx-1/rps_flow_cnt
> > > > echo 4096 > /sys/class/net/p4p1/queues/rx-2/rps_flow_cnt
> > > > echo 4096 > /sys/class/net/p4p1/queues/rx-3/rps_flow_cnt
> > > > echo 4096 > /sys/class/net/p4p1/queues/rx-4/rps_flow_cnt
> > > > echo 4096 > /sys/class/net/p4p1/queues/rx-5/rps_flow_cnt
> > > > echo 4096 > /sys/class/net/p4p1/queues/rx-6/rps_flow_cnt
> > > > echo 4096 > /sys/class/net/p4p1/queues/rx-7/rps_flow_cnt
> > > >
> > > > echo ffff > /sys/class/net/p4p1/queues/rx-0/rps_cpus
> > > > echo ffff > /sys/class/net/p4p1/queues/rx-1/rps_cpus
> > > > echo ffff > /sys/class/net/p4p1/queues/rx-2/rps_cpus
> > > > echo ffff > /sys/class/net/p4p1/queues/rx-3/rps_cpus
> > > > echo ffff > /sys/class/net/p4p1/queues/rx-4/rps_cpus
> > > > echo ffff > /sys/class/net/p4p1/queues/rx-5/rps_cpus
> > > > echo ffff > /sys/class/net/p4p1/queues/rx-6/rps_cpus
> > > > echo ffff > /sys/class/net/p4p1/queues/rx-7/rps_cpus
> > > >
> > > > Below is the IRQ affinity configuration for the NIC IRQs used.
> > > >
> > > > # cat /proc/irq/67/smp_affinity_list
> > > > 8
> > > > # cat /proc/irq/68/smp_affinity_list
> > > > 9
> > > > # cat /proc/irq/69/smp_affinity_list
> > > > 10
> > > > # cat /proc/irq/70/smp_affinity_list
> > > > 11
> > > > # cat /proc/irq/71/smp_affinity_list
> > > > 12
> > > > # cat /proc/irq/72/smp_affinity_list
> > > > 13
> > > > # cat /proc/irq/73/smp_affinity_list
> > > > 14
> > > > # cat /proc/irq/74/smp_affinity_list
> > > > 15
> > > >
> > > > The driver has the required NETIF_F_NTUPLE feature set and
> > > > ndo_rx_flow_steer() registered, and I am running multiple UDP
> > > > connection streams using netperf to the host where I am
> > > > experimenting with aRFS.
> > > >
> > > > # netperf -V
> > > > Netperf version 2.7.0
> > > >
> > > > netperf -H 192.168.200.40 -t UDP_STREAM -l 150 -T 8,8 -- -m 1470 -P 5001,48512 &
> > > > netperf -H 192.168.200.40 -t UDP_STREAM -l 150 -T 9,9 -- -m 1470 -P 5001,37990 &
> > > > netperf -H 192.168.200.40 -t UDP_STREAM -l 150 -T 10,10 -- -m 1470 -P 5001,40302 &
> > > > netperf -H 192.168.200.40 -t UDP_STREAM -l 150 -T 11,11 -- -m 1470 -P 5001,39071 &
> > > > netperf -H 192.168.200.40 -t UDP_STREAM -l 150 -T 12,12 -- -m 1470 -P 5001,58994 &
> > > > netperf -H 192.168.200.40 -t UDP_STREAM -l 150 -T 13,13 -- -m 1470 -P 5001,59884 &
> > > > netperf -H 192.168.200.40 -t UDP_STREAM -l 150 -T 14,14 -- -m 1470 -P 5001,40282 &
> > > > netperf -H 192.168.200.40 -t UDP_STREAM -l 150 -T 15,15 -- -m 1470 -P 5001,56042 &
> > > >
> > > > I see that our registered ndo_rx_flow_steer() callback NEVER gets
> > > > invoked for UDP packets; with TCP_STREAM I do see it invoked.
> > > > While running UDP_STREAM it does get invoked for some TCP packets,
> > > > since netperf also maintains TCP control connections while running
> > > > UDP_STREAM.
> > > >
> > > > My initial investigation suggests that while running UDP_STREAM with
> > > > netperf, rps_sock_flow_table doesn't get updated, as packets are never
> > > > received through the inet_recvmsg() path where the table is updated
> > > > via sock_rps_record_flow(). Might that be the reason the NIC's flow
> > > > steering handler is never invoked?
> > > >
> > > > Please note that when I run a UDP stream using "iperf", I do see our
> > > > registered flow steering callback get invoked for UDP packets.
> > > > I am not sure if I am missing something in the configuration, or
> > > > something else I am unaware of.
> > > >
> > > > I would appreciate any help with this.
> > >
> > > Make sure you use connected UDP flows
> > >
> > >
> > > netperf -t UDP_STREAM ... -- -N -n
> > >
> > > Otherwise, one UDP socket can be involved in millions of 4-tuples (aka
> > > flows)
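> > > 
> > > For illustration only, a minimal sketch of a connected UDP sender (this
> > > is not netperf source; the address and port are made up). connect()
> > > pins the remote address, so the socket maps to exactly one 4-tuple:
> > > 
> > > #include <arpa/inet.h>
> > > #include <string.h>
> > > #include <sys/socket.h>
> > > #include <unistd.h>
> > > 
> > > int main(void)
> > > {
> > > 	int fd = socket(AF_INET, SOCK_DGRAM, 0);
> > > 	struct sockaddr_in dst;
> > > 
> > > 	memset(&dst, 0, sizeof(dst));
> > > 	dst.sin_family = AF_INET;
> > > 	dst.sin_port = htons(5001);
> > > 	inet_pton(AF_INET, "192.168.200.40", &dst.sin_addr);
> > > 
> > > 	/* one fixed flow: the stack can hash and steer it consistently */
> > > 	connect(fd, (struct sockaddr *)&dst, sizeof(dst));
> > > 	send(fd, "x", 1, 0);	/* send(), not sendto() */
> > > 	close(fd);
> > > 	return 0;
> > > }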
> > >
> > >
> >
> > Hi Eric, I tried that, but it doesn't help with this issue. I still don't
> > see our registered flow steering handler (ndo_rx_flow_steer()) getting
> > invoked for UDP packets [ip->protocol = IPPROTO_UDP]. It only gets
> > invoked for TCP packets [ip->protocol = IPPROTO_TCP].
> >
> > netperf -H $1 -t UDP_STREAM -l 150 -T 8,8 -- -N -m 1470 -P 5001,48512 &
> > netperf -H $1 -t UDP_STREAM -l 150 -T 9,9 -- -N -m 1470 -P 5001,37990 &
> > netperf -H $1 -t UDP_STREAM -l 150 -T 10,10 -- -N -m 1470 -P 5001,40302 &
> > netperf -H $1 -t UDP_STREAM -l 150 -T 11,11 -- -N -m 1470 -P 5001,39071 &
> > netperf -H $1 -t UDP_STREAM -l 150 -T 12,12 -- -N -m 1470 -P 5001,58994 &
> > netperf -H $1 -t UDP_STREAM -l 150 -T 13,13 -- -N -m 1470 -P 5001,59884 &
> > netperf -H $1 -t UDP_STREAM -l 150 -T 14,14 -- -N -m 1470 -P 5001,40282 &
> > netperf -H $1 -t UDP_STREAM -l 150 -T 15,15 -- -N -m 1470 -P 5001,56042 &
> >
> > Thanks !!
> 
> Please carefully read what I wrote, and carefully read the netperf
> documentation.
> 
> When you add the -n option to netperf, it _will_ use connected UDP sockets
> and your problem will vanish.
> 
> You added '-N' only, which is useless for UDP_STREAM.
> 
> It might help for UDP_RR, but not UDP_STREAM.
> 
> 
> 

Hi Eric, I used "-n" together with "-N", but the problem still doesn't go away.

This is what I have done:

Started "netserver" on local/test setup

#netserver
Starting netserver with host 'IN(6)ADDR_ANY' port '12865' and family AF_UNSPEC

It starts listening on port "12865"

From the remote setup, I started multiple netperf instances, using different ports for the data sockets (specified via "-P") and with the "-N" and "-n" options as well:
netperf -H 192.168.200.40 -l 150 -t UDP_STREAM -T 8,8 -- -N -n -m 1400 -P 6660,5550 &
netperf -H 192.168.200.40 -l 150 -t UDP_STREAM -T 9,9 -- -N -n -m 1400 -P 9990,9880 &
netperf -H 192.168.200.40 -l 150 -t UDP_STREAM -T 10,10 -- -N -n -m 1400 -P 4455,4400 &
netperf -H 192.168.200.40 -l 150 -t UDP_STREAM -T 11,11 -- -N -n -m 1400 -P 3300,7800 &
netperf -H 192.168.200.40 -l 150 -t UDP_STREAM -T 12,12 -- -N -n -m 1400 -P 50512,44444 &
netperf -H 192.168.200.40 -l 150 -t UDP_STREAM -T 13,13 -- -N -n -m 1400 -P 10512,45672 &
netperf -H 192.168.200.40 -l 150 -t UDP_STREAM -T 14,14 -- -N -n -m 1400 -P 8888,56721 &
netperf -H 192.168.200.40 -l 150 -t UDP_STREAM -T 15,15 -- -N -n -m 1400 -P 9300,8899 &

On the local/test receiving setup, when I dump the skb's IP header protocol field in the .ndo_rx_flow_steer() handler, it is still always IPPROTO_TCP, with destination port 12865. The handler never receives an skb whose IP header protocol field is set to IPPROTO_UDP.
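
(For reference, the dump is just a print at the top of our handler; a minimal
sketch, with our driver's function name replaced by a placeholder:)

static int xxx_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
			     u16 rxq_index, u32 flow_id)
{
	const struct iphdr *iph = ip_hdr(skb);	/* IPv4-only test traffic here */

	pr_info("aRFS request: proto=%u rxq=%u flow_id=%u\n",
		iph->protocol, rxq_index, flow_id);
	/* ... actual steering logic ... */
	return 0;
}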

As suspected, I believe that in the receive path the packets never match any entry in the global flow table in get_rps_cpu(), possibly because the packets are not received through the inet_recvmsg() path which updates that table:

                /* First check into global flow table if there is a match */
                ident = sock_flow_table->ents[hash & sock_flow_table->mask];
                if ((ident ^ hash) & ~rps_cpu_mask)
                        goto try_rps;

Hence, it never calls set_rps_cpu(), which internally is supposed to call .ndo_rx_flow_steer() for the skbs whose flows are to be steered.
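
For reference, the recording side I mean looks roughly like this in 4.8
(include/net/sock.h): inet_recvmsg() calls sock_rps_record_flow(sk), which
records sk->sk_rxhash into the same table:

static inline void sock_rps_record_flow_hash(__u32 hash)
{
#ifdef CONFIG_RPS
	struct rps_sock_flow_table *sock_flow_table;

	rcu_read_lock();
	sock_flow_table = rcu_dereference(rps_sock_flow_table);
	/* writes ents[hash & table->mask], the entry get_rps_cpu() checks above */
	rps_record_sock_flow(sock_flow_table, hash);
	rcu_read_unlock();
#endif
}

static inline void sock_rps_record_flow(const struct sock *sk)
{
#ifdef CONFIG_RPS
	sock_rps_record_flow_hash(sk->sk_rxhash);
#endif
}

(And if I read udp_queue_rcv_skb() correctly, sock_rps_save_rxhash() is only
called when inet_daddr is set on the socket, i.e. for connected UDP receivers,
so an unconnected netserver data socket would never populate sk_rxhash.)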

On the other side, when I use "iperf" to send a UDP stream, the packets are (I believe) received through the inet_recvmsg() path, and I do see flows getting steered for UDP packets [i.e., skbs whose IP header protocol is set to IPPROTO_UDP arriving in .ndo_rx_flow_steer()]:

iperf -s -u
iperf -u -c 192.168.200.40 -t 3000 -i 10 -P 8
 
Thanks,
Manish




