Message-ID: <1271402283.16881.3791.camel@edumazet-laptop>
Date:	Fri, 16 Apr 2010 09:18:03 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	therbert@...gle.com, netdev@...r.kernel.org
Subject: Re: [PATCH v5] rfs: Receive Flow Steering

On Friday, 16 April 2010 at 08:56 +0200, Eric Dumazet wrote:

> I read the patch and found no error.
> 
> I booted a test machine and performed some tests.
> 
> I am a bit worried about a tbench regression I am looking at right now.
> 
> With RFS disabled, tbench 16 -> 4408.63 MB/sec
> 
> 
> # grep . /sys/class/net/lo/queues/rx-0/*
> /sys/class/net/lo/queues/rx-0/rps_cpus:00000000
> /sys/class/net/lo/queues/rx-0/rps_flow_cnt:8192
> # cat /proc/sys/net/core/rps_sock_flow_entries
> 8192
> 
> 
> echo ffff >/sys/class/net/lo/queues/rx-0/rps_cpus
> 
> tbench 16 -> 2336.32 MB/sec
> 
> 
> -----------------------------------------------------------------------------------------------------------------------------------------------------
>    PerfTop:   14561 irqs/sec  kernel:86.3% [1000Hz cycles],  (all, 16 CPUs)
> -----------------------------------------------------------------------------------------------------------------------------------------------------
> 
>              samples  pcnt function                       DSO
>              _______ _____ ______________________________ __________________________________________________________
> 
>              2664.00  5.1% copy_user_generic_string       /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>              2323.00  4.4% acpi_os_read_port              /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>              1641.00  3.1% _raw_spin_lock_irqsave         /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>              1260.00  2.4% schedule                       /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>              1159.00  2.2% _raw_spin_lock                 /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>              1051.00  2.0% tcp_ack                        /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               991.00  1.9% tcp_sendmsg                    /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               922.00  1.8% tcp_recvmsg                    /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               821.00  1.6% child_run                      /usr/bin/tbench                                           
>               766.00  1.5% all_string_sub                 /usr/bin/tbench                                           
>               630.00  1.2% __switch_to                    /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               608.00  1.2% __GI_strchr                    /lib/tls/libc-2.3.4.so                                    
>               606.00  1.2% ipt_do_table                   /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               600.00  1.1% __GI_strstr                    /lib/tls/libc-2.3.4.so                                    
>               556.00  1.1% __netif_receive_skb            /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               504.00  1.0% tcp_transmit_skb               /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               502.00  1.0% tick_nohz_stop_sched_tick      /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               481.00  0.9% _raw_spin_unlock_irqrestore    /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               473.00  0.9% next_token                     /usr/bin/tbench                                           
>               449.00  0.9% ip_rcv                         /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               423.00  0.8% call_function_single_interrupt /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               422.00  0.8% ia32_sysenter_target           /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               420.00  0.8% compat_sys_socketcall          /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               401.00  0.8% mod_timer                      /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               400.00  0.8% process_backlog                /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               399.00  0.8% ip_queue_xmit                  /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               387.00  0.7% select_task_rq_fair            /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               377.00  0.7% _raw_spin_lock_bh              /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
>               360.00  0.7% tcp_v4_rcv                     /lib/modules/2.6.34-rc3-03375-ga4fbf84-dirty/build/vmlinux
> 
> But if RFS is on, why does activating rps_cpus change tbench?
> 

Hmm, I wonder if it's not an artifact of net-next-2.6 being a bit old
(versus linux-2.6). I know the scheduler guys did some tweaks.

Because apparently some CPUs are idle for part of the time (30%?).

Or it could be a new bug in CPU accounting, reporting idle time while the
CPUs are actually busy...
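
One way to tell those two apart would be to watch per-CPU utilization
while tbench runs. This is only a sketch, assuming sysstat is available
on this box; I have not run it for the numbers below:

# mpstat -P ALL 1
# awk '/^cpu[0-9]/ {print $1, $5}' /proc/stat    # cpuN + idle ticks

If a few CPUs sit fully idle while the others are pegged, it is a
steering/scheduling imbalance; if the idle time is spread evenly while
throughput drops, the accounting itself looks suspect. The aggregate
view from vmstat: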

# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
16  0      0 5670264  13280  63392    0    0     2     1 1512  227 12 47 41  0
18  0      0 5669396  13280  63392    0    0     0     0 657952 1606102 14 58 28  0
17  0      0 5668776  13288  63392    0    0     0    12 656701 1606369 14 58 28  0
18  0      0 5669644  13288  63392    0    0     0     0 657636 1603960 15 57 28  0
17  0      0 5670900  13288  63392    0    0     0     0 666425 1584847 15 56 29  0
15  0      0 5669164  13288  63392    0    0     0     0 682578 1472616 14 56 30  0
16  0      0 5669412  13288  63392    0    0     0     0 695767 1506302 14 54 32  0
14  0      0 5668916  13296  63396    0    0     4   148 685286 1482897 14 56 30  0
17  0      0 5669784  13296  63396    0    0     0     0 683910 1477994 14 56 30  0
18  0      0 5670032  13296  63396    0    0     0     0 692023 1497195 14 55 31  0
16  0      0 5669040  13296  63396    0    0     0     0 677477 1468157 14 56 30  0
16  0      0 5668916  13312  63396    0    0     0    32 489358 1048553 14 57 30  0
18  0      0 5667924  13320  63396    0    0     0    12 424787 897145 15 55 29  0

RFS off:

# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
24  0      0 5669624  13632  63476    0    0     2     1  261   82 12 48 40  0
26  0      0 5669492  13632  63476    0    0     0     0 4223 1740651 21 71  7  0
23  0      0 5669864  13640  63476    0    0     0    12 4205 1731882 21 71  8  0
23  0      0 5670484  13640  63476    0    0     0     0 4176 1733448 21 71  8  0
24  0      0 5670588  13640  63476    0    0     0     0 4176 1733845 21 72  7  0
21  0      0 5671084  13640  63476    0    0     0     0 4200 1734990 20 73  7  0
23  0      0 5671580  13640  63476    0    0     0     0 4168 1735100 21 71  8  0
23  0      0 5671704  13640  63480    0    0     4   132 4221 1733428 21 72  7  0
22  0      0 5671952  13640  63480    0    0     0     0 4190 1730370 21 72  8  0
20  0      0 5672292  13640  63480    0    0     0     0 4212 1732084 22 70  8  0
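
For reference, the only difference between the two runs is the rps_cpus
mask on lo; the setup was roughly (values as shown above):

echo 8192 > /proc/sys/net/core/rps_sock_flow_entries    # global sock flow table
echo 8192 > /sys/class/net/lo/queues/rx-0/rps_flow_cnt  # per-queue flow count
echo ffff > /sys/class/net/lo/queues/rx-0/rps_cpus      # all 16 CPUs; left at 00000000 for the "RFS off" run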



