Message-ID: <1323177878.2448.18.camel@edumazet-HP-Compaq-6005-Pro-SFF-PC>
Date:	Tue, 06 Dec 2011 14:24:38 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Ian Campbell <Ian.Campbell@...rix.com>
Cc:	David Miller <davem@...emloft.net>,
	Jesse Brandeburg <jesse.brandeburg@...el.com>,
	netdev@...r.kernel.org
Subject: Re: [PATCH 0/4] skb paged fragment destructors

On Tuesday, 06 December 2011 at 11:57 +0000, Ian Campbell wrote:
> On Wed, 2011-11-09 at 15:01 +0000, Ian Campbell wrote:
> >       * split linear data allocation and shinfo allocation into two. I
> >         suspect this will have its own performance implications? On the
> >         positive side skb_shared_info could come from its own fixed size
> >         pool/cache which might have some benefits
> 
> I played with this to see how it would look. Illustrative patch below. 
> 
> I figure that lots of small frames is the interesting workload for a
> change such as this but I don't know if iperf is necessarily the best
> benchmark for measuring that.
> Before changing things I got:
>         iperf -c qarun -m -t 60 -u -b 10000M -l 64
>         ------------------------------------------------------------
>         Client connecting to qarun, UDP port 5001
>         Sending 64 byte datagrams
>         UDP buffer size:   224 KByte (default)
>         ------------------------------------------------------------
>         [  3] local 10.80.225.63 port 45857 connected with 10.80.224.22 port 5001
>         [ ID] Interval       Transfer     Bandwidth
>         [  3]  0.0-60.0 sec    844 MBytes    118 Mbits/sec
>         [  3] Sent 13820376 datagrams
>         [  3] Server Report:
>         [  3]  0.0-60.0 sec    844 MBytes    118 Mbits/sec  0.005 ms    0/13820375 (0%)
>         [  3]  0.0-60.0 sec  1 datagrams received out-of-order
> whereas with the patch:
>         # iperf -c qarun -m -t 60 -u -b 10000M -l 64
>         ------------------------------------------------------------
>         Client connecting to qarun, UDP port 5001
>         Sending 64 byte datagrams
>         UDP buffer size:   224 KByte (default)
>         ------------------------------------------------------------
>         [  3] local 10.80.225.63 port 42504 connected with 10.80.224.22 port 5001
>         [ ID] Interval       Transfer     Bandwidth
>         [  3]  0.0-60.0 sec    833 MBytes    116 Mbits/sec
>         [  3] Sent 13645857 datagrams
>         [  3] Server Report:
>         [  3]  0.0-60.0 sec    833 MBytes    116 Mbits/sec  0.005 ms    0/13645856 (0%)
>         [  3]  0.0-60.0 sec  1 datagrams received out-of-order
> 
> With 1200 byte datagrams I get basically identical throughput.
> 
> (nb: none of the skb destructor stuff was present in either case)

Sorry, but the real problem is that if the skb producer and consumer are not
on the same CPU, each skb will now hit the SLUB slowpath three times instead
of two.
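
(A minimal sketch of such a split, for illustration only: this is an
assumption, not the illustrative patch from the thread, and shinfo_cache is
a hypothetical dedicated cache. Freeing the skb mirrors the same three slab
operations, which is where the extra slowpath hit comes from when the free
happens on another CPU.)

#include <linux/skbuff.h>
#include <linux/slab.h>

/* Hypothetical dedicated cache for struct skb_shared_info. */
static struct kmem_cache *shinfo_cache;

static struct sk_buff *alloc_skb_split(unsigned int size, gfp_t gfp_mask)
{
	struct sk_buff *skb;
	struct skb_shared_info *shinfo;
	u8 *data;

	skb = kmem_cache_alloc(skbuff_head_cache, gfp_mask);	/* slab op 1 */
	if (!skb)
		return NULL;

	data = kmalloc(size, gfp_mask);				/* slab op 2 */
	if (!data)
		goto free_skb;

	shinfo = kmem_cache_zalloc(shinfo_cache, gfp_mask);	/* slab op 3 */
	if (!shinfo)
		goto free_data;

	/* ... point skb->head/skb->data at 'data' and attach 'shinfo' ... */
	return skb;

free_data:
	kfree(data);
free_skb:
	kmem_cache_free(skbuff_head_cache, skb);
	return NULL;
}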

Some workloads are: one CPU fully handling IRQs from the device, dispatching
skbs to consumers on other CPUs.

Plus, skb->truesize is wrong after your patch.
I'm not sure cloning is correct either...

Anyway, do we _really_ need 16 frags per skb? I don't know...

This causes problems when/if an skb must be linearized and we hit
PAGE_ALLOC_COSTLY_ORDER.
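
(For example, linearizing an skb whose 16 frags each hold a full 4096-byte
page means allocating roughly 64KB of contiguous memory for the new head,
an order-4 request on x86, which is above PAGE_ALLOC_COSTLY_ORDER (3,
i.e. 32KB).)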

Alternatively, we could use order-1 or order-2 pages on x86 to get
8192/16384-byte frags, falling back to order-0 pages in case of allocation
failures.
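
(A minimal sketch of that fallback idea, for illustration only;
skb_frag_alloc_page is a hypothetical helper, not an existing kernel
function.)

#include <linux/gfp.h>
#include <linux/mm_types.h>

/* Hypothetical helper: try order-2 (16KB) then order-1 (8KB) frags,
 * and fall back to a plain order-0 page if higher orders fail. */
static struct page *skb_frag_alloc_page(gfp_t gfp_mask, unsigned int *order)
{
	struct page *page;
	unsigned int o;

	for (o = 2; o > 0; o--) {
		/* Be opportunistic: no warnings, no hard retries. */
		page = alloc_pages(gfp_mask | __GFP_COMP | __GFP_NOWARN |
				   __GFP_NORETRY, o);
		if (page) {
			*order = o;
			return page;
		}
	}

	*order = 0;
	return alloc_pages(gfp_mask, 0);
}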



