Date:	Tue, 1 Sep 2015 18:49:23 -0700
From:	Alexander Duyck <alexander.duyck@...il.com>
To:	Alexander Duyck <alexander.h.duyck@...hat.com>,
	netdev@...r.kernel.org, intel-wired-lan@...ts.osuosl.org
Subject: Re: [Intel-wired-lan] [PATCH] ixgbe: Limit lowest interrupt rate for
 adaptive interrupt moderation to 12K

On 07/30/2015 03:19 PM, Alexander Duyck wrote:
> This patch updates the lowest limit for adaptive interrupt
> moderation to roughly 12K interrupts per second.
>
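For reference, the change essentially boils down to raising the bulk-latency ITR floor in ixgbe.h; a minimal sketch follows, where the 336 value is inferred from the 84us figure below assuming the driver's usual 0.25us ITR granularity (100K -> 40, 20K -> 200), not copied from the literal diff:

	/* Sketch only: ITR register values, assuming 0.25us per unit.
	 * 12K interrupts/sec -> ~84us interval -> 84 / 0.25 = 336.
	 */
	#define IXGBE_100K_ITR	40	/* 100K ints/s, 10us interval */
	#define IXGBE_20K_ITR	200	/*  20K ints/s, 50us interval */
	#define IXGBE_12K_ITR	336	/* new floor: ~12K ints/s, 84us */
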
> I arrived at 12K as the desired interrupt rate by testing with UDP
> flows.  Specifically, I ran a simple netperf UDP_STREAM test at varying
> message sizes.  As the message size increased, performance fell steadily
> behind until we could only receive at ~4Gb/s with a message size of
> 65507.  A bit of digging showed that we were dropping packets for the
> socket in the network stack, and that I could fix it either by
> increasing the interrupt rate or by increasing rmem_default/rmem_max.
> In other words, once interrupt coalescing caused more data to arrive
> per interrupt than the socket buffer could hold, we started losing
> packets and performance dropped.  So I reached 12K based on the
> following math.
>
> rmem_default = 212992
> skb->truesize = 2994
> 212992 / 2994 = 71.14 packets to fill the buffer
>
> packet rate at a 1514-byte packet size is 812744pps
> 71.14 / 812744 = 87.5us to fill socket buffer
>
> From there it was just a matter of choosing the interrupt rate and
> providing a bit of wiggle room, which is why I decided to go with 12K
> interrupts per second, as that uses a value of 84us.
>
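As a sanity check, the arithmetic above can be reproduced with a few lines of C; note the 1538-byte wire size (1514 bytes plus FCS, preamble, and inter-frame gap) is my assumption about how the 812744pps figure was derived:

	#include <stdio.h>

	int main(void)
	{
		/* rmem_default / skb->truesize: packets needed to fill the
		 * default socket receive buffer. */
		double pkts = 212992.0 / 2994.0;	/* ~71.14 packets */

		/* 10Gb/s line rate for 1514-byte frames: each frame occupies
		 * 1514 + 4 (FCS) + 8 (preamble) + 12 (IFG) = 1538 wire bytes. */
		double pps = 10.0e9 / (1538 * 8);	/* ~812744 pps */

		printf("packets to fill buffer: %.2f\n", pkts);
		printf("time to fill buffer:    %.1f us\n", pkts / pps * 1e6);
		printf("interval at 12K int/s:  %.1f us\n", 1e6 / 12000.0);
		return 0;
	}

The ~83.3us interval at 12K (84us with ITR granularity) leaves a few microseconds of headroom under the ~87.5us buffer fill time.
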
> The data below is based on VM to VM over a direct assigned ixgbe interface.
> The test run was:
> 	netperf -H <ip> -t UDP_STREAM
>
> Socket  Message  Elapsed      Messages                   CPU      Service
> Size    Size     Time         Okay Errors   Throughput   Util     Demand
> bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB
> Before:
> 212992   65507   60.00     1100662      0     9613.4     10.89    0.557
> 212992           60.00      473474            4135.4     11.27    0.576
>
> After:
> 212992   65507   60.00     1100413      0     9611.2     10.73    0.549
> 212992           60.00      974132            8508.3     11.69    0.598
>
> On bare metal the results are similar but less dramatic, with
> throughput increasing from about 8.5Gb/s to 9.5Gb/s.
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@...hat.com>

Has there been any update on this patch?  I submitted it just over a 
month ago and it hasn't received any feedback.  I was hoping it could 
be applied before the net-next merge window closes.

Thanks.

- Alex