Date:	Sun, 10 Jul 2016 15:48:56 -0500
From:	Tom Herbert <tom@...bertland.com>
To:	Brenden Blanco <bblanco@...mgrid.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	Alexei Starovoitov <alexei.starovoitov@...il.com>,
	"David S. Miller" <davem@...emloft.net>,
	Linux Kernel Network Developers <netdev@...r.kernel.org>,
	Martin KaFai Lau <kafai@...com>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	Ari Saha <as754m@....com>, Or Gerlitz <gerlitz.or@...il.com>,
	john fastabend <john.fastabend@...il.com>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Thomas Graf <tgraf@...g.ch>,
	Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH v6 12/12] net/mlx4_en: add prefetch in xdp rx path

On Fri, Jul 8, 2016 at 11:49 AM, Brenden Blanco <bblanco@...mgrid.com> wrote:
> On Fri, Jul 08, 2016 at 08:56:45AM +0200, Eric Dumazet wrote:
>> On Thu, 2016-07-07 at 21:16 -0700, Alexei Starovoitov wrote:
>>
>> > I've tried this style of prefetching in the past for the normal stack
>> > and it didn't help at all.
>>
>> This is very nice, but my experience showed opposite numbers.
>> So I guess you did not choose the proper prefetch strategy.
>>
>> prefetching in mlx4 gave me good results, once I made sure our compiler
>> was not moving the actual prefetch operations on x86_64 (i.e. forcing use
>> of asm volatile as on x86_32 instead of the builtin prefetch). You might
>> check whether your compiler does the proper thing, because this really
>> hurt me in the past.
>>
>> In my case, I was using a 40Gbit NIC, and prefetching 128 bytes instead of
>> 64 bytes allowed me to remove one stall in the GRO engine when using TCP
>> with TS (total header size: 66 bytes), or tunnels.
>>
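For reference, a minimal x86-64 sketch of what Eric describes (illustrative
userspace code only, not the mlx4 source; both function names are made up,
and it assumes GCC/Clang on x86-64):

/* __builtin_prefetch() is only a hint that the compiler is free to
 * reschedule or drop; an asm volatile prefetch is emitted exactly where
 * it is written.  Two prefetcht0, one per 64-byte cache line, cover the
 * 128 bytes of headers Eric mentions.
 */
static inline void prefetch_hdrs_asm(const void *p)
{
	asm volatile("prefetcht0 %0" : : "m" (*(const char *)p));
	asm volatile("prefetcht0 %0" : : "m" (*((const char *)p + 64)));
}

/* builtin variant for comparison */
static inline void prefetch_hdrs_builtin(const void *p)
{
	__builtin_prefetch(p);
	__builtin_prefetch((const char *)p + 64);
}

Comparing the generated code of the two (e.g. with objdump -d) shows
whether the compiler kept the builtin hints where they were written.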
>> The problem with prefetch is that it works well only for a given rate
>> (in pps) and given CPUs, as prefetch behavior varies among CPU flavors.
>>
>> Brenden chose to prefetch N+3, based on some experiments on some
>> hardware.
>>
>> Prefetch N+3 can actually slow things down if you receive a moderate
>> load, which is the case 99% of the time in typical workloads on modern
>> servers with multi-queue NICs.
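To make the N+3 idea concrete, a toy model of such an RX loop (invented
names, not the actual mlx4 completion path) would be roughly:

#define TOY_RING_SIZE	256
#define TOY_STRIDE	3		/* the "N+3" Brenden measured */

struct toy_rxq {
	void *data[TOY_RING_SIZE];	/* one packet buffer per slot */
	unsigned int head;
};

static void toy_process_one(void *pkt)
{
	(void)pkt;			/* run the XDP prog / hand to stack */
}

static void toy_rx_poll(struct toy_rxq *q, int budget)
{
	while (budget--) {
		unsigned int cur = q->head & (TOY_RING_SIZE - 1);
		unsigned int nxt = (q->head + TOY_STRIDE) & (TOY_RING_SIZE - 1);

		/* touch the buffer we will reach TOY_STRIDE iterations
		 * from now, so its headers are (hopefully) in L1 by then */
		__builtin_prefetch(q->data[nxt]);

		toy_process_one(q->data[cur]);
		q->head++;
	}
}

Whether a stride of 1 or 3 wins then depends on how much work is done per
packet, which is exactly the workload dependence Eric points out.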
> Thanks for the feedback Eric!
>
> This particular patch in the series is meant to be standalone exactly
> for this reason. I don't pretend to assert that this optimization will
> work for everybody, or even for a future version of me with different
> hardware. But, it passes my internal criteria for usefulness:
> 1. It provides a measurable gain in the experiments that I have at hand
> 2. The code is easy to review
> 3. The change does not negatively impact non-XDP users
>
> I would love to have a solution for all mlx4 driver users, but this
> patch set is focused on a different goal. So, without munging a
> different set of changes for the universal use case, and probably
> violating criteria #2 or #3, I went with what you see.
>
> In hopes of not derailing the whole patch series, what is an actionable
> next step for this patch #12?
> Ideas:
> Pick a safer N? (I saw improvements with N=1 as well)
> Drop this patch?
>
As Alexei mentioned, prefetch may be dependent on the workload. The XDP
program for an ILA router is a far shorter code path than packets going
through TCP, so it makes sense that we would want different prefetch
characteristics to optimize for each case. Can we make this a
configurable value for each RX queue?
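Roughly, something like the sketch below (a hypothetical shape, not an
existing mlx4 or ethtool interface, with invented names):

struct toy_rx_ring {
	unsigned int prefetch_stride;	/* per queue: 0 = off, 1, 3, ... */
	/* ... descriptors, buffers, NAPI context, etc. ... */
};

/* in the RX completion loop, the compile-time "+ 3" becomes: */
static inline unsigned int toy_prefetch_slot(const struct toy_rx_ring *ring,
					     unsigned int head,
					     unsigned int ring_mask)
{
	return (head + ring->prefetch_stride) & ring_mask;
}

How the value gets set (ethtool, sysfs, or something tied to the XDP
attach) is a separate question; the point is just that the stride lives
with the queue instead of being baked into the driver.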

> One thing I definitely don't want to do is go into the weeds trying to
> get a universal prefetch logic in order to merge the XDP framework, even
> though I agree the net result would benefit everybody.

Agreed, a salient point of XDP is that it's _not_ a generic mechanism.
The performance comparison for XDP should be against the HW solutions
that we're trying to replace with commodity HW, not against the full
general-purpose SW stack.
>>
>> This is why it was hard to upstream such changes, because they focus on
>> max throughput instead of low latencies.
