Message-ID: <20160708164939.GA30632@gmail.com>
Date: Fri, 8 Jul 2016 09:49:40 -0700
From: Brenden Blanco <bblanco@...mgrid.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Alexei Starovoitov <alexei.starovoitov@...il.com>,
davem@...emloft.net, netdev@...r.kernel.org,
Martin KaFai Lau <kafai@...com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Ari Saha <as754m@....com>, Or Gerlitz <gerlitz.or@...il.com>,
john.fastabend@...il.com, hannes@...essinduktion.org,
Thomas Graf <tgraf@...g.ch>, Tom Herbert <tom@...bertland.com>,
Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH v6 12/12] net/mlx4_en: add prefetch in xdp rx path
On Fri, Jul 08, 2016 at 08:56:45AM +0200, Eric Dumazet wrote:
> On Thu, 2016-07-07 at 21:16 -0700, Alexei Starovoitov wrote:
>
> > I've tried this style of prefetching in the past for the normal
> > stack and it didn't help at all.
>
> This is very nice, but my experience showed the opposite.
> So I guess you did not choose the proper prefetch strategy.
>
> Prefetching in mlx4 gave me good results, once I made sure our compiler
> was not moving the actual prefetch operations on x86_64 (i.e. by forcing
> the asm volatile form used on x86_32 instead of the builtin prefetch).
> You might check that your compiler does the right thing, because this
> really hurt me in the past.
>
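(For anyone following along, the difference Eric describes is roughly the
one below -- a sketch, not the actual arch/x86 kernel code.  The builtin
is only a hint the compiler is free to reschedule; the asm volatile form
is never dropped and, in Eric's experience, keeps the prefetch where it
was placed.)

/* Compiler-level hint: GCC decides where (and how) the prefetch
 * instruction ends up being scheduled. */
static inline void prefetch_builtin(const void *p)
{
	__builtin_prefetch(p, 0, 3);	/* read, high temporal locality */
}

/* Hand-written volatile asm (x86), as Eric describes for x86_32: the
 * PREFETCHT0 is never optimized away and in practice stays put. */
static inline void prefetch_asm(const void *p)
{
	asm volatile("prefetcht0 %0" : : "m" (*(const char *)p));
}
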
> In my case, I was using a 40Gbit NIC, and prefetching 128 bytes instead
> of 64 bytes allowed me to remove one stall in the GRO engine when using
> TCP with timestamps (total header size: 66 bytes), or tunnels.
>
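(Concretely, with 64-byte cache lines a 66-byte header straddles two
lines, so the idea amounts to something like this -- again only a
sketch:)

#include <linux/prefetch.h>

/* A TCP header with timestamps is 66 bytes, so the bytes GRO touches
 * span two 64-byte cache lines; hint both of them up front. */
static inline void prefetch_two_lines(const void *data)
{
	prefetch(data);				/* bytes  0..63  */
	prefetch((const u8 *)data + 64);	/* bytes 64..127 */
}
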
> The problem with prefetch is that it works well only for a given rate
> (in pps) and given CPUs, since prefetch behavior varies across CPU
> flavors.
>
> Brenden chose to prefetch N+3 based on some experiments, on some
> hardware.
>
> Prefetch N+3 can actually slow things down under moderate load, which
> is the case 99% of the time in typical workloads on modern servers
> with multi-queue NICs.

Thanks for the feedback, Eric!

This particular patch in the series is meant to stand alone for exactly
this reason. I don't claim that this optimization will work for
everybody, or even for a future version of me with different hardware.
But it passes my internal criteria for usefulness:
1. It provides a measurable gain in the experiments that I have at hand
2. The code is easy to review
3. The change does not negatively impact non-XDP users

I would love to have a solution for all mlx4 driver users, but this
patch set is focused on a different goal. So, rather than munging
together a different set of changes for the universal use case, and
probably violating criterion #2 or #3 in the process, I went with what
you see.
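
For anyone skimming the thread, the shape of the change is roughly the
following (a simplified sketch with made-up structure and helper names,
not the actual mlx4_en_process_rx_cq() code):

#include <linux/prefetch.h>

#define PREFETCH_DIST	3	/* how far ahead in the ring we reach */

/* Toy stand-in for the RX ring: one packet buffer pointer per slot. */
struct toy_rx_ring {
	void	**buf;		/* packet data per descriptor    */
	u32	size_mask;	/* ring size - 1 (power of two)  */
};

static void toy_rx_poll(struct toy_rx_ring *ring, u32 index, int budget)
{
	while (budget--) {
		/* Warm the frame we will look at a few iterations from
		 * now, so it is (hopefully) in cache by the time the
		 * XDP program reads it. */
		prefetch(ring->buf[(index + PREFETCH_DIST) & ring->size_mask]);

		/* ... run the XDP program / build the skb for 'index' ... */

		index = (index + 1) & ring->size_mask;
	}
}

Picking a smaller N, as in the ideas below, would just mean shrinking
PREFETCH_DIST here.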

In hopes of not derailing the whole patch series, what is an actionable
next step for this patch #12?

Ideas:
 - Pick a safer N? (I saw improvements with N=1 as well)
 - Drop this patch?

One thing I definitely don't want to do is go into the weeds trying to
get universal prefetch logic in place just to merge the XDP framework,
even though I agree the net result would benefit everybody.
>
> This is why it was hard to upstream such changes, because they focus on
> max throughput instead of low latencies.