Message-ID: <20210201110634.70be00ba@carbon>
Date: Mon, 1 Feb 2021 11:06:34 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org, davem@...emloft.net,
kuba@...nel.org, ast@...nel.org, daniel@...earbox.net,
toshiaki.makita1@...il.com, lorenzo.bianconi@...hat.com,
toke@...hat.com, Stefano Brivio <sbrivio@...hat.com>,
brouer@...hat.com
Subject: Re: [PATCH v2 bpf-next] net: veth: alloc skb in bulk for
ndo_xdp_xmit
On Fri, 29 Jan 2021 22:49:27 +0100
Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> On Jan 29, Lorenzo Bianconi wrote:
> > On Jan 29, Jesper Dangaard Brouer wrote:
> > > On Fri, 29 Jan 2021 17:02:16 +0100
> > > Jesper Dangaard Brouer <brouer@...hat.com> wrote:
> > >
> > > > > +	for (i = 0; i < n_skb; i++) {
> > > > > +		struct sk_buff *skb = skbs[i];
> > > > > +
> > > > > +		memset(skb, 0, offsetof(struct sk_buff, tail));
> > > >
> > > > It is very subtle, but the memset operation on Intel CPUs translates
> > > > into a "rep stos" (repeated store) operation. This operation needs to
> > > > save the CPU flags (to support being interrupted), so it is actually
> > > > expensive (and in my experience causes side effects on pipeline
> > > > efficiency). I have a kernel module for testing memset here[1].
> > > >
> > > > In CPUMAP I have moved the clearing outside this loop, by instead
> > > > asking the MM system to clear the memory via the gfp_t flag
> > > > __GFP_ZERO. This causes us to clear more memory (256 bytes), but the
> > > > operation is aligned. The offsetof(struct sk_buff, tail) used above is
> > > > 188 bytes, which is unaligned, making the rep-stos more expensive in
> > > > setup time. Still, it is below 3 cache lines, which is actually
> > > > interesting and an improvement since I last checked. I will have to
> > > > re-test with time_bench_memset[1] to know which is better now.
> > >
> > > After much testing (with [1]), yes, please use the gfp_t flag __GFP_ZERO.
> >
> > I ran some comparison tests using memset and __GFP_ZERO, with
> > VETH_XDP_BATCH set to 8 and 16. The results are pretty close, so I am
> > not completely sure the delta is more than just noise:
> >
> > - VETH_XDP_BATCH=8  + __GFP_ZERO: ~3.737Mpps
> > - VETH_XDP_BATCH=16 + __GFP_ZERO: ~3.79Mpps
> > - VETH_XDP_BATCH=8  + memset:     ~3.766Mpps
> > - VETH_XDP_BATCH=16 + memset:     ~3.765Mpps

Thanks for doing these benchmarks.

From my memset benchmarks we are looking for a 1.66 ns difference
(10.463 - 8.803), which is VERY hard to measure accurately (anything
below 2 ns is extremely hard due to OS noise).
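
(For reference, the per-packet deltas below are derived from the Mpps
numbers as 1000/Mpps nanoseconds per packet, e.g. for batch=8:
1000/3.737 - 1000/3.766 = 267.59 - 265.53 ns, i.e. approx 2.06 ns.)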

VETH_XDP_BATCH=8  __GFP_ZERO (3.737Mpps) -> memset (3.766Mpps)
 - __GFP_ZERO losing 0.029Mpps, i.e. 2.06 ns slower

VETH_XDP_BATCH=16 __GFP_ZERO (3.79Mpps) -> memset (3.765Mpps)
 - __GFP_ZERO gaining 0.025Mpps, i.e. 1.75 ns faster

I would say this is noise in the measurements. Even though batch=16
matches the expected improvement, batch=8 goes in the other direction.
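
For reference, here is a rough (untested) sketch of the __GFP_ZERO
approach I have in mind, modeled on what CPUMAP does. The helper name
is just invented for illustration:

	/* Bulk-allocate skbs and let the slab allocator clear them via
	 * __GFP_ZERO: this clears the full (256 byte, aligned) slab
	 * object, instead of doing an unaligned 188-byte "rep stos"
	 * memset per skb inside the loop.
	 */
	static int veth_xdp_alloc_skb_bulk(void **skbs, int n_skb)
	{
		gfp_t gfp = GFP_ATOMIC | __GFP_ZERO;

		n_skb = kmem_cache_alloc_bulk(skbuff_head_cache, gfp,
					      n_skb, skbs);
		if (unlikely(!n_skb))
			return -ENOMEM;

		return n_skb;
	}

With this, the memset(skb, 0, offsetof(struct sk_buff, tail)) in the
per-skb loop can simply be dropped.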
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer