Message-ID: <20210129214640.GB20729@lore-desk>
Date: Fri, 29 Jan 2021 22:46:40 +0100
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org, davem@...emloft.net,
kuba@...nel.org, ast@...nel.org, daniel@...earbox.net,
toshiaki.makita1@...il.com, lorenzo.bianconi@...hat.com,
toke@...hat.com, Stefano Brivio <sbrivio@...hat.com>
Subject: Re: [PATCH v2 bpf-next] net: veth: alloc skb in bulk for ndo_xdp_xmit
On Jan 29, Jesper Dangaard Brouer wrote:
> On Fri, 29 Jan 2021 17:02:16 +0100
> Jesper Dangaard Brouer <brouer@...hat.com> wrote:
>
> > > + for (i = 0; i < n_skb; i++) {
> > > + struct sk_buff *skb = skbs[i];
> > > +
> > > + memset(skb, 0, offsetof(struct sk_buff, tail));
> >
> > It is very subtle, but the memset operation on Intel CPUs translates
> > into a "rep stos" (repeated store) operation. This operation needs to
> > save the CPU flags (to support being interrupted), thus it is actually
> > expensive (and in my experience causes side effects on pipeline
> > efficiency). I have a kernel module for testing memset here[1].
> >
> > In CPUMAP I have moved the clearing outside this loop, instead asking
> > the MM system to clear the memory via the gfp_t flag __GFP_ZERO. This
> > causes us to clear more memory, 256 bytes, but it is aligned. The
> > offsetof(struct sk_buff, tail) above is 188 bytes, which is unaligned,
> > making the rep-stos more expensive in setup time. It is below 3
> > cachelines, which is actually interesting and an improvement since I
> > last checked. I actually have to re-test with time_bench_memset[1] to
> > know whether it is better now.
>
> After much testing (with [1]), yes please use gfp_t flag __GFP_ZERO.
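
For reference, the two variants compared here look roughly like this.
This is only a sketch of the approach being discussed, not the exact
patch code: it assumes the bulk slab API (kmem_cache_alloc_bulk() on
skbuff_head_cache, as CPUMAP already uses) plus <linux/skbuff.h> and
<linux/slab.h>, and it leaves out error handling and the
build_skb_around() step.

static int bulk_alloc_memset(struct sk_buff **skbs, int n)
{
	int i, n_skb;

	/* Variant A: allocate the heads, then clear each one in a loop */
	n_skb = kmem_cache_alloc_bulk(skbuff_head_cache, GFP_ATOMIC,
				      n, (void **)skbs);
	for (i = 0; i < n_skb; i++)
		memset(skbs[i], 0, offsetof(struct sk_buff, tail));

	return n_skb;
}

static int bulk_alloc_gfp_zero(struct sk_buff **skbs, int n)
{
	/* Variant B: let the slab allocator zero the whole object up front */
	return kmem_cache_alloc_bulk(skbuff_head_cache,
				     GFP_ATOMIC | __GFP_ZERO,
				     n, (void **)skbs);
}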
I ran some comparison tests using memset and __GFP_ZERO, with VETH_XDP_BATCH
set to 8 and 16. The results are pretty close, so I am not completely sure the
delta is more than noise:
- VETH_XDP_BATCH=8  + __GFP_ZERO: ~3.737Mpps
- VETH_XDP_BATCH=16 + __GFP_ZERO: ~3.79Mpps
- VETH_XDP_BATCH=8  + memset:     ~3.766Mpps
- VETH_XDP_BATCH=16 + memset:     ~3.765Mpps
Regards,
Lorenzo
>
> SKB: offsetof-tail:184 bytes
> - memset_skb_tail Per elem: 37 cycles(tsc) 10.463 ns
>
> SKB: ROUNDUP(offsetof-tail: 192)
> - memset_skb_tail_roundup Per elem: 37 cycles(tsc) 10.468 ns
>
> I thought it would be better/faster to round up to full cachelines, but
> the measurements show that the cost is the same for 184 vs 192. It does
> validate the theory that it is the cacheline boundary that matters.
>
> When using the gfp_t flag __GFP_ZERO, the kernel cannot know the
> constant size, and instead ends up calling memset_erms().
>
> The cost of memset_erms(256) is:
> - memset_variable_step(256) Per elem: 31 cycles(tsc) 8.803 ns
>
> The const version with 256 that uses rep-stos costs more:
> - memset_256 Per elem: 41 cycles(tsc) 11.552 ns
>
>
> The following is not relevant for your patch, but an interesting data
> point is that memset_erms(512) only costs 4 cycles more:
> - memset_variable_step(512) Per elem: 35 cycles(tsc) 9.893 ns
>
> (but don't use rep-stos for a const 512: it is 72 cycles(tsc) 20.069 ns.)
>
> [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/time_bench_memset.c
> CPU: Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>
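
For anyone who wants a quick feel for the per-element memset numbers
quoted above without building the time_bench_memset module, a rough
user-space sketch could look like the following. It uses the x86 TSC and
is not the same methodology as [1], so treat the output as ballpark only;
the 184-byte clear size is taken from the offsetof-tail figure above.

#include <stdio.h>
#include <string.h>
#include <x86intrin.h>

#define LOOPS	10000000UL
#define CLEAR	184	/* offsetof(struct sk_buff, tail) from the numbers above */

static unsigned char buf[256];

int main(void)
{
	unsigned long long start, stop;
	unsigned long i;

	start = __rdtsc();
	for (i = 0; i < LOOPS; i++) {
		memset(buf, 0, CLEAR);
		/* keep the compiler from optimizing the stores away */
		asm volatile("" : : "r"(buf) : "memory");
	}
	stop = __rdtsc();

	printf("memset(%d): ~%llu cycles per call\n", CLEAR,
	       (stop - start) / LOOPS);
	return 0;
}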