Message-Id: <D8CPGEGQ4630.2MKAQH44PFCCO@arthurfabre.com>
Date: Mon, 10 Mar 2025 16:52:30 +0100
From: "Arthur Fabre" <arthur@...hurfabre.com>
To: "Lorenzo Bianconi" <lorenzo.bianconi@...hat.com>
Cc: <netdev@...r.kernel.org>, <bpf@...r.kernel.org>, <jakub@...udflare.com>,
<hawk@...nel.org>, <yan@...udflare.com>, <jbrandeburg@...udflare.com>,
<thoiland@...hat.com>, <lbiancon@...hat.com>, "Arthur Fabre"
<afabre@...udflare.com>
Subject: Re: [PATCH RFC bpf-next 05/20] trait: Replace memcpy calls with
inline copies
On Mon Mar 10, 2025 at 11:50 AM CET, Lorenzo Bianconi wrote:
> > From: Arthur Fabre <afabre@...udflare.com>
> >
> > When copying trait values to or from the caller, the size isn't a
> > constant so memcpy() ends up being a function call.
> >
> > Replace it with an inline implementation that only handles the sizes we
> > support.
> >
> > We store values "packed", so they won't necessarily be 4 or 8 byte
> > aligned.
> >
> > Setting and getting traits is roughly ~40% faster.
>
> Nice! I guess in a formal series this patch can be squashed with patch 1/20
> (adding some comments).
Happy to squash and add comments instead if that's better :)
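
Something along these lines for the set side, for example (just a sketch,
comment wording not final):

	/* Set our value. len is always one of the supported sizes, so each
	 * case is a fixed size copy the compiler can inline, unlike a
	 * variable length memcpy().
	 */
	u64 encode_len = 0;
	switch (len) {
	case 2:
		/* Values are at least two bytes and stored back to back,
		 * so traits + off is always two byte aligned.
		 */
		*(u16 *)(traits + off) = *(u16 *)val;
		encode_len = 1;
		break;
	case 4:
		/* Only two byte alignment is guaranteed, so wider values
		 * need an unaligned store.
		 */
		put_unaligned(*(u32 *)val, (u32 *)(traits + off));
		encode_len = 2;
		break;
	case 8:
		put_unaligned(*(u64 *)val, (u64 *)(traits + off));
		encode_len = 3;
		break;
	}

The get side would mirror it with get_unaligned() for the 4 and 8 byte
cases.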
>
> Regards,
> Lorenzo
>
> >
> > Signed-off-by: Arthur Fabre <afabre@...udflare.com>
> > ---
> > include/net/trait.h | 25 +++++++++++++++++++------
> > 1 file changed, 19 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/net/trait.h b/include/net/trait.h
> > index 536b8a17dbbc091b4d1a4d7b4b21c1e36adea86a..d4581a877bd57a32e2ad032147c906764d6d37f8 100644
> > --- a/include/net/trait.h
> > +++ b/include/net/trait.h
> > @@ -7,6 +7,7 @@
> > #include <linux/errno.h>
> > #include <linux/string.h>
> > #include <linux/bitops.h>
> > +#include <linux/unaligned.h>
> >
> > /* Traits are a very limited KV store, with:
> > * - 64 keys (0-63).
> > @@ -145,23 +146,23 @@ int trait_set(void *traits, void *hard_end, u64 key, const void *val, u64 len, u
> > memmove(traits + off + len, traits + off, traits_size(traits) - off);
> > }
> >
> > - /* Set our value. */
> > - memcpy(traits + off, val, len);
> > -
> > - /* Store our length in header. */
> > u64 encode_len = 0;
> > -
> > switch (len) {
> > case 2:
> > + /* Values are at least two bytes, so they'll be two byte aligned */
> > + *(u16 *)(traits + off) = *(u16 *)val;
> > encode_len = 1;
> > break;
> > case 4:
> > + put_unaligned(*(u32 *)val, (u32 *)(traits + off));
> > encode_len = 2;
> > break;
> > case 8:
> > + put_unaligned(*(u64 *)val, (u64 *)(traits + off));
> > encode_len = 3;
> > break;
> > }
> > +
> > h->high |= (encode_len >> 1) << key;
> > h->low |= (encode_len & 1) << key;
> > return 0;
> > @@ -201,7 +202,19 @@ int trait_get(void *traits, u64 key, void *val, u64 val_len)
> > if (real_len > val_len)
> > return -ENOSPC;
> >
> > - memcpy(val, traits + off, real_len);
> > + switch (real_len) {
> > + case 2:
> > + /* Values are at least two bytes, so they'll be two byte aligned */
> > + *(u16 *)val = *(u16 *)(traits + off);
> > + break;
> > + case 4:
> > + *(u32 *)val = get_unaligned((u32 *)(traits + off));
> > + break;
> > + case 8:
> > + *(u64 *)val = get_unaligned((u64 *)(traits + off));
> > + break;
> > + }
> > +
> > return real_len;
> > }
> >
> >
> > --
> > 2.43.0
> >
> >
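In case it helps when reviewing the copy paths, a caller-side view
(illustrative only, not part of the patch; the key and flags values are
arbitrary here):

	/* traits points at the trait store, hard_end at the end of the
	 * space it can grow into.
	 */
	u32 mark = 1234;
	u32 out;
	int ret;

	/* 4 byte value under key 5: takes the unaligned store path. */
	ret = trait_set(traits, hard_end, 5, &mark, sizeof(mark), 0);
	if (ret)
		return ret;

	/* Returns the stored length (4 here), or -ENOSPC if the
	 * destination buffer is too small.
	 */
	ret = trait_get(traits, 5, &out, sizeof(out));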