Message-ID: <1c96bbf3-0edd-40f1-91a2-db7800a47f0d@kernel.org>
Date: Thu, 1 May 2025 16:03:44 +0200
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Toke Høiland-Jørgensen <toke@...hat.com>,
 Jakub Sitnicki <jakub@...udflare.com>,
 Alexei Starovoitov <alexei.starovoitov@...il.com>,
 Arthur Fabre <arthur@...hurfabre.com>
Cc: Network Development <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
 Yan Zhai <yan@...udflare.com>, jbrandeburg@...udflare.com,
 lbiancon@...hat.com, Alexei Starovoitov <ast@...nel.org>,
 Jakub Kicinski <kuba@...nel.org>, Eric Dumazet <edumazet@...gle.com>,
 kernel-team@...udflare.com
Subject: Re: [PATCH RFC bpf-next v2 01/17] trait: limited KV store for packet
 metadata



On 01/05/2025 12.43, Toke Høiland-Jørgensen wrote:
> Jakub Sitnicki <jakub@...udflare.com> writes:
> 
>> On Wed, Apr 30, 2025 at 11:19 AM +02, Toke Høiland-Jørgensen wrote:
>>> Alexei Starovoitov <alexei.starovoitov@...il.com> writes:
>>>
>>>> On Fri, Apr 25, 2025 at 12:27 PM Arthur Fabre <arthur@...hurfabre.com> wrote:
>>>>>
>>>>> On Thu Apr 24, 2025 at 6:22 PM CEST, Alexei Starovoitov wrote:
>>>>>> On Tue, Apr 22, 2025 at 6:23 AM Arthur Fabre <arthur@...hurfabre.com> wrote:
>>
>> [...]
>>
>>>>> * Hardware metadata: metadata exposed from NICs (like the receive
>>>>>    timestamp, 4 tuple hash...) is currently only exposed to XDP programs
>>>>>    (via kfuncs).
>>>>>    But that doesn't expose them to the rest of the stack.
>>>>>    Storing them in traits would allow XDP, other BPF programs, and the
>>>>>    kernel to access and modify them (for example to take into account
>>>>>    decapsulating a packet).
>>>>
>>>> Sure. If traits == existing metadata, bpf prog in xdp can communicate
>>>> with bpf prog in skb layer via that "trait" format.
>>>> xdp can take tuple hash and store it as key==0 in the trait.
>>>> The kernel doesn't need to know how to parse that format.
>>>
>>> Yes it does, to propagate it to the skb later. I.e.,
>>>
>>> XDP prog on NIC: get HW hash, store in traits, redirect to CPUMAP
>>> CPUMAP: build skb, read hash from traits, populate skb hash
>>>
>>> Same thing for (at least) timestamps and checksums.
>>>
>>> Longer term, with traits available we could move more skb fields into
>>> traits to make struct sk_buff smaller (by moving optional fields to
>>> traits that don't take up any space if they're not set).

The above paragraph is very significant IMHO.  The netstack has many
fields in the SKB that are only used in corner cases.  There is a huge
opportunity for making these fields optional without the performance hit
that comes with SKB extensions.  To me the traits area is simply a new
dynamic struct type available to the *kernel*.
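
To make that concrete, here is a rough sketch of what a kernel consumer
could look like.  The names (trait_get(), TRAIT_RX_HASH) are placeholders
for illustration only, not the API proposed in this series:

/* Hypothetical sketch -- not the actual API from this series. */
enum trait_key {
        TRAIT_RX_HASH = 1,      /* placeholder key id */
};

static void skb_hash_from_traits(struct sk_buff *skb, void *traits)
{
        u32 hash;

        /* trait_get() stands in for whatever kernel-internal accessor
         * ends up existing; assume it returns the value size on success.
         */
        if (trait_get(traits, TRAIT_RX_HASH, &hash, sizeof(hash)) == sizeof(hash))
                skb_set_hash(skb, hash, PKT_HASH_TYPE_L4);

        /* If the trait was never set, it costs no space at all. */
}

E.g. the CPUMAP skb-build path Toke mentions could call something like
this, instead of struct sk_buff carrying a field that is only valid on
some code paths.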

INCEPTION: Giving NIC drivers a writable memory area before SKB
allocation opens up the possibility of avoiding SKB allocation in the
driver entirely. Hardware offload metadata can be written directly into
the traits area. Later, when the core netstack allocates the SKB, it can
extract this data from traits.
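
Roughly the flow I'm imagining, again with invented helpers
(traits_init(), trait_set(), hw_desc_rx_hash()) purely to illustrate:

/* Hypothetical driver RX sketch -- helper names are made up. */
static void drv_rx_frame(void *hw_desc, void *frame_start, unsigned int headroom)
{
        void *traits = frame_start;             /* traits at the top of the headroom */
        u32 hash = hw_desc_rx_hash(hw_desc);    /* read from the HW descriptor */

        traits_init(traits, headroom);
        trait_set(traits, TRAIT_RX_HASH, &hash, sizeof(hash));

        /* No skb is allocated here.  The frame (with its traits area) is
         * handed to the core netstack, which reads the HW metadata back
         * out of traits when it eventually builds the skb, e.g. via
         * skb_hash_from_traits() above.
         */
}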

The performance implications here are significant: this effectively
brings XDP-style pre-SKB processing into the netstack core. The largest
benefits are likely to appear in packet forwarding workloads, where
avoiding early SKB allocation can yield substantial gains.


>>
>> Perhaps we can have the cake and eat it too.
>>
>> We could leave the traits encoding/decoding out of the kernel and, at
>> the same time, *expose it* to the network stack through BPF struct_ops
>> programs. At a high level, for example ->get_rx_hash(), not the
>> individual K/V access. The traits_ops vtable could grow as needed to
>> support new use cases.
>>
>> If you think about it, it's not so different from BPF-powered congestion
>> algorithms and scheduler extensions. They also expose some state, kept in
>> maps, that only the loaded BPF code knows how to operate on.
> 
> Right, the difference being that the kernel works perfectly well without
> an eBPF congestion control algorithm loaded because it has its own
> internal implementation that is used by default.

Good point.

> Having a hard dependency on BPF for in-kernel functionality is a
> different matter, and limits the cases it can be used for.

I agree.

> Besides, I don't really see the point of leaving the encoding out of the
> kernel? We keep the encoding kernel-internal anyway, and just expose a
> get/set API, so there's no constraint on changing it later (that's kinda
> the whole point of doing that). And with bulk get/set there's not an
> efficiency argument either. So what's the point, other than doing things
> in BPF for its own sake?

I agree - we should keep the traits encoding kernel-internal. The traits
area is best understood as a dynamic struct type available to the kernel.
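
I.e. the only surface callers would ever see is something along these
lines (signatures invented for illustration); the encoding lives
entirely behind it and can change at will:

/* Hypothetical kernel-internal interface.  The layout of the traits
 * area is known only to the implementation, never to callers.
 */
int trait_set(void *traits, u64 key, const void *val, size_t len);
int trait_get(void *traits, u64 key, void *val, size_t len);
int trait_del(void *traits, u64 key);
/* ... plus bulk variants for efficiency, as Toke mentions. */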

We shouldn't expose the traits format as UAPI to BPF on day one. It's
likely we'll need to adjust the key/value sizing or representation as new
use cases arise. BPF programs can still access traits via a get/set API,
like any other consumer. Later on, we could consider inlining the BPF
kfunc calls into BPF instructions (e.g. once BPF gains features like a
popcnt instruction).
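
For the BPF side, something along these lines is what I picture (kfunc
names invented here, not necessarily what this series proposes), and
these are the calls we could later consider inlining:

/* Hypothetical BPF program using trait kfuncs -- illustrative only. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define TRAIT_RX_HASH 1         /* placeholder key id */

extern int bpf_xdp_trait_set(struct xdp_md *xdp, __u64 key,
                             const void *val, __u32 val__sz) __ksym;

SEC("xdp")
int store_rx_hash(struct xdp_md *ctx)
{
        __u32 hash = 0;

        /* In practice the hash would come from a HW metadata kfunc
         * such as bpf_xdp_metadata_rx_hash().
         */
        bpf_xdp_trait_set(ctx, TRAIT_RX_HASH, &hash, sizeof(hash));
        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";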

--Jesper
