Date: Thu, 30 Nov 2023 14:00:48 -0800
From: Martin KaFai Lau <martin.lau@...ux.dev>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: Yan Zhai <yan@...udflare.com>, Stanislav Fomichev <sdf@...gle.com>,
 Netdev <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
 Alexei Starovoitov <ast@...nel.org>, kernel-team
 <kernel-team@...udflare.com>, Jakub Kicinski <kuba@...nel.org>,
 Paolo Abeni <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>,
 "David S. Miller" <davem@...emloft.net>,
 Jakub Sitnicki <jakub@...udflare.com>, Daniel Borkmann
 <daniel@...earbox.net>, Toke Høiland-Jørgensen
 <toke@...hat.com>, Edward Cree <ecree.xilinx@...il.com>
Subject: Re: Does skb_metadata_differs really need to stop GRO aggregation?

On 11/30/23 12:35 PM, Jesper Dangaard Brouer wrote:
> I should explain our use-case(s) a bit more.
> We do want the information to survive XDP_PASS into the SKB.
> It's the whole point, as we want to transfer information from the XDP
> layer to the TC layer and perhaps further, all the way to BPF socket
> filters (which I have even heard someone ask for).
> 
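> A minimal sketch of that pattern (the struct layout and the stored
> value are made up for illustration):
> 
>   #include <linux/bpf.h>
>   #include <linux/pkt_cls.h>
>   #include <bpf/bpf_helpers.h>
> 
>   /* Hypothetical metadata layout, for illustration only. */
>   struct meta {
>       __u32 landing_colo_id;
>   };
> 
>   SEC("xdp")
>   int xdp_store_meta(struct xdp_md *ctx)
>   {
>       struct meta *m;
> 
>       /* Grow the metadata area in front of the packet data. */
>       if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*m)))
>           return XDP_PASS;
> 
>       m = (void *)(long)ctx->data_meta;
>       if ((void *)(m + 1) > (void *)(long)ctx->data)
>           return XDP_PASS;
> 
>       m->landing_colo_id = 42; /* made-up colo ID */
>       return XDP_PASS;
>   }
> 
>   SEC("tc")
>   int tc_read_meta(struct __sk_buff *skb)
>   {
>       struct meta *m = (void *)(long)skb->data_meta;
> 
>       /* data_meta == data means no metadata survived. */
>       if ((void *)(m + 1) > (void *)(long)skb->data)
>           return TC_ACT_OK;
> 
>       bpf_printk("landing colo id %u", m->landing_colo_id);
>       return TC_ACT_OK;
>   }
> 
>   char _license[] SEC("license") = "GPL";
> 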
> I'm trying to get an overview, as I now have multiple product teams
> that want to store information across/into different layers, while
> other teams consume this information.
> 
> We are exploring more options than only the XDP metadata area to store
> information.  I have suggested that once an SKB has a socket
> associated, we can switch to using BPF local socket storage
> tricks. (The lifetime of XDP metadata is not 100% clear, as e.g.
> pskb_expand_head clears it via skb_metadata_clear.)
> All ideas are welcome; e.g. I'm also looking at the ability to store
> auxiliary/metadata data associated with a dst_entry. And SKB->mark is
> already used for other use-cases and isn't big enough (and then there
> is the fun of crossing a netns boundary).
> 
> Let me explain *one* of the concrete use-cases.  As described in [1],
> the CF XDP L4 load-balancer Unimog has been extended to a product
> called Plurimog that does load-balancing across data-centers ("colos").
> When Plurimog redirects to another colo, the original "landing" colo's
> ID is carried across (in some encap header) to a Unimog instance.
> Thus, the original landing colo ID is known to the Unimog running in
> another colo, but that header is popped, so this info needs to be
> transferred somehow.  I'm told that even the webserver/Nginx needs to
> know the orig/foreign landing colo ID (by then there should be a
> socket associated).  For TCP SYN packets, the layered DoS protection
> also needs to know the foreign landing colo ID.  Other teams/products
> need this for accounting, e.g. Traffic Manager[1], Radar[2] and
> capacity planning.

We also bumped into a use case about plumbing the RX timestamp taken at XDP to 
its final "sk" for analysis purposes. That use case has not materialized.

fwiw, one of my thoughts at that time was similar to your sk local storage 
thinking: do a bpf_sk_lookup_tcp at xdp and store the stats there. It wastes 
the lookup effort because there is no skb to do the bpf_sk_assign() with. 
Following this direction of thought, the next step would be to allocate an skb 
in the xdp prog itself if we know it is going to be an XDP_PASS.
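
Roughly what that lookup side could look like, as a sketch only (assuming 
plain IPv4/TCP with no IP options; the stats handling is elided):

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include <linux/tcp.h>
  #include <linux/in.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  SEC("xdp")
  int xdp_sk_lookup(struct xdp_md *ctx)
  {
      void *data = (void *)(long)ctx->data;
      void *data_end = (void *)(long)ctx->data_end;
      struct ethhdr *eth = data;
      struct bpf_sock_tuple tuple = {};
      struct bpf_sock *sk;
      struct iphdr *iph;
      struct tcphdr *th;

      if ((void *)(eth + 1) > data_end ||
          eth->h_proto != bpf_htons(ETH_P_IP))
          return XDP_PASS;
      iph = (void *)(eth + 1);
      if ((void *)(iph + 1) > data_end ||
          iph->protocol != IPPROTO_TCP || iph->ihl != 5)
          return XDP_PASS;
      th = (void *)(iph + 1);
      if ((void *)(th + 1) > data_end)
          return XDP_PASS;

      tuple.ipv4.saddr = iph->saddr;
      tuple.ipv4.daddr = iph->daddr;
      tuple.ipv4.sport = th->source;
      tuple.ipv4.dport = th->dest;

      sk = bpf_sk_lookup_tcp(ctx, &tuple, sizeof(tuple.ipv4),
                             BPF_F_CURRENT_NETNS, 0);
      if (sk) {
          /* ...store/aggregate stats keyed by this sk... */
          /* No bpf_sk_assign() from XDP, so the stack repeats
           * this demux work after XDP_PASS. */
          bpf_sk_release(sk);
      }
      return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";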

That said, the sk storage approach would work fine if whatever it wants to 
collect from the xdp_md(s)/skb(s) can be stacked/aggregated in a sk. It would 
be nicer if __sk_buff->data_meta could work more like the other bpf local 
storages (sk, task, cgroup, etc.) such that it would be available to other bpf 
prog types (e.g. a tracing prog).
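
For comparison, a minimal sk storage sketch (map and field names are made up): 
a sockops prog fills the storage and a tracing prog can read the same storage 
back.

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct per_sk_stats {
      __u64 pkts;
  };

  /* BTF-defined sk local storage map. */
  struct {
      __uint(type, BPF_MAP_TYPE_SK_STORAGE);
      __uint(map_flags, BPF_F_NO_PREALLOC);
      __type(key, int);
      __type(value, struct per_sk_stats);
  } sk_stats SEC(".maps");

  SEC("sockops")
  int aggregate(struct bpf_sock_ops *skops)
  {
      struct per_sk_stats *st;

      if (!skops->sk)
          return 1;
      st = bpf_sk_storage_get(&sk_stats, skops->sk, NULL,
                              BPF_SK_STORAGE_GET_F_CREATE);
      if (st)
          st->pkts++;
      return 1;
  }

  /* The same storage is reachable from a tracing prog. */
  SEC("fentry/tcp_close")
  int BPF_PROG(trace_close, struct sock *sk)
  {
      struct per_sk_stats *st;

      st = bpf_sk_storage_get(&sk_stats, sk, NULL, 0);
      if (st)
          bpf_printk("sk closed after %llu pkts", st->pkts);
      return 0;
  }

  char _license[] SEC("license") = "GPL";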
