Message-ID: <8a5d5b162a1462568c4d342c93896c919950cbe9@linux.dev>
Date: Tue, 29 Apr 2025 05:23:14 +0000
From: "Jiayuan Chen" <jiayuan.chen@...ux.dev>
To: "Cong Wang" <xiyou.wangcong@...il.com>
Cc: bpf@...r.kernel.org, mrpre@....com, "Alexei Starovoitov"
 <ast@...nel.org>, "Daniel Borkmann" <daniel@...earbox.net>, "Andrii
 Nakryiko" <andrii@...nel.org>, "Martin KaFai Lau" <martin.lau@...ux.dev>,
 "Eduard Zingerman" <eddyz87@...il.com>, "Song Liu" <song@...nel.org>,
 "Yonghong Song" <yonghong.song@...ux.dev>, "John Fastabend"
 <john.fastabend@...il.com>, "KP Singh" <kpsingh@...nel.org>, "Stanislav
 Fomichev" <sdf@...ichev.me>, "Hao Luo" <haoluo@...gle.com>, "Jiri Olsa"
 <jolsa@...nel.org>, "Jonathan Corbet" <corbet@....net>, "Jakub Sitnicki"
 <jakub@...udflare.com>, "David S. Miller" <davem@...emloft.net>, "Eric
 Dumazet" <edumazet@...gle.com>, "Jakub Kicinski" <kuba@...nel.org>,
 "Paolo Abeni" <pabeni@...hat.com>, "Simon Horman" <horms@...nel.org>,
 "Kuniyuki Iwashima" <kuniyu@...zon.com>, "Willem de Bruijn"
 <willemb@...gle.com>, "Mykola Lysenko" <mykolal@...com>, "Shuah Khan"
 <shuah@...nel.org>, "Jiapeng Chong" <jiapeng.chong@...ux.alibaba.com>,
 linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
 netdev@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH bpf-next v1 1/3] bpf, sockmap: Introduce a new kfunc for
 sockmap

2025/4/29 08:56, "Cong Wang" <xiyou.wangcong@...il.com> wrote:

> On Mon, Apr 28, 2025 at 04:16:52PM +0800, Jiayuan Chen wrote:
> >
> > +bpf_sk_skb_set_redirect_cpu()
> > +^^^^^^^^^^^^^^^^^^^^^^
> > +.. code-block:: c
> > +
> > + int bpf_sk_skb_set_redirect_cpu(struct __sk_buff *s, int redir_cpu)
> > +
> > +This kfunc ``bpf_sk_skb_set_redirect_cpu()`` is available to
> > +``BPF_PROG_TYPE_SK_SKB`` BPF programs. It sets the CPU affinity, allowing the
> > +sockmap packet redirecting process to run on the specified CPU as much as
> > +possible, helping users reduce the interference between the sockmap redirecting
> > +background thread and other threads.
> > +
>
> I am wondering if it is a better idea to use BPF_MAP_TYPE_CPUMAP for
> redirection here instead? Like we did for bpf_redirect_map(). At least
> we would not need to store CPU in psock with this approach.
>
> Thanks.

You mean to use BPF_MAP_TYPE_CPUMAP with XDP to redirect packets to a
specific CPU?

I tested this and found the following overhead:

1. We need to parse from the L2 header down to the L4 header to obtain the
   5-tuple, and then maintain an additional map that stores the mapping from
   each 5-tuple to a process/CPU. In contrast, in the multi-process scenario
   with one process bound to one CPU and one map, I can simply use a global
   variable to let the BPF program know which CPU/thread it should use,
   which is especially convenient for programs that enable reuseport.


2. Furthermore, regarding performance, I tested with cpumap and the results
   were lower than expected. This is because loopback only supports
   xdp_generic mode, and the problem I described in the cover letter actually
   occurs on loopback...

Code:
'''
struct {
      __uint(type, BPF_MAP_TYPE_CPUMAP);
      __uint(key_size, sizeof(__u32));
      __uint(value_size, sizeof(struct bpf_cpumap_val));
      __uint(max_entries, 64);
} cpu_map SEC(".maps");


SEC("xdp")
int  xdp_stats1_func(struct xdp_md *ctx)
{
      /* Real world:
       * 1. get 5-tuple from ctx
       * 2. get corresponding cpu of current skb through XX_MAP
       */
      int ret = bpf_redirect_map(&cpu_map, 3, 0); // redirct to 3
      return ret;
}
'''

Result:
'''
./bench sockmap -c 2 -p 1 -a --rx-verdict-ingress --no-verify
Setting up benchmark 'sockmap'...
create socket fd c1:14 p1:15 c2:16 p2:17
Benchmark 'sockmap' started.
Iter   0 ( 36.439us): Send Speed  561.496 MB/s ... Rcv Speed   33.264 MB/s
Iter   1 ( -7.448us): Send Speed  558.443 MB/s ... Rcv Speed   32.611 MB/s
Iter   2 ( -2.245us): Send Speed  557.131 MB/s ... Rcv Speed   33.004 MB/s
Iter   3 ( -2.845us): Send Speed  547.374 MB/s ... Rcv Speed   33.331 MB/s
Iter   4 (  0.745us): Send Speed  562.891 MB/s ... Rcv Speed   34.117 MB/s
Iter   5 ( -2.056us): Send Speed  560.994 MB/s ... Rcv Speed   33.069 MB/s
Iter   6 (  5.343us): Send Speed  562.038 MB/s ... Rcv Speed   33.200 MB/s
'''
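
For reference, the cpumap entries in the test above were filled in from user
space before attaching the XDP program. A minimal sketch of that setup with
libbpf is below; the helper name, the queue size of 192 and populating CPUs
0-3 are illustrative assumptions, not the actual test code:

'''
#include <linux/bpf.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* Hypothetical helper: fill cpu_map for CPUs 0..3 with a queue size
 * of 192 and no chained per-CPU program (bpf_prog.fd left as 0).
 */
static int setup_cpumap(struct bpf_object *obj)
{
	struct bpf_cpumap_val val = { .qsize = 192 };
	int map_fd = bpf_object__find_map_fd_by_name(obj, "cpu_map");
	__u32 cpu;

	if (map_fd < 0)
		return map_fd;

	for (cpu = 0; cpu < 4; cpu++) {
		int err = bpf_map_update_elem(map_fd, &cpu, &val, BPF_ANY);

		if (err)
			return err;
	}
	return 0;
}
'''

In the benchmark only the entry for CPU 3 matters, since the XDP program
above hard-codes that key in bpf_redirect_map().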

Instead, we can introduce a new kfunc to specify the CPU used by the
backlog thread, which avoids going through XDP at all. After all, this is a
"problem" introduced by the BPF L7 framework itself, and it's better for us
to solve it ourselves.
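
For comparison, with the proposed kfunc the verdict program stays entirely on
the sockmap path and simply pins the backlog work to the desired CPU. A rough
sketch of how it might be used is below; the sockmap layout, the section name
and the global target_cpu variable are illustrative assumptions, only the
kfunc signature comes from the documentation quoted above:

'''
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Declaration of the kfunc proposed in this series. */
extern int bpf_sk_skb_set_redirect_cpu(struct __sk_buff *s,
					int redir_cpu) __ksym;

struct {
	__uint(type, BPF_MAP_TYPE_SOCKMAP);
	__uint(max_entries, 2);
	__type(key, int);
	__type(value, int);
} sock_map SEC(".maps");

/* Set by user space before load, e.g. one value per worker process
 * (convenient with reuseport, where each process loads its own copy).
 */
int target_cpu = 0;

SEC("sk_skb/stream_verdict")
int prog_stream_verdict(struct __sk_buff *skb)
{
	/* Pin the sockmap backlog/redirect work for this socket to the
	 * CPU of the owning worker, then redirect as usual.
	 */
	bpf_sk_skb_set_redirect_cpu(skb, target_cpu);

	return bpf_sk_redirect_map(skb, &sock_map, 0, 0);
}

char _license[] SEC("license") = "GPL";
'''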
