Message-ID: <20240621190558.409d778c@kernel.org>
Date: Fri, 21 Jun 2024 19:05:58 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org, "David S. Miller"
 <davem@...emloft.net>, Daniel Bristot de Oliveira <bristot@...nel.org>,
 Boqun Feng <boqun.feng@...il.com>, Daniel Borkmann <daniel@...earbox.net>,
 Eric Dumazet <edumazet@...gle.com>, Frederic Weisbecker
 <frederic@...nel.org>, Ingo Molnar <mingo@...hat.com>, Paolo Abeni
 <pabeni@...hat.com>, Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner
 <tglx@...utronix.de>, Waiman Long <longman@...hat.com>, Will Deacon
 <will@...nel.org>, Björn Töpel <bjorn@...nel.org>,
 Alexei Starovoitov <ast@...nel.org>, Andrii Nakryiko <andrii@...nel.org>,
 Eduard Zingerman <eddyz87@...il.com>, Hao Luo <haoluo@...gle.com>, Jesper
 Dangaard Brouer <hawk@...nel.org>, Jiri Olsa <jolsa@...nel.org>, John
 Fastabend <john.fastabend@...il.com>, Jonathan Lemon
 <jonathan.lemon@...il.com>, KP Singh <kpsingh@...nel.org>, Maciej
 Fijalkowski <maciej.fijalkowski@...el.com>, Magnus Karlsson
 <magnus.karlsson@...el.com>, Martin KaFai Lau <martin.lau@...ux.dev>, Song
 Liu <song@...nel.org>, Stanislav Fomichev <sdf@...gle.com>, Toke
 Høiland-Jørgensen <toke@...hat.com>, Yonghong Song
 <yonghong.song@...ux.dev>, bpf@...r.kernel.org
Subject: Re: [PATCH v9 net-next 15/15] net: Move per-CPU flush-lists to
 bpf_net_context on PREEMPT_RT.

On Thu, 20 Jun 2024 15:22:05 +0200 Sebastian Andrzej Siewior wrote:
>  void __cpu_map_flush(void)
>  {
> -	struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
> +	struct list_head *flush_list = bpf_net_ctx_get_cpu_map_flush_list();
>  	struct xdp_bulk_queue *bq, *tmp;
>  
>  	list_for_each_entry_safe(bq, tmp, flush_list, flush_node) {

Most of the time we'll init the flush list just to walk it while it's
empty. It feels really tempting to check the init flag inside
xdp_do_flush() already. Since the various sub-flush handlers may not get
inlined, we could save ourselves not only the pointless init but also
the function calls. So the code would potentially be faster than before
the changes?

Can be a follow up, obviously.
