Message-ID: <Z-ESIogCNDiHz4NG@gmail.com>
Date: Mon, 24 Mar 2025 09:04:50 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	Borislav Petkov <bp@...en8.de>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	"H . Peter Anvin" <hpa@...or.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	Masami Hiramatsu <mhiramat@...nel.org>, x86@...nel.org,
	bpf@...r.kernel.org, Eric Dumazet <eric.dumazet@...il.com>,
	Greg Thelen <gthelen@...gle.com>,
	Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH] x86/alternatives: remove false sharing in
 poke_int3_handler()


* Eric Dumazet <edumazet@...gle.com> wrote:

> On Mon, Mar 24, 2025 at 8:47 AM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Mon, Mar 24, 2025 at 8:16 AM Ingo Molnar <mingo@...nel.org> wrote:
> > >
> > >
> > > * Eric Dumazet <edumazet@...gle.com> wrote:
> > >
> > > > > What's the adversarial workload here? Spamming bpf_stats_enabled on all
> > > > > CPUs in parallel? Or mixing it with some other text_poke_bp_batch()
> > > > > user if bpf_stats_enabled serializes access?
> > >             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > >
> > > > > Does anything undesirable happen in that case?
> > > >
> > > > The case of multiple threads trying to flip bpf_stats_enabled is
> > > > handled by bpf_stats_enabled_mutex.
> > >
> > > So my suggested workload wasn't adversarial enough due to
> > > bpf_stats_enabled_mutex: how about some other workload that doesn't
> > > serialize access to text_poke_bp_batch()?
> >
> > Do you have a specific case in mind that I can test on these big platforms ?
> >
> > text_poke_bp_batch() calls themselves are serialized by text_mutex, it
> > is not clear what you are looking for.
> 
> 
> BTW the atomic_cond_read_acquire() part is never called even during my
> stress test.

Yeah, that code threw me off - can it really happen with text_mutex 
serializing all of it?

> @@ -2418,7 +2418,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
>         for_each_possible_cpu(i) {
>                 atomic_t *refs = per_cpu_ptr(&bp_refs, i);
> 
> -               if (!atomic_dec_and_test(refs))
> +               if (unlikely(!atomic_dec_and_test(refs)))
>                         atomic_cond_read_acquire(refs, !VAL);

If it can never happen, then perhaps that condition should be a 
WARN_ON_ONCE()?

Thanks,

	Ingo
