Message-ID: <YH6bV9W4lgIl4Fry@hirez.programming.kicks-ass.net>
Date:   Tue, 20 Apr 2021 11:13:59 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Luigi Rizzo <lrizzo@...gle.com>
Cc:     linux-kernel <linux-kernel@...r.kernel.org>, axboe@...nel.dk,
        paulmck@...nel.org
Subject: Re: [PATCH] smp: add a best_effort version of smp_call_function_many()

On Mon, Apr 19, 2021 at 11:07:08PM +0200, Luigi Rizzo wrote:
> On Mon, Apr 19, 2021 at 9:17 PM Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > On Mon, Apr 19, 2021 at 11:44:55AM -0700, Luigi Rizzo wrote:
> > > Regardless of the 'wait' argument, smp_call_function_many() must spin
> > > if any of the target CPUs have their csd busy waiting to be processed
> > > for a previous call. This may cause high tail latencies e.g. when some
> > > of the target CPUs are running functions that disable interrupts for a
> > > long time; getrusage() is one possible culprit.
> > >
> > > Here we introduce a variant, __smp_call_function_many(), that adds
> > > a third 'best_effort' mode to the two existing ones (nowait, wait).
> > > In best-effort mode, the call will skip CPUs whose csd is busy, and if
> > > any CPU is skipped, it returns -EBUSY and the set of busy CPUs in the mask.
> > > This allows the caller to decide how to proceed, e.g. it might retry at
> > > a later time, or use a private csd, etc..
> > >
> > > The new function is a compromise to avoid touching existing callers of
> > > smp_call_function_many(). If the feature is considered interesting, we
> > > could even replace the 'wait' argument with a ternary 'mode' in all
> > > smp_call_function_*() and derived methods.
> >
> > I don't see a user of this...
> 
> This is actually something for which I was looking for feedback:
> 
> my use case is similar to a periodic garbage collect request:
> the caller tells targets that it may be time to do some work,
> but it does not matter if the request is dropped because the
> caller knows who was busy and will reissue pending requests later.
> 
> I would expect something like the above could be useful e.g.
> in various kinds of resource manager.
> 
> However, a grep for on_each_cpu_*() and smp_call_function_*()
> mostly returns synchronous calls (wait=1).
> 
> Any possible candidates that people can think of ?

We mostly try and avoid using this stuff wherever possible. Only when
no other choice is left do we send IPIs.

NOHZ_FULL already relies on this and gets massively unhappy when a new
user comes and starts to spray IPIs.

So no; mostly we send an IPI because we _HAVE_ to, not for giggles.

That said; there's still some places left where we can avoid sending
IPIs, but in all those cases correctness mandates we actually handle
things and not randomly not do anything.

For example, look at arch/x86/kernel/alternative.c:text_poke_sync(). The
purpose of that is to ensure all CPUs observe modified *kernel* code.
Now, if a CPU is currently running userspace, it doesn't much care that
kernel code has changed; however, that does mean it needs to call
sync_core() upon entering the kernel, *BEFORE* hitting any code that's
possibly been modified (and self-modifying code is everywhere today,
ironically also very much in the NOHZ_FULL entry paths).

So untangling all that should be possible, but it is something that
requires quite a bit of care and doesn't benefit from anything like the
proposed interface.

Mostly it sounds like you shouldn't be using IPIs either.
