Message-ID: <CAMOZA0J4q2XSmhxB_uc-Q2mOA=K_FcvZqy3G+8ME=QMicOF=Lg@mail.gmail.com>
Date: Tue, 20 Apr 2021 12:41:08 +0200
From: Luigi Rizzo <lrizzo@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel <linux-kernel@...r.kernel.org>, axboe@...nel.dk,
paulmck@...nel.org
Subject: Re: [PATCH] smp: add a best_effort version of smp_call_function_many()
On Tue, Apr 20, 2021 at 11:14 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Mon, Apr 19, 2021 at 11:07:08PM +0200, Luigi Rizzo wrote:
> > On Mon, Apr 19, 2021 at 9:17 PM Peter Zijlstra <peterz@...radead.org> wrote:
> > >
> > > On Mon, Apr 19, 2021 at 11:44:55AM -0700, Luigi Rizzo wrote:
> > > > Regardless of the 'wait' argument, smp_call_function_many() must spin
> > > > if any of the target CPUs have their csd busy waiting to be processed
> > > > for a previous call. This may cause high tail latencies e.g. when some
> > > > of the target CPUs are running functions that disable interrupts for a
> > > > long time; getrusage() is one possible culprit.
> > > >
> > > > Here we introduce a variant, __smp_call_function_many(), that adds
> > > > a third 'best_effort' mode to the two existing ones (nowait, wait).
> > > > In best effort mode, the call will skip CPUs whose csd is busy, and if
> > > > any CPU is skipped it returns -EBUSY and the set of busy CPUs in the mask.
> > > > This allows the caller to decide how to proceed, e.g. it might retry at
> > > > a later time, or use a private csd, etc.
> > > >
> > > > The new function is a compromise to avoid touching existing callers of
> > > > smp_call_function_many(). If the feature is considered interesting, we
> > > > could even replace the 'wait' argument with a ternary 'mode' in all
> > > > smp_call_function_*() and derived methods.
> > >
> > > I don't see a user of this...
> >
> > This is actually something for which I was looking for feedback:
> >
> > my use case is similar to a periodic garbage collect request:
> > the caller tells targets that it may be time to do some work,
> > but it does not matter if the request is dropped because the
> > caller knows who was busy and will reissue pending requests later.
...
> > Any possible candidates that people can think of?
>
> We mostly try and avoid using this stuff wherever possible. Only when
> no other choice is left do we send IPIs.
>
> NOHZ_FULL already relies on this and gets massively unhappy when a new
> user comes and starts to spray IPIs.
I am curious, why is that -- is it because the new user is stealing
the shared csds in cfd_data (see below), or some other reason?
>
> So no; mostly we send an IPI because we _HAVE_ to, not because giggles.
>
> That said; there's still some places left where we can avoid sending
> IPIs, but in all those cases correctness mandates we actually handle
> things and not randomly not do anything.
My case too requires that the request is eventually handled, but with
this non-blocking IPI the caller has a better option than blocking:
it can either retry the multicast IPI at a later time if conditions
allow (roughly as in the sketch below), or it can post a dedicated CSD
(with the advantage that, since my requests are idempotent, there is
no need to retry if the CSD is locked, because that just means the
handler has not run yet).
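
To make this concrete, here is a rough sketch of the retry path as I
imagine it.  __smp_call_function_many() is the function from the patch,
but the SMP_CALL_BEST_EFFORT constant, the exact argument order and all
the gc_* names are made up for illustration:

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/smp.h>

/* Sketch only: the best-effort mode flag and signature are assumed. */

static void gc_func(void *info)
{
	/* idempotent per-CPU cleanup work */
}

/* 'pending' tracks CPUs that still owe us a pass. */
static void gc_kick(struct cpumask *pending)
{
	cpumask_var_t mask;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return;
	cpumask_copy(mask, pending);

	/* Best effort: busy CPUs are skipped and left set in 'mask'. */
	if (__smp_call_function_many(mask, gc_func, NULL,
				     SMP_CALL_BEST_EFFORT))
		cpumask_copy(pending, mask);	/* -EBUSY: retry next period */
	else
		cpumask_clear(pending);

	free_cpumask_var(mask);
}
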
In fact, if we had the option to use dedicated CSDs for multicast IPIs,
we wouldn't even need to retry, because we'd know that the posted CSD
is for our callback and not someone else's.
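
For reference, the per-target dedicated-CSD variant already works today
with smp_call_function_single_async(), one unicast IPI at a time; the
gc_* names are again made up, and the point is just that a locked csd
can be ignored because the still-pending handler does the same
idempotent work anyway:

#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/smp.h>

/* Illustrative names; gc_func() must be idempotent for this to be safe. */

static void gc_func(void *info)
{
	/* idempotent per-CPU cleanup work */
}

static DEFINE_PER_CPU(call_single_data_t, gc_csd);

static int __init gc_csd_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_CSD(&per_cpu(gc_csd, cpu), gc_func, NULL);
	return 0;
}
early_initcall(gc_csd_init);

static void gc_kick_cpu(int cpu)
{
	/*
	 * Returns -EBUSY if our csd is still locked, i.e. the previous
	 * kick has not run yet.  Since gc_func() is idempotent, that
	 * pending kick already covers this one, so no retry is needed.
	 */
	smp_call_function_single_async(cpu, &per_cpu(gc_csd, cpu));
}
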
cheers
luigi