Message-ID: <20200819072209.GU2674@hirez.programming.kicks-ass.net>
Date: Wed, 19 Aug 2020 09:22:09 +0200
From: peterz@...radead.org
To: Christoph Hellwig <hch@....de>
Cc: mingo@...nel.org, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org, will@...nel.org, paulmck@...nel.org,
axboe@...nel.dk, chris@...is-wilson.co.uk, davem@...emloft.net,
kuba@...nel.org, fweisbec@...il.com, oleg@...hat.com,
vincent.guittot@...aro.org
Subject: Re: [RFC][PATCH v2 08/10] smp,irq_work: Use the new irq_work API
On Tue, Aug 18, 2020 at 06:25:42PM +0200, Christoph Hellwig wrote:
> On Tue, Aug 18, 2020 at 12:51:10PM +0200, Peter Zijlstra wrote:
> > Convert the performance-sensitive users of
> > smp_call_function_single_async() over to the new
> > irq_work_queue_remote_static().
> >
> > The new API is marginally less crap but taking these users away allows
> > fixing up smp_call_function_single_async() without risk of performance
> > regressions.
>
> You probably want a conversion patch per subsystem so that it sticks
> out. What is so crap about this API? How could we as subsystem
> maintainers help to make it less crappy?
The problem with both the current smp_call_function_single_async() and
the proposed irq_work_queue_remote_static() is that they require
external serialization and lifetime management.
That is, the need for external serialization comes from the non-atomic
test-and-set they both use. This works nicely when there is external
state that already serializes things, but it totally comes apart (and
causes trivial list corruption) when you get it wrong.
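The difference can be sketched in userspace C; everything here (struct
and function names, the flag layout) is made up for illustration and is
not the kernel code:

```c
#include <stdatomic.h>

/*
 * Illustrative sketch: a "csd" may only be on the pending list once,
 * and a queued flag decides which caller gets to enqueue it.
 */
struct csd {
	_Atomic int queued;	/* 0 = idle, 1 = pending on a list */
};

/*
 * Non-atomic test-and-set: two concurrent callers can both observe
 * queued == 0 and both link the same csd into the llist, corrupting
 * it. Correct only when the callers are externally serialized.
 */
static int try_enqueue_racy(struct csd *csd)
{
	if (atomic_load_explicit(&csd->queued, memory_order_relaxed))
		return 0;	/* already pending */
	atomic_store_explicit(&csd->queued, 1, memory_order_relaxed);
	return 1;		/* caller may enqueue */
}

/*
 * Atomic test-and-set: exactly one caller wins the 0 -> 1 transition,
 * so no external serialization is needed -- at the cost of an atomic
 * RMW on every enqueue attempt.
 */
static int try_enqueue_atomic(struct csd *csd)
{
	return !atomic_exchange_explicit(&csd->queued, 1,
					 memory_order_acq_rel);
}
```

The racy variant is exactly the pattern that blows up once two
unserialized callers race on the same csd.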
The lifetime problem is that you can't tell when the IPI is done, and
therefore when it is safe to reuse or free the entry.
The newly introduced irq_work_queue_remote() suffers from neither of
these problems, and patch 9 fixes the first one for
smp_call_function_single_async(). The whole smp_call_function*() class
still has the second issue for .wait=0; typically those users pair the
call with a completion or some other state when/where it matters.
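That pairing can be sketched as a userspace analogue with pthreads; the
names (completion, csd, ipi_handler) are illustrative stand-ins, not the
kernel API:

```c
#include <pthread.h>

/*
 * Userspace analogue of pairing a .wait=0 cross-call with a
 * completion: the callback signals "done", and the owner waits on
 * that before it may reuse or free the csd.
 */
struct completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;
};

struct csd {
	void (*func)(void *info);
	void *info;
	struct completion done;
};

static void complete(struct completion *x)
{
	pthread_mutex_lock(&x->lock);
	x->done = 1;
	pthread_cond_signal(&x->cond);
	pthread_mutex_unlock(&x->lock);
}

static void wait_for_completion(struct completion *x)
{
	pthread_mutex_lock(&x->lock);
	while (!x->done)
		pthread_cond_wait(&x->cond, &x->lock);
	pthread_mutex_unlock(&x->lock);
}

/* Stand-in for the remote CPU's IPI handler running the callback. */
static void *ipi_handler(void *arg)
{
	struct csd *csd = arg;

	csd->func(csd->info);
	complete(&csd->done);	/* only now may the owner free csd */
	return NULL;
}

/* Example callback: store a value through csd->info. */
static void store_answer(void *info)
{
	*(int *)info = 42;
}
```

The owner calls wait_for_completion(&csd->done) before freeing the
entry; without some such signal there is simply no way to know the
handler is finished with it.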
Patch 9 also shows why I introduced irq_work_queue_remote_static(): the
additional atomic op on enqueue is of course not cheap, and I can
imagine a bunch of users that don't really need it and won't want to
pay that price.
From a user pov (I'm one too), I'm not sure what we can do about this,
other than possibly accept the extra overhead :/
I do have a TODO item to see if I can come up with extra debugging
checks to catch abuse of these fragile things. One possibility is to
have csd_unlock() also set csd->next = NULL, and have the llist_add()
users verify csd->next == NULL before doing the add.
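A rough, single-threaded sketch of that check; the names mirror the
kernel's llist/csd but this is illustrative userspace code, not the
kernel implementation (the real llist_add() is a lockless cmpxchg
loop):

```c
#include <stddef.h>
#include <stdio.h>

struct llist_node {
	struct llist_node *next;
};

struct llist_head {
	struct llist_node *first;
};

/* llist_add() wrapper that traps a node that is still linked. */
static int checked_llist_add(struct llist_node *node,
			     struct llist_head *head)
{
	/*
	 * A non-NULL ->next means the node is still linked somewhere;
	 * adding it again would corrupt the list. Note the list tail
	 * also has ->next == NULL, so the check is best-effort.
	 */
	if (node->next != NULL) {
		fprintf(stderr, "csd double enqueue\n");
		return 0;
	}
	node->next = head->first;
	head->first = node;	/* non-atomic stand-in for llist_add() */
	return 1;
}

/* The csd_unlock() side: clear ->next once the handler is done. */
static void csd_unlock(struct llist_node *node)
{
	node->next = NULL;
}
```

Since only csd_unlock() resets ->next, any second enqueue of a
still-linked entry trips the check instead of silently corrupting the
list.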