[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAHk-=whUgeZGcs5YAfZa07BYKNDCNO=xr4wT6JLATJTpX0bjGg@mail.gmail.com>
Date: Tue, 10 Mar 2020 15:31:10 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Jeff Layton <jlayton@...nel.org>
Cc: NeilBrown <neilb@...e.de>, yangerkun <yangerkun@...wei.com>,
kernel test robot <rong.a.chen@...el.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
Bruce Fields <bfields@...ldses.org>,
Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [locks] 6d390e4b5d: will-it-scale.per_process_ops -96.6% regression
On Tue, Mar 10, 2020 at 3:07 PM Jeff Layton <jlayton@...nel.org> wrote:
>
> Given that, and the fact that Neil pointed out that yangerkun's latest
> patch would reintroduce the original race, I'm leaning back toward the
> patch Neil sent yesterday. It relies solely on spinlocks, and so doesn't
> have the subtle memory-ordering requirements of the others.
It has subtle locking changes, though.
It now calls the "->lm_notify()" callback with the wait queue spinlock held.
is that ok? It's not obvious. Those functions take other spinlocks,
and wake up other things. See for example nlmsvc_notify_blocked()..
Yes, it was called under the blocked_lock_lock spinlock before too,
but now there's an _additional_ spinlock, and it must not call
"wake_up(&waiter->fl_wait))" in the callback, for example, because it
already holds the lock on that wait queue.
Maybe that is never done. I don't know the callbacks.
I was really hoping that the simple memory ordering of using that
smp_store_release -> smp_load_acquire using fl_blocker would be
sufficient. That's a particularly simple and efficient ordering.
Oh well. If you want to go that spinlock way, it needs to document why
it's safe to do a callback under it.
Linus
Powered by blists - more mailing lists