Message-ID: <CAHk-=whSJbODMVmxxDs64f7BaESKWuMqOxWGpjUSDn6Jzqa71g@mail.gmail.com>
Date: Sat, 25 Jul 2020 11:48:41 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Hugh Dickins <hughd@...gle.com>, Michal Hocko <mhocko@...nel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [RFC PATCH] mm: silence soft lockups from unlock_page
On Sat, Jul 25, 2020 at 3:14 AM Oleg Nesterov <oleg@...hat.com> wrote:
>
> Heh. I too thought about this. And just in case, your patch looks correct
> to me. But I can't really comment on this behavioural change. Perhaps it
> should come in a separate patch?
We could do that. At the same time, both parts change how the
waitqueue works enough that it might as well just be one "fix
page_bit_wait waitqueue usage" patch.
But let's wait to see what Hugh's numbers say.
> In essense, this partly reverts your commit 3510ca20ece0150
> ("Minor page waitqueue cleanups"). I mean this part:
Well, no. I mean, it keeps the "always add to the tail" behavior.
But some of the reasons for it have gone away. Now we could just make
it go back to always doing non-exclusive waits at the head.
The non-exclusive waiters _used_ to re-insert themselves on the queue
until they saw the bit clear, so waking them up when the bit was only
going to be set again just made for unnecessary scheduling and
waitlist noise.
That reason is gone.
But I think the fundamental fairness issue might still be there. So
I'll keep the simpler "always add at the end".
But you're right - we could expedite the non-exclusive waiters even more.
Linus