Message-ID: <CA+55aFw1A1C8qUeKPUzACrsqn97UDxTP3M2SRs80aEztfU=Qbg@mail.gmail.com>
Date: Mon, 14 Aug 2017 20:28:19 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Andi Kleen <ak@...ux.intel.com>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>,
    Peter Zijlstra <peterz@...radead.org>,
    Ingo Molnar <mingo@...e.hu>, Kan Liang <kan.liang@...el.com>,
    Andrew Morton <akpm@...ux-foundation.org>,
    Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
    linux-mm <linux-mm@...ck.org>,
    Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk

On Mon, Aug 14, 2017 at 8:15 PM, Andi Kleen <ak@...ux.intel.com> wrote:
> But what should we do when some other (non page) wait queue runs into the
> same problem?

Hopefully the same: root-cause it.

Once you have a test-case, it should generally be fairly simple to
root-cause with profiles: just look at who the caller is when ttwu()
(or whatever it is that ends up being the most noticeable part of the
wakeup chain) shows up very heavily.

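Something like this (hypothetical reproducer name; the exact perf
invocation doesn't matter much, any callchain profile will do):

  perf record -g -a -- ./reproducer
  perf report

and then looking at the hot callchains leading into try_to_wake_up()
should point at the offending waitqueue.
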
And I think that ends up being true whether the "break up long chains"
patch goes in or not. Even if we end up allowing interrupts in the
middle, a long wait-queue is a problem.

I think the "break up long chains" thing may be the right thing
against actual malicious attacks, but not for any actual real
benchmark or load.
I don't think we normally have cases of long wait-queues, though. At
least not the kinds that cause problems. The real (and valid)
thundering herd cases should already be using exclusive waiters that
only wake up one process at a time.

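For reference, the usual exclusive-waiter pattern looks roughly like
this (a minimal sketch: "my_wq" and "resource_ready" are made-up
names, and the locking around the condition is elided):

  #include <linux/wait.h>
  #include <linux/sched.h>

  static DECLARE_WAIT_QUEUE_HEAD(my_wq);
  static bool resource_ready;

  static void wait_for_resource(void)
  {
          DEFINE_WAIT(wait);

          for (;;) {
                  /* Add ourselves at the _tail_ as an exclusive
                   * waiter: a plain wake_up() stops after waking
                   * one of these, instead of the whole list. */
                  prepare_to_wait_exclusive(&my_wq, &wait,
                                            TASK_UNINTERRUPTIBLE);
                  if (resource_ready)
                          break;
                  schedule();
          }
          finish_wait(&my_wq, &wait);
  }

The waker side then just does wake_up(&my_wq), and only one exclusive
waiter gets to run per event, so the list walk stays short.
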
The page bit-waiting is hopefully special. As mentioned, we used to
have some _really_ special code for it for other reasons, and I
suspect you see this problem with the page waitqueues because we
over-simplified them: we went from a per-zone, dynamically sized
table (where the per-zone part caused both performance problems and
actual bugs) to that "static small array".

So I think/hope that just re-introducing some dynamic sizing will help
sufficiently, and that this really is an odd and unusual case.

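And "dynamic sizing" could look like what the other big system hashes
do at boot (again just a hand-waved sketch, not an actual patch: the
names and the scale factor are made up):

  #include <linux/wait.h>
  #include <linux/hash.h>
  #include <linux/bootmem.h>

  static wait_queue_head_t *page_wait_table __read_mostly;
  static unsigned int page_wait_shift __read_mostly;

  static void __init page_wait_table_init(void)
  {
          unsigned int i;

          /* Size the table by system memory, the way the inode
           * and dentry hashes do. The scale factor is made up. */
          page_wait_table = alloc_large_system_hash("page-waitqueue",
                                  sizeof(wait_queue_head_t),
                                  0, 14, 0,
                                  &page_wait_shift, NULL, 0, 0);

          for (i = 0; i < (1U << page_wait_shift); i++)
                  init_waitqueue_head(&page_wait_table[i]);
  }

page_waitqueue() would then hash with page_wait_shift instead of a
compile-time constant.
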
Linus