Message-ID: <20150420061836.GA11191@gmail.com>
Date: Mon, 20 Apr 2015 08:18:36 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Chris Mason <clm@...com>, Steven Rostedt <rostedt@...dmis.org>,
fredrik.markstrom@...driver.com, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/2] futex: lockless wakeups

* Davidlohr Bueso <dave@...olabs.net> wrote:
> Given the overall futex architecture, any chance of reducing
> hb->lock contention is welcome. In this particular case, using
> wake-queues to enable lockless wakeups addresses very real world
> performance concerns, even soft-lockups seen with large numbers
> of blocked tasks (which are not hard to trigger on large boxes
> using just a handful of futexes).
>
> At the lowest level, this patch can reduce the latency of a single
> thread attempting to acquire hb->lock in highly contended scenarios
> by up to 2x. At lower counts of nr_wake there are no regressions,
> confirming, of course, that the wake_q handling overhead is
> practically non-existent. For instance, while there is a fair amount
> of variation, the extended perf-bench wakeup benchmark shows the
> following avg per-thread time to wake up its share of tasks on a
> 20 core machine:
>
> nr_thr   ms-before   ms-after
> 16       0.0590      0.0215
> 32       0.0396      0.0220
> 48       0.0417      0.0182
> 64       0.0536      0.0236
> 80       0.0414      0.0097
> 96       0.0672      0.0152
>
> Naturally, this can cause spurious wakeups. [...]

Please write a small description we can cite to driver authors once
the (inevitable) breakages appear, outlining this new behavior and its
implications, so that we can fix any remaining bugs ASAP.
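
Roughly this pattern, just as a sketch using the wake_q primitives
this series introduces (the real futex_wake() path has more to it,
and 'waiter' here just stands in for the queued entry):

	WAKE_Q(wake_q);			/* on-stack wake queue */

	spin_lock(&hb->lock);
	/*
	 * Under hb->lock we only *queue* the to-be-woken task,
	 * which keeps the lock hold time short:
	 */
	wake_q_add(&wake_q, waiter->task);
	spin_unlock(&hb->lock);

	/*
	 * The wakeup itself happens here, after hb->lock has been
	 * dropped. The woken task can thus run and observe state
	 * that has not changed yet from its point of view - i.e. it
	 * sees a 'spurious' wakeup and must re-check its condition.
	 */
	wake_up_q(&wake_q);
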
I'll also leave this pending a bit longer than other changes, to make
sure we shake out any bugs/regressions triggered by it.

Third, it might make sense to add a new 'spurious wakeup injection
debug mechanism' that, if enabled in the .config, automatically and
continuously inserts spurious wakeups at a given, slightly randomized
rate - which would ensure that all kernel facilities can robustly
handle spurious wakeups.
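
Something like the following, purely as an (untested) sketch - the
config option name and the tick-based hook point are made up, nothing
like this exists today:

	#ifdef CONFIG_DEBUG_SPURIOUS_WAKEUP_INJECT	/* hypothetical option */
	/*
	 * Called (for example) from the scheduler tick: with a small,
	 * slightly randomized probability, wake a task that is blocked
	 * in TASK_INTERRUPTIBLE, to flush out wait loops that do not
	 * re-check their wakeup condition.
	 */
	static void spurious_wakeup_inject(void)
	{
		struct task_struct *p;

		/* Fire on roughly 1 out of 100000 calls, randomized: */
		if (prandom_u32() % 100000)
			return;

		rcu_read_lock();
		for_each_process(p) {
			if (p->state == TASK_INTERRUPTIBLE) {
				wake_up_process(p);
				break;
			}
		}
		rcu_read_unlock();
	}
	#else
	static inline void spurious_wakeup_inject(void) { }
	#endif
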
My guess would be that most remaining fragilities against spurious
wakeups ought to be in the boot/init phase, so I'd keep an eye out for
suspend/resume regressions.
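
(The fragile cases are typically open-coded wait loops that treat
being woken as proof that their condition holds. The robust pattern -
which is what wait_event() and friends do internally - is roughly:

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (condition)		/* whatever we are waiting for */
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);

i.e. the condition is re-checked after every wakeup, so an extra,
'spurious' wakeup is merely a performance blip, not a correctness
problem.)
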
> [...] However there is core code that cannot handle them afaict, and
> furthermore tglx does have the point that other events can already
> trigger them anyway.

s/there is core code/there is no core code

Thanks,
Ingo