Message-ID: <CA+55aFx3vx+A1nmbUikDv1ddy-wJNGYzGDjSaiQz0BoXjLEEzA@mail.gmail.com>
Date: Wed, 9 Apr 2014 08:12:41 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Jan Stancek <jstancek@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Davidlohr Bueso <davidlohr@...com>,
Ingo Molnar <mingo@...nel.org>,
Larry Woodman <lwoodman@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Darren Hart <dvhart@...ux.intel.com>
Subject: Re: [PATCH] futex: avoid race between requeue and wake

On Wed, Apr 9, 2014 at 4:46 AM, Jan Stancek <jstancek@...hat.com> wrote:
>
>
> I'm running reproducer with this patch applied on 3 systems:
> - two s390x systems where this can be reproduced within seconds
> - x86_64 Intel(R) Xeon(R) CPU E5240 @ 3.00GHz, where I could
> reproduce it on average in ~3 minutes.
>
> It's running without failure over 4 hours now.

Ok. I committed my second patch.

It might be possible to avoid the two extra atomics by simply not
incrementing the target hash queue waiters count (again) in
requeue_futex() the first time we hit that case, and then avoiding the
final decrement too. But that is actually fairly complicated because
we might be requeuing multiple entries (or fail to requeue any at
all). We do have all that "drop_count" logic, so it's certainly quite
possible, but it gets complex and we'd need to be crazy careful and
pass in the state to everybody involved. So it isn't something I'm
personally willing to do. But if somebody cares, there's a slight
optimization opportunity in this whole futex_requeue() situation wrt
the waiter count increment/decrement thing.

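For reference, the pattern in question looks roughly like the sketch
below. This is a stand-alone user-space model, not the actual
kernel/futex.c code (the names "struct bucket", "maybe_wake" and
"requeue_one" are invented for illustration), but it shows where the
extra per-requeue atomic dec/inc comes from, and why eliding it would
mean tracking how many entries actually moved.

/*
 * Stand-alone user-space model of the waiter-count bookkeeping, NOT
 * the real kernel/futex.c code.  The real code keeps a per-bucket
 * atomic waiter count so the wake path can skip taking the bucket
 * lock when nobody can possibly be queued there.
 */
#include <stdatomic.h>
#include <stdio.h>

struct bucket {
	atomic_int waiters;	/* waiters that may be queued on this bucket */
	/* the real structure also has a spinlock and a wait list */
};

/* Wake side: only take the (elided) lock if someone might be waiting. */
static int maybe_wake(struct bucket *hb)
{
	if (atomic_load(&hb->waiters) == 0)
		return 0;	/* fast path: nothing to wake, no lock taken */
	/* lock hb, walk the list, wake matching waiters ... */
	return 1;
}

/*
 * Requeue side: moving one waiter from hb1 to hb2 has to keep both
 * counts exact, hence one extra atomic dec and one extra atomic inc
 * per moved entry.  Folding these into counts the caller already
 * manipulates would require tracking how many entries really moved
 * (possibly zero), which is the complication described above.
 */
static void requeue_one(struct bucket *hb1, struct bucket *hb2)
{
	/* unlink the waiter from hb1's list, link it onto hb2's ... */
	atomic_fetch_sub(&hb1->waiters, 1);
	atomic_fetch_add(&hb2->waiters, 1);
}

int main(void)
{
	struct bucket a, b;

	atomic_init(&a.waiters, 1);
	atomic_init(&b.waiters, 0);

	requeue_one(&a, &b);
	printf("a=%d b=%d woken=%d\n",
	       atomic_load(&a.waiters), atomic_load(&b.waiters),
	       maybe_wake(&b));
	return 0;
}
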
Linus