Message-ID: <20091118042128.GC23808@google.com>
Date: Tue, 17 Nov 2009 20:21:28 -0800
From: Michel Lespinasse <walken@...gle.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>,
Darren Hart <dvhltc@...ibm.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] futex: add FUTEX_SET_WAIT operation
On Tue, Nov 17, 2009 at 07:24:09AM -0800, Linus Torvalds wrote:
> The FUTEX_SET_WAIT concept seems well-defined, although it sounds more
> like a FUTEX_CMPXCHG_WAIT to me than a "SET" operation. I'm not entirely
> sure that we really want to do the CMPXCHG in the kernel rather than in
> user space, since lock stealing generally isn't a problem, but I don't
> think it's _wrong_ to add this concept.
>
> In fact, CMPXCHG is generally seen to be the "fundamental" base for
> implementing locking, so in that sense it makes perfect sense to have it
> as a FUTEX model.
My first version called the operation that way, but it did *NOT* block if
val2 (now renamed setval) was already set in the futex. It turned out that
blocking in that situation helps my use case, so I changed the operation
accordingly and renamed it to FUTEX_SET_WAIT (with a CAS model in mind,
though it's still similar to cmpxchg in that it just returns if the uval
is neither 'val' nor 'setval').
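To make the semantics concrete, here is roughly how I expect userspace
to use this for a simple three-state mutex. The opcode value and the
syscall argument slot carrying setval below are placeholders for
illustration only, not necessarily what the patch defines:

/* Sketch only: FUTEX_SET_WAIT's opcode value and the argument slot
 * used for setval are assumptions made for this example. */
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef FUTEX_SET_WAIT
#define FUTEX_SET_WAIT 16	/* hypothetical opcode */
#endif

enum { UNLOCKED = 0, LOCKED = 1, CONTENDED = 2 };

static void lock(int *futex)
{
	/* Fast path: uncontended acquire via userspace cmpxchg. */
	if (__sync_bool_compare_and_swap(futex, UNLOCKED, LOCKED))
		return;
	for (;;) {
		/* Kernel atomically does:
		 *   if (*futex == LOCKED)	{ *futex = CONTENDED; sleep; }
		 *   else if (*futex == CONTENDED)	sleep;
		 *   else	return -EWOULDBLOCK;  (e.g. *futex == UNLOCKED)
		 */
		syscall(SYS_futex, futex, FUTEX_SET_WAIT,
			LOCKED /* val */, NULL /* timeout */,
			NULL, CONTENDED /* setval */);
		/* Woken (or told not to block): retry the acquire, going
		 * straight to CONTENDED since other waiters may be queued. */
		if (__sync_bool_compare_and_swap(futex, UNLOCKED, CONTENDED))
			return;
	}
}

static void unlock(int *futex)
{
	/* Wake one waiter if anybody may be blocked. */
	if (__atomic_exchange_n(futex, UNLOCKED, __ATOMIC_SEQ_CST) == CONTENDED)
		syscall(SYS_futex, futex, FUTEX_WAKE, 1, NULL, NULL, 0);
}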
> That said, I personally think the adaptive wait model is (a) more likely
> to fix many performance issues and (b) a bit more high-level concept, so I
> like Peter's patch too, but I don't see that the patches would really be
> mutually exclusive.
>
> Of course, it's possible that Michel's performance problem is fixed by the
> adaptive approach too, in which case the FUTEX_SET_WAIT (or _CMPXCHG_WAIT)
> patch is just fundamentally less interesting. But some people do need
> fairness - even when it's bad for performance - so...
>
> One thing that does strike me is that _if_ we want to do both interfaces,
> then I would assume that we quite likely also want to have an adaptive
> version of the FUTEX_SET|CMPXCHG_WAIT thing. Which perhaps implies that
> the "ADAPTIVE" part should be a bitflag in the command value?
I like the adaptive approach as well, though I'm not sure yet if it'd work
for us. I can try it but it'll take a bit of time.
One difficulty with adaptive spinning is that we want to avoid deadlocks.
If two threads end up spinning in-kernel waiting for each other, we better
have preemption enabled... or detect and deal with the situation somehow.
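As a rough userspace analogy of that worry (plain spinlocks standing in
for in-kernel adaptive spinning; the deadlock only triggers on the bad
interleaving):

/* ABBA pattern: each thread holds one lock and spins for the other.
 * An adaptive spinner keeps spinning while the lock owner is on-CPU,
 * and here each owner *is* on-CPU, busily spinning itself, so the
 * "owner is running" heuristic never gives up. */
#include <pthread.h>
#include <stdatomic.h>

static atomic_int lock_a, lock_b;

static void spin_lock(atomic_int *l)
{
	int expected = 0;
	while (!atomic_compare_exchange_weak(l, &expected, 1))
		expected = 0;
}

static void *thread1(void *arg)
{
	spin_lock(&lock_a);
	spin_lock(&lock_b);	/* spins forever once thread2 holds lock_b */
	return arg;
}

static void *thread2(void *arg)
{
	spin_lock(&lock_b);
	spin_lock(&lock_a);	/* spins forever once thread1 holds lock_a */
	return arg;
}

int main(void)
{
	pthread_t t1, t2;
	pthread_create(&t1, NULL, thread1, NULL);
	pthread_create(&t2, NULL, thread2, NULL);
	pthread_join(t1, NULL);	/* never returns if both got their first lock */
	return 0;
}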
Also, one aspect I dislike is that this would impose a fixed format on the
futex for storing the TID. I would prefer it if several bits in the futex
were left available for userspace to do whatever it wants (see the layout
sketch further below). 8 bits would likely be enough, which leaves 24 for
the TID - enough for us, but I have no idea if that's good enough for
upstream inclusion. If that's not possible, one possible compromise could be:
- userspace passes a TID (which it extracted from the futex value; but kernel
does not necessarily know how)
- kernel spins until that TID goes to sleep, or the futex value is not equal
to val or setval anymore
- if val != setval and the futex value is val, set it to setval
- if the futex value is setval, block; otherwise return -EWOULDBLOCK.
If the lock got stolen by a different thread, userspace can decide to
retry with or without adaptive spinning.
That would be the most generic interface I can think of, though it's
starting to be a LOT of parameters - actually, too many to pass through
the _syscall6 interface.
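For reference, the kind of futex layout I'd like userspace to remain
free to pick looks something like this - the field split and the helper
names below are just an example, nothing the kernel would need to know
about under the compromise above:

/* Example only: one way userspace could pack a 24-bit TID plus 8
 * private bits into a 32-bit futex word.  The kernel never sees the
 * layout; userspace extracts the TID itself and passes it in. */
#include <stdint.h>

#define FUTEX_TID_BITS   24
#define FUTEX_TID_MASK   ((1u << FUTEX_TID_BITS) - 1)	/* 0x00ffffff */
#define FUTEX_USER_SHIFT FUTEX_TID_BITS
#define FUTEX_USER_MASK  (0xffu << FUTEX_USER_SHIFT)	/* 0xff000000 */

static inline uint32_t futex_pack(uint32_t tid, uint32_t user_bits)
{
	return (tid & FUTEX_TID_MASK) |
	       ((user_bits << FUTEX_USER_SHIFT) & FUTEX_USER_MASK);
}

static inline uint32_t futex_tid(uint32_t futex_val)
{
	return futex_val & FUTEX_TID_MASK;
}

/* Under the compromise, userspace would call something like
 *   futex_wait_adaptive(uaddr, val, setval, futex_tid(val));
 * (a hypothetical wrapper) and the kernel would spin until that TID
 * sleeps or *uaddr leaves {val, setval}. */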
I also like Darren's suggestion to do a FUTEX_SET_WAIT_REQUEUE_PI,
but it hits the same 'too many parameters' limitation :/
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.