Message-ID: <51A0C166.4000503@redhat.com>
Date: Sat, 25 May 2013 09:49:26 -0400
From: Rik van Riel <riel@...hat.com>
To: Manfred Spraul <manfred@...orfullife.com>
CC: LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.com>,
Davidlohr Bueso <davidlohr.bueso@...com>, hhuang@...hat.com,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] ipc/sem.c: fix lockup, restore FIFO behavior
On 05/25/2013 04:54 AM, Manfred Spraul wrote:
> Hi Rik,
>
> I came up with a completely different approach:
>
> The patch
> a) fixes a lockup due to a missing restart.
> b) makes the code again FIFO.
>
> Changes:
> - the wait-for-zero operations are moved into separate lists. Thus they can
>   be checked separately, without rescanning the whole queue.
> - If a complex operation arrives, then all pending change operations are
>   moved into the global queue. This keeps everything FIFO.
>
> Advantage:
> - Fewer restarts in update_queue(), because pending wait-for-zero operations
>   no longer force a restart.
> - Efficient handling of wait-for-zero semops, both simple and complex.
> - FIFO. Dropping FIFO is a user visible change, and I'm a coward.
> - Simpler check_restart logic.
>
> Disadvantage:
> When a complex operation arrives, the semaphore array goes into a
> complex_present mode that always acquires the global lock. Even after the
> complex operations have completed, pending simple decrease operations
> prevent the array from switching back. The array only switches back once
> the pending queue contains nothing but simple wait-for-zero semops (or is
> empty).
>
> But: Let's wait and see whether such an application really exists: one that
> only rarely issues complex operations (and that doesn't prefer FIFO semantics).
I do not like that downside at all.
The danger of staying in "too slow to be useful" mode forever
is really not a risk I want to take.
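
For anyone skimming the thread, here is a minimal userspace sketch (my own
illustration, not code from the patch) of the three kinds of semop the
discussion keeps coming back to: a simple alter op, a simple wait-for-zero,
and a "complex" op (a single semop() call carrying more than one sembuf).
The complex case is the one that needs a consistent view of the whole
array, which is why it drags in the global lock:

/*
 * Illustration only: how the kernel would classify these calls.
 * The semop() return values are deliberately ignored; the point is
 * just which category each call falls into.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
	/* array of two semaphores, both initialized to 0 */
	int id = semget(IPC_PRIVATE, 2, IPC_CREAT | 0600);
	if (id < 0) {
		perror("semget");
		return 1;
	}

	struct sembuf zero = { .sem_num = 0, .sem_op = 0, .sem_flg = IPC_NOWAIT };
	struct sembuf dec  = { .sem_num = 0, .sem_op = -1, .sem_flg = IPC_NOWAIT };
	struct sembuf multi[2] = {
		{ .sem_num = 0, .sem_op = 1, .sem_flg = 0 },
		{ .sem_num = 1, .sem_op = -1, .sem_flg = IPC_NOWAIT },
	};

	semop(id, &zero, 1);	/* simple wait-for-zero on semaphore 0 */
	semop(id, &dec, 1);	/* simple alter op (fails with EAGAIN here) */
	semop(id, multi, 2);	/* complex op: semaphores 0 and 1 in one call */

	semctl(id, 0, IPC_RMID);
	return 0;
}

One multi-sembuf call like the last one is enough to flip the array into
complex_present mode, and with your scheme a steady stream of simple
decrements can then keep it pinned there.
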
--
All rights reversed