Message-ID: <4BF02402.8060204@colorfullife.com>
Date: Sun, 16 May 2010 18:57:38 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: Nick Piggin <npiggin@...e.de>, Chris Mason <chris.mason@...cle.com>
CC: zach.brown@...cle.com, jens.axboe@...cle.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] ipc semaphores: reduce ipc_lock contention in semtimedop
On 04/13/2010 08:57 PM, Nick Piggin wrote:
> On Tue, Apr 13, 2010 at 02:19:37PM -0400, Chris Mason wrote:
>
>> I don't see anything in the docs about the FIFO order. I could add an
>> extra sort on sequence number pretty easily, but is the starvation case
>> really that bad?
>>
> Yes, because it's not just a theoretical livelock, it can be basically
> a certainty, given the right pattern of semops.
>
> You could have two mostly-independent groups of processes, each taking
> and releasing a different sem, which are always contended (eg. if it is
> being used for a producer-consumer type situation, or even just mutual
> exclusion with high contention).
>
> Then you could have some overall management process for example which
> tries to take both sems. It will never get it.
>
>
The management process won't get the semaphores on Linux either:
Linux implements FIFO wakeups, but there is no protection against starvation at all.
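For reference, the combined operation in that scenario would look roughly
like the sketch below. This is illustrative only (made-up semaphore indices,
initial values and setup), not code from this thread:

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* On Linux the caller must define union semun itself. */
union semun {
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

int main(void)
{
	/* Array with two semaphores, both initially free (value 1). */
	int semid = semget(IPC_PRIVATE, 2, IPC_CREAT | 0600);
	unsigned short init[2] = { 1, 1 };
	union semun arg = { .array = init };

	if (semid < 0 || semctl(semid, 0, SETALL, arg) < 0) {
		perror("setup");
		return 1;
	}

	/*
	 * The "management" operation: take semaphore 0 and semaphore 1
	 * in one atomic semop().  If each semaphore is permanently
	 * contended by its own group of workers, this combined op may
	 * never find both free at the same instant - unless waiters
	 * are served in FIFO order.
	 */
	struct sembuf take_both[2] = {
		{ .sem_num = 0, .sem_op = -1, .sem_flg = 0 },
		{ .sem_num = 1, .sem_op = -1, .sem_flg = 0 },
	};
	if (semop(semid, take_both, 2) < 0) {
		perror("semop");
		return 1;
	}

	printf("got both semaphores\n");
	semctl(semid, 0, IPC_RMID);
	return 0;
}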
If I understand the benchmark numbers correctly, a 4-core, 2 GHz Phenom
is able to do ~ 2 million semaphore operations per second in one
semaphore array.
That's the limit: cache line thrashing on the sma structure prevents
higher numbers.
For a NUMA system, the limit is probably lower.
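For reference, a single-process semop() throughput loop on one array can be
as simple as the sketch below (illustrative only, not the benchmark those
numbers came from; real contention needs several processes hitting the same
array in parallel):

/* build: gcc -O2 semop_loop.c -lrt */
#include <stdio.h>
#include <time.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun {
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

int main(void)
{
	int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
	union semun arg = { .val = 1 };

	if (semid < 0 || semctl(semid, 0, SETVAL, arg) < 0) {
		perror("setup");
		return 1;
	}

	struct sembuf lock   = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
	struct sembuf unlock = { .sem_num = 0, .sem_op =  1, .sem_flg = 0 };

	const long iters = 1000000;
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < iters; i++) {
		semop(semid, &lock, 1);		/* P */
		semop(semid, &unlock, 1);	/* V */
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) +
		      (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.0f semop()s per second\n", 2.0 * iters / secs);

	semctl(semid, 0, IPC_RMID);
	return 0;
}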
Chris:
Do you have an estimate of how many semop() calls your app will perform on one array?
Perhaps we should really remove the per-array list, sma->sem_perm.lock
and sma->sem_otime.
--
Manfred