Message-Id: <200703131302.45216.dada1@cosmosbay.com>
Date: Tue, 13 Mar 2007 13:02:44 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Andrea Arcangeli <andrea@...e.de>
Cc: Nick Piggin <nickpiggin@...oo.com.au>,
Anton Blanchard <anton@...ba.org>,
Rik van Riel <riel@...hat.com>,
Lorenzo Allegrucci <l_allegrucci@...oo.it>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Suparna Bhattacharya <suparna@...ibm.com>,
Jens Axboe <jens.axboe@...cle.com>
Subject: Re: SMP performance degradation with sysbench
On Tuesday 13 March 2007 12:42, Andrea Arcangeli wrote:
> My wild guess is that they're allocating memory after taking
> futexes. If they do, something like this will happen:
>
>    taskA                    taskB              taskC
>    user lock
>                             mmap_sem lock
>    mmap sem -> schedule
>                                                user lock -> schedule
>
> If taskB wouldn't be there triggering more random thrashing over the
> mmap_sem, the lock holder wouldn't wait and task C wouldn't wait too.
>
> I suspect the real fix is not to allocate memory or to run other
> expensive syscalls that can block inside the futex critical sections...
glibc malloc uses arenas, and trylock() only. It should not block: if an
arena is already locked, the thread automatically chooses another arena, and
may create a new one if necessary.
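
Purely as illustration (this is not glibc's actual code), the pattern looks
roughly like this; arena_t, arena_list and create_new_arena are made-up names,
and the list manipulation is left unsynchronized for brevity:

/*
 * Illustrative sketch only -- NOT actual glibc source.  Try each
 * arena's lock with trylock(); if every arena is busy, create a
 * fresh one rather than sleeping on someone else's arena.
 */
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct arena {
	pthread_mutex_t lock;
	struct arena *next;
} arena_t;

static arena_t *arena_list;		/* head of the arena list */

static arena_t *create_new_arena(void)
{
	arena_t *a = malloc(sizeof(*a));

	if (a == NULL)
		return NULL;
	pthread_mutex_init(&a->lock, NULL);
	a->next = arena_list;		/* would need its own lock in real code */
	arena_list = a;
	return a;
}

/* Pick an arena without ever sleeping on another thread's arena lock. */
static arena_t *pick_arena(void)
{
	arena_t *a;

	for (a = arena_list; a != NULL; a = a->next) {
		if (pthread_mutex_trylock(&a->lock) == 0)
			return a;	/* returned with a->lock held */
	}

	/* All arenas busy: allocate and lock a new one instead of blocking. */
	a = create_new_arena();
	if (a != NULL)
		pthread_mutex_lock(&a->lock);
	return a;
}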
But yes, mmap_sem contention is a big problem, because it is also taken by
the futex code (unfortunately).
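
For reference, this is roughly why the futex path touches mmap_sem: before
queueing a waiter, the 2.6-era futex code has to resolve the user address to a
stable key, which means walking the mm under the read side of mmap_sem, so it
contends with mmap()/munmap()/brk() writers. A simplified sketch, not the
literal kernel/futex.c (futex_wait_sketch is a made-up name, and the exact
signatures differ between kernel versions):

/* Schematic only -- loosely modeled on 2.6-era kernel/futex.c. */
static int futex_wait_sketch(u32 __user *uaddr, u32 val)
{
	struct mm_struct *mm = current->mm;
	union futex_key key;
	int ret;

	/*
	 * Resolving uaddr to a stable key (vma/inode/offset) requires
	 * walking the mm, so the futex path takes mmap_sem for read.
	 */
	down_read(&mm->mmap_sem);
	ret = get_futex_key(uaddr, &key);
	if (ret)
		goto out;

	/* ... check *uaddr still equals val, queue ourselves, sleep ... */

out:
	up_read(&mm->mmap_sem);
	return ret;
}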