Message-ID: <20070313114215.GI8992@v2.random>
Date:	Tue, 13 Mar 2007 12:42:15 +0100
From:	Andrea Arcangeli <andrea@...e.de>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Anton Blanchard <anton@...ba.org>, Rik van Riel <riel@...hat.com>,
	Lorenzo Allegrucci <l_allegrucci@...oo.it>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	Suparna Bhattacharya <suparna@...ibm.com>,
	Jens Axboe <jens.axboe@...cle.com>
Subject: Re: SMP performance degradation with sysbench

On Tue, Mar 13, 2007 at 10:12:19PM +1100, Nick Piggin wrote:
> They'll be sleeping in futex_wait in the kernel, I think. One thread
> will hold the critical mutex, some will be off doing their own thing,
> but importantly there will be many sleeping for the mutex to become
> available.

The initial assumption was that there was zero idle time with threads
= cpus, and that the idle time showed up only when the number of
threads increased to double the number of cpus. If the idle time
didn't increase with the number of threads, nothing would be suspect.

> However, I tested with a bigger system and actually the idle time
> comes before we saturate all CPUs. Also, increasing the aggressiveness
> of the load balancer did not drop idle time at all, so it is not a case
> of some runqueues idle while others have many threads on them.

It'd be interesting to see the sysrq+t output after the idle time has
increased.

> I guess googlemalloc (tcmalloc?) isn't suitable as a general purpose
> glibc allocator. But I wonder if there are other improvements that glibc
> could make here?

My wild guess is that they're allocating memory after taking
futexes. If they do, something like this will happen:

     taskA                       taskB               taskC
     user lock
                                 mmap_sem lock
     mmap_sem -> schedule
                                                     user lock -> schedule

If taskB weren't there triggering more random thrashing on the
mmap_sem, the lock holder wouldn't wait, and taskC wouldn't wait either.
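
To illustrate, here's a minimal hypothetical sketch of that pattern (not
sysbench's or glibc's actual code; whether malloc reaches mmap_sem depends
on the request size and glibc's mmap threshold):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t user_lock = PTHREAD_MUTEX_INITIALIZER;

/* "taskA" path: the futex-backed mutex is held across an allocation
 * large enough that glibc malloc will typically service it with mmap(),
 * which needs mmap_sem in the kernel.  While we sleep waiting for
 * mmap_sem (being hammered by a "taskB" doing its own mmap/munmap),
 * every "taskC" that wants user_lock sleeps in futex_wait behind us. */
static void *worker(void *arg)
{
	pthread_mutex_lock(&user_lock);		/* user lock */
	void *buf = malloc(1024 * 1024);	/* may call mmap() -> mmap_sem */
	/* ... critical section work ... */
	free(buf);				/* munmap() -> mmap_sem again */
	pthread_mutex_unlock(&user_lock);
	return arg;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	pthread_join(t, NULL);
	return 0;
}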

I suspect the real fix is not to allocate memory, or run other
expensive syscalls that can block, inside the futex critical sections...
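
Purely as an illustration of what I mean (a hypothetical rework of the
sketch above, assuming the allocation doesn't have to happen under the
lock):

/* Do the potentially blocking allocation before taking the user lock,
 * so mmap_sem traffic from other threads no longer extends the futex
 * hold time. */
static void *worker_fixed(void *arg)
{
	void *buf = malloc(1024 * 1024);	/* mmap_sem possible, but no lock held */

	pthread_mutex_lock(&user_lock);
	/* ... critical section work using buf ... */
	pthread_mutex_unlock(&user_lock);
	free(buf);				/* again outside the lock */
	return arg;
}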
