lists.openwall.net — Open Source and information security mailing list archives
Date: Tue, 13 Mar 2007 21:06:14 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Andrea Arcangeli <andrea@...e.de>
CC: Anton Blanchard <anton@...ba.org>, Rik van Riel <riel@...hat.com>,
    Lorenzo Allegrucci <l_allegrucci@...oo.it>, linux-kernel@...r.kernel.org,
    Ingo Molnar <mingo@...e.hu>, Suparna Bhattacharya <suparna@...ibm.com>,
    Jens Axboe <jens.axboe@...cle.com>
Subject: Re: SMP performance degradation with sysbench

Andrea Arcangeli wrote:
> On Tue, Mar 13, 2007 at 04:11:02PM +1100, Nick Piggin wrote:
>
>> Hi Anton,
>>
>> Very cool. Yeah I had come to the conclusion that it wasn't a kernel
>> issue, and basically was afraid to look into userspace ;)
>
> btw, regardless of what glibc is doing, the CPU still shouldn't go
> idle IMHO. Even if we're overscheduling and thrashing over the mmap_sem
> with threads (no idea whether other OSes schedule the task away when
> they find another CPU in the mmap critical section), or if we're
> overscheduling with futex locking, the CPU usage should remain 100%
> system time in the worst case. The only legitimate explanation for
> going idle could be on HT CPUs, where HT may hurt more than help, but
> on real multicore it shouldn't happen.

Well, ignoring the HT issue, I was seeing lots of idle time simply
because userspace could not keep enough load on the scheduler. There
simply were fewer runnable tasks than CPU cores. But it wasn't a case of
all CPUs going idle, just most of them ;)

--
SUSE Labs, Novell Inc.