Date:	Fri, 30 Jan 2009 08:57:50 +0100
From:	Mike Galbraith <efault@....de>
To:	Thomas Pilarski <thomas.pi@...or.de>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Gregory Haskins <ghaskins@...ell.com>,
	bugme-daemon@...zilla.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Bugme-new] [Bug 12562] New: High overhead while switching or
	synchronizing threads on different cores

On Thu, 2009-01-29 at 15:05 +0100, Thomas Pilarski wrote:
> > In short this program is carefully crafted to defeat all our affinity
> > tests - and I'm not sure what to do.
> 
> I am sorry, but it is not carefully crafted. The function random()
> is causing my problem. I currently have no real data, so I tried to
> generate some random utilization and data.

Yeah, rather big difference, mega-contention vs zero-contention.

2.6.28.2, profile of ThreadSchedulingIssue 4 524288 8 200

vma              samples  %        app name                 symbol name
ffffffff80251efa 2574819  31.6774  vmlinux                  futex_wake
ffffffff80251a39 1367613  16.8255  vmlinux                  futex_wait
0000000000411790 815426   10.0320  ThreadSchedulingIssue    random
ffffffff8022b3b5 343692    4.2284  vmlinux                  task_rq_lock
0000000000404e30 299316    3.6824  ThreadSchedulingIssue    __lll_lock_wait_private
ffffffff8030d430 262906    3.2345  vmlinux                  copy_user_generic_string
ffffffff80462af2 235176    2.8933  vmlinux                  schedule
0000000000411b90 210984    2.5957  ThreadSchedulingIssue    random_r
ffffffff80251730 129376    1.5917  vmlinux                  hash_futex
ffffffff8020be10 123548    1.5200  vmlinux                  system_call
ffffffff8020a679 119398    1.4689  vmlinux                  __switch_to
ffffffff8022f49b 110068    1.3541  vmlinux                  try_to_wake_up
ffffffff8024c4d1 106352    1.3084  vmlinux                  sched_clock_cpu
ffffffff8020be20 102709    1.2636  vmlinux                  system_call_after_swapgs
ffffffff80229a2d 100614    1.2378  vmlinux                  update_curr
ffffffff80248309 86475     1.0639  vmlinux                  add_wait_queue
ffffffff80253149 85969     1.0577  vmlinux                  do_futex
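
The futex storm is what you'd expect from glibc's random(), which serializes
every caller on one internal low-level lock (that's the __lll_lock_wait_private
hit above).  As a rough sketch, not glibc's actual code, the hot path behaves
something like this, with locked_random() standing in for random():

#include <pthread.h>

static pthread_mutex_t prng_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long prng_state = 1;		/* shared PRNG state */

/* stand-in for random(): every thread funnels through one lock */
static long locked_random(void)
{
	long r;

	pthread_mutex_lock(&prng_lock);		/* threads pile up here -> futex_wait/futex_wake */
	prng_state = prng_state * 1103515245 + 12345;
	r = (long)((prng_state >> 16) & 0x7fffffff);
	pthread_mutex_unlock(&prng_lock);

	return r;
}

A tight loop over that in N threads turns into the futex_wait/futex_wake
traffic at the top of the profile.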

Versus using the myrand() free-sample cruft generator from the rand(3) manpage.  Poof.

vma      samples  %        app name                 symbol name
004002f4 979506   90.7113  ThreadSchedulingIssue    myrand
00400b00 53348     4.9405  ThreadSchedulingIssue    thread_consumer
00400c25 42710     3.9553  ThreadSchedulingIssue    thread_producer
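
For reference, the generator the rand(3) manpage shows is roughly the sketch
below; it takes no lock at all, so there's nothing left for the kernel to
arbitrate and the time simply lands in myrand() itself.  (Marking the state
__thread is an assumption about how the test was wired up; the manpage
version uses a plain static.)

/* roughly the portable example from rand(3); RAND_MAX assumed to be 32767 */
static __thread unsigned long next = 1;	/* per-thread state: no lock, no sharing */

static int myrand(void)
{
	next = next * 1103515245 + 12345;
	return (unsigned)(next / 65536) % 32768;
}

static void mysrand(unsigned int seed)
{
	next = seed;
}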

One of those "don't _ever_ do that" things?

	-Mike


