Date:	Wed, 23 Nov 2011 15:52:43 -0500
From:	Jérôme Carretero <cJ-ko@...gloub.eu>
To:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Cc:	tglx@...utronix.de
Subject: Q: Process creation and soft hot CPU affinity

Hi,

I noticed something tonight.
The processes spawned during a ./configure run get spread across all of the machine's CPUs.
When processes are launched one after another, why aren't they kept on the same CPU?
I naively assume that this CPU has just finished its work and is "hot",
while the others could keep resting in their C-states/P-states, or whatever.
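
A quick way to watch the spreading is to sample the PSR column of ps while configure runs; this is just a rough sketch, and the sampling interval and the process names to match are arbitrary:

# A few times per second, print which CPU (PSR) each configure
# child (shell, cc1, conftest binaries) is currently running on.
while sleep 0.2; do
  ps -eo psr,pid,comm | egrep 'configure|conftest|cc1'
done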

To measure the performance impact of the current scheduling choices, I ran a little benchmark:
I timed sh ./configure in a cpuset cgroup, once restricted to a single CPU and once allowed on all CPUs, and got a significant difference.

benchmark_setup() {
  cgrp=1cpu
  ncpus=8
  cgroup_mnt=/sys/fs/cgroup
  coreutils_tar=/var/paludis/distfiles/coreutils-8.13.tar.xz

  # A cpuset needs cpuset.cpus and cpuset.mems populated before any
  # task can be attached to it.
  mkdir -p $cgroup_mnt/$cgrp
  cat $cgroup_mnt/cpuset.cpus > $cgroup_mnt/$cgrp/cpuset.cpus
  echo 0 > $cgroup_mnt/$cgrp/cpuset.mems
  echo $$ > $cgroup_mnt/$cgrp/tasks

  cd /dev/shm
}

benchmark() {
  tar xf $coreutils_tar

  pushd coreutils* > /dev/null
  # Restrict the cpuset (this shell and every child it spawns)
  # to the CPUs passed as arguments.
  echo $* > $cgroup_mnt/$cgrp/cpuset.cpus
  sync
  echo 3 > /proc/sys/vm/drop_caches   # start with cold page/dentry/inode caches
  time sh ./configure > /dev/null
  popd > /dev/null

  rm -rf coreutils*
}

benchmark_setup
benchmark 0                               # confined to CPU 0
benchmark $(cat $cgroup_mnt/cpuset.cpus)  # allowed on all CPUs
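
A quick sanity check that the confinement really applies to the shell (and therefore to everything configure spawns) is to look at its affinity mask before each run; just a sketch, assuming cpusets are mounted as above:

# CPUs the scheduler is allowed to use for this shell:
grep Cpus_allowed_list /proc/self/status
# cpuset the shell currently belongs to (should be /1cpu here):
cat /proc/self/cpuset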

Results:

with affinity to 1 CPU:
real    0m40.229s
user    0m15.222s
sys     0m9.409s

with affinity to all CPUs:
real    1m20.832s
user    0m31.089s
sys     0m37.582s

Is there something that can be done?
I mainly want to start a discussion on this matter; perhaps I'll play with the scheduler myself if I get a few hints.
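
For reference, two simpler experiments along these lines (just sketches; the sched_domain files only exist with CONFIG_SCHED_DEBUG, and I may be misremembering the exact paths):

# Reproduce the single-CPU case without cgroups: the affinity set by
# taskset is inherited by every child that configure spawns.
time taskset -c 0 sh ./configure > /dev/null

# Inspect the per-domain balancing flags; SD_BALANCE_FORK and
# SD_BALANCE_EXEC are the bits that let fork()/exec() placement pick
# the least loaded CPU instead of staying on the parent's.
grep . /proc/sys/kernel/sched_domain/cpu0/domain*/flags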

Regards,

-- 
cJ
3.2.0-rc2-Bidule-00400-g866d43c #1 SMP PREEMPT Tue Nov 22 13:51:00 EST 2011 x86_64
