Date:	Tue, 16 Nov 2010 20:42:32 +0100
From:	Markus Trippelsdorf <markus@...ppelsdorf.de>
To:	Lennart Poettering <mzxreary@...inter.de>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Dhaval Giani <dhaval.giani@...il.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Mike Galbraith <efault@....de>,
	Vivek Goyal <vgoyal@...hat.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Ingo Molnar <mingo@...e.hu>,
	LKML <linux-kernel@...r.kernel.org>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: [RFC/RFT PATCH v3] sched: automated per tty task groups

On 2010.11.16 at 19:16 +0100, Lennart Poettering wrote:
> On Tue, 16.11.10 09:11, Linus Torvalds (torvalds@...ux-foundation.org) wrote:
> 
> > 
> > On Tue, Nov 16, 2010 at 9:03 AM, Lennart Poettering
> > <mzxreary@...inter.de> wrote:
> > >
> > > Binding something like this to TTYs is just backwards.
> > 
> > Numbers talk, bullshit walks.
> > 
> > The numbers have been quoted. The clear interactive behavior has been seen.
> 
> Here's my super-complex patch btw, to achieve exactly the same thing
> from userspace without involving any kernel or systemd patching and
> kernel-side logic. Simply edit your own ~/.bashrc and add this to the end:
> 
>   if [ "$PS1" ] ; then  
>           mkdir -m 0700 /sys/fs/cgroup/cpu/user/$$
>           echo $$ > /sys/fs/cgroup/cpu/user/$$/tasks
>   fi
> 
> Then, as the superuser do this:
> 
>   mount -t cgroup cgroup /sys/fs/cgroup/cpu -o cpu
>   mkdir -m 0777 /sys/fs/cgroup/cpu/user
> 
> Done. Same effect. However: not crazy.
> 
> I am not sure I myself will find the time to prep some 'numbers' for
> you. They'd be the same as with the kernel patch anyway. But I am sure
> somebody else will do it for you...

OK, I've done some tests and the result is that Lennart's approach seems
to work best. It also _feels_ better interactively than both the vanilla
kernel and the in-kernel cgroups on my machine. It's also really nice to
have an interface to actually see what is going on; with the kernel patch
you're totally in the dark about the current grouping.
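
For instance (just a sketch; the hierarchy id and group names will
obviously differ per shell and per setup):

  cat /proc/$$/cgroup                    # e.g. "1:cpu:/user/4711"
  ls /sys/fs/cgroup/cpu/user/            # one group per interactive shell
  cat /sys/fs/cgroup/cpu/user/*/tasks    # PIDs currently in each group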

Here are some numbers, all recorded while running a make -j4 job in one
shell.

perf sched record sleep 30
perf trace -s /usr/libexec/perf-core/scripts/perl/wakeup-latency.pl :

vanilla kernel without cgroups:
total_wakeups: 44306
avg_wakeup_latency (ns): 36784
min_wakeup_latency (ns): 0
max_wakeup_latency (ns): 9378852

with in-kernel patch:
total_wakeups: 43836
avg_wakeup_latency (ns): 67607
min_wakeup_latency (ns): 0
max_wakeup_latency (ns): 8983036

with Lennart's approach:
total_wakeups: 51070
avg_wakeup_latency (ns): 29047
min_wakeup_latency (ns): 0
max_wakeup_latency (ns): 10008237
 
perf record -a -e sched:sched_switch -e sched:sched_wakeup sleep 10
perf trace -s /usr/libexec/perf-core/scripts/perl/wakeup-latency.pl :

vanilla kernel without cgroups:
total_wakeups: 13195
avg_wakeup_latency (ns): 48484
min_wakeup_latency (ns): 0
max_wakeup_latency (ns): 8722497

with in-kernel patch:
total_wakeups: 14106
avg_wakeup_latency (ns): 92532
min_wakeup_latency (ns): 20
max_wakeup_latency (ns): 5642393

with Lennart's approach:
total_wakeups: 22215
avg_wakeup_latency (ns): 24118
min_wakeup_latency (ns): 0
max_wakeup_latency (ns): 8001142
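
One more thing I'd bolt onto the ~/.bashrc approach quoted above (my own
extension, nothing Lennart posted, and the helper path below is only a
placeholder): the per-shell directories are left behind as empty groups
once the shells exit. cgroups can remove them automatically via
notify_on_release plus a release_agent at the root of the hierarchy:

  # as root, after the mount/mkdir from Lennart's mail; new per-shell
  # groups inherit notify_on_release from /user at creation time:
  echo 1 > /sys/fs/cgroup/cpu/user/notify_on_release
  echo /usr/local/sbin/cgroup-release-agent > /sys/fs/cgroup/cpu/release_agent

  # /usr/local/sbin/cgroup-release-agent (placeholder path):
  #!/bin/sh
  # the kernel calls this with the path of the now-empty group,
  # relative to the cgroup mount point (e.g. /user/4711)
  rmdir "/sys/fs/cgroup/cpu/$1"
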
-- 
Markus
