Date:	Mon, 24 Aug 2015 14:00:54 -0700
From:	Paul Turner <pjt@...gle.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Austin S Hemmelgarn <ahferroin7@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>, lizefan@...wei.com,
	cgroups <cgroups@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	kernel-team <kernel-team@...com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 1:25 PM, Tejun Heo <tj@...nel.org> wrote:
> Hello, Austin.
>
> On Mon, Aug 24, 2015 at 04:00:49PM -0400, Austin S Hemmelgarn wrote:
>> >That alone doesn't require hierarchical resource distribution tho.
>> >Setting nice levels reasonably is likely to alleviate most of the
>> >problem.
>>
>> In the cases I've dealt with this myself, nice levels didn't cut it, and I
>> had to resort to SCHED_RR with particular care to avoid priority inversions.
>
> I wonder why.  The difference between -20 and 20 is around 2500x in
> terms of weight.  That should have been enough for expressing whatever
> precedence the vcpus should have over other threads.

This strongly perturbs the load-balancer, which performs busiest-cpu
selection by weight.

Note also that we do not necessarily want total dominance by vCPU
threads; the hypervisor threads are almost always doing work on their
behalf and we want to provision them with _some_ time.  A
sub-hierarchy allows this to be done in a way that is independent of
how many vCPUs or support threads are present.

>
>> >I don't know.  "Someone running one or two VM's on a laptop under
>> >QEMU" doesn't really sound like the use case which absolutely requires
>> >hierarchical cpu cycle distribution.
>>
>> It depends on the use case.  I never have more than 2 VM's running on my
>> laptop (always under QEMU, setting up Xen is kind of pointless on a quad core
>> system with only 8G of RAM), and I take extensive advantage of the cpu
>> cgroup to partition resources among various services on the host.
>
> Hmmm... I'm trying to understand the use cases where having hierarchy
> inside a process is actually required so that we don't end up doing
> something complex unnecessarily.  So far, it looks like an easy
> alternative for qemu would be teaching it to manage the priorities of
> its threads, given that the threads are mostly static - vcpus going up
> and down are explicit operations which can trigger priority adjustments
> if necessary, which is unlikely to begin with.

What you're proposing is both unnecessarily complex and imprecise.
Arbitrating competition between groups of threads is exactly why we
support sub-hierarchies within cpu.

>
> Thanks.
>
> --
> tejun