Message-ID: <446ab203-85cd-32ff-40a9-0ba22d5a2534@redhat.com>
Date:   Mon, 13 Aug 2018 13:56:15 -0400
From:   Waiman Long <longman@...hat.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Li Zefan <lizefan@...wei.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Ingo Molnar <mingo@...hat.com>, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
        kernel-team@...com, pjt@...gle.com, luto@...capital.net,
        Mike Galbraith <efault@....de>, torvalds@...ux-foundation.org,
        Roman Gushchin <guro@...com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Patrick Bellasi <patrick.bellasi@....com>
Subject: Re: [PATCH v11 7/9] cpuset: Expose cpus.effective and mems.effective
 on cgroup v2 root

On 07/20/2018 01:41 PM, Tejun Heo wrote:
> Hello,
>
> On Fri, Jul 20, 2018 at 01:09:23PM -0400, Waiman Long wrote:
>> On 07/20/2018 12:37 PM, Peter Zijlstra wrote:
>>> On Fri, Jul 20, 2018 at 12:19:29PM -0400, Waiman Long wrote:
>>>> I am not against the idea of making it hierarchical eventually. I am
>>>> just hoping to get things going by merging the patchset in its current
>>>> form and then we can make it hierarchical in a followup patch.
>>> Where's the rush? Why can't we do this right in one go?
>> For me, the rush comes from RHEL8 as it is a goal to have a fully
>> functioning cgroup v2 in that release.
>>
>> I also believe that most of the use cases for partitions can be satisfied
>> with partitions at the first-level children. Getting hierarchical
>> partitions right may drag on for half a year, given our history
>> with the cpu v2 controller. No matter what we do to enable hierarchical
>> partitions in the future, the current model of using a partition flag is
>> intuitive enough that it won't need to change, at least for the first-level
>> children.
> I'm fully with Waiman here.  There are people wanting to use it and
> the part most people want isn't controversial at all.  I don't see what'd
> be gained by further delaying the whole thing.  If the first-level
> partition thing isn't acceptable to everyone, we can strip it down even
> further.  We can get .cpus and .mems merged first, which is what most
> people want anyway.

BTW, I am trying to support hierarchical partitions. The first thing I
want to allow is removing CPUs from a partition root freely. It turns
out that the following existing code in validate_change() already
prevents such a removal when it touches any CPUs that are in use by
child cpusets:

        /* Each of our child cpusets must be a subset of us */
        ret = -EBUSY;
        cpuset_for_each_child(c, css, cur)
                if (!is_cpuset_subset(c, trial))
                        goto out;

So this is not a new restriction after all.
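
For reference, the subset test that trips here is is_cpuset_subset() in
kernel/cgroup/cpuset.c, which (modulo kernel version) looks roughly like:

        static int is_cpuset_subset(const struct cpuset *p, const struct cpuset *q)
        {
                /* p's CPUs and memory nodes must be contained in q's */
                return  cpumask_subset(p->cpus_allowed, q->cpus_allowed) &&
                        nodes_subset(p->mems_allowed, q->mems_allowed) &&
                        is_cpu_exclusive(p) <= is_cpu_exclusive(q) &&
                        is_mem_exclusive(p) <= is_mem_exclusive(q);
        }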

The following restrictions are still imposed on a partition root wrt
allowable changes in cpuset.cpus (see the sketch below):
1) cpuset.cpus cannot be set to "". There must be at least one CPU in it.
2) It is not allowed to add CPUs that are not in the parent's cpuset.cpus
(and cpuset.cpus.effective), or to make a change that would take all of
the parent's effective CPUs away.
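
In rough pseudo-C, those two checks amount to something like the
following (an illustrative sketch only, not the actual patch code;
is_partition_root() is a stand-in name, while cpus_allowed and
effective_cpus are the existing struct cpuset members):

        if (is_partition_root(trial)) {
                /* 1) a partition root must keep at least one CPU */
                if (cpumask_empty(trial->cpus_allowed))
                        return -EINVAL;

                /* 2a) added CPUs must come from the parent's cpuset.cpus */
                if (!cpumask_subset(trial->cpus_allowed, parent->cpus_allowed))
                        return -EINVAL;

                /* 2b) the change must not consume all of the parent's effective CPUs */
                if (cpumask_subset(parent->effective_cpus, trial->cpus_allowed))
                        return -EINVAL;
        }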

So are these limitations acceptable?

The easiest way to remove those restrictions is to forcibly turn off the
cpuset.sched.partition flag in the cpuset, as well as in any
sub-partitions, when the user tries to make such a change (roughly as
sketched below). With that change, there would be no new restrictions on
what you can do with cpuset.cpus.
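
A rough sketch of that fallback (illustrative only;
violates_partition_constraints() and clear_partition_flag() are
hypothetical stand-ins for whatever helpers the real patch would use,
while cpuset_for_each_descendant_pre() is the existing iterator):

        if (violates_partition_constraints(trial)) {
                struct cpuset *cp;
                struct cgroup_subsys_state *pos_css;

                /* drop partition status on this cpuset and all sub-partitions */
                rcu_read_lock();
                cpuset_for_each_descendant_pre(cp, pos_css, cur)
                        clear_partition_flag(cp);
                rcu_read_unlock();
        }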

What is your opinion on the best way forward wrt supporting hierarchical
partitioning?

Thanks,
Longman
