Date:   Thu, 13 Apr 2023 21:22:19 -0400
From:   Waiman Long <longman@...hat.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Zefan Li <lizefan.x@...edance.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Jonathan Corbet <corbet@....net>,
        Shuah Khan <shuah@...nel.org>, linux-kernel@...r.kernel.org,
        cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kselftest@...r.kernel.org,
        Juri Lelli <juri.lelli@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        Frederic Weisbecker <frederic@...nel.org>
Subject: Re: [RFC PATCH 0/5] cgroup/cpuset: A new "isolcpus" partition


On 4/12/23 21:55, Waiman Long wrote:
> On 4/12/23 21:17, Tejun Heo wrote:
>> Hello, Waiman.
>>
>> On Wed, Apr 12, 2023 at 08:55:55PM -0400, Waiman Long wrote:
>>>> Sounds a bit contrived. Does it need to be something defined in the 
>>>> root
>>>> cgroup?
>>> Yes, because we need to take away the isolated CPUs from the 
>>> effective cpus
>>> of the root cgroup. So it needs to start from the root. That is also 
>>> why we
>>> have the partition rule that the parent of a partition has to be a 
>>> partition
>>> root itself. With the new scheme, we don't need a special cgroup to 
>>> hold the
>> I'm following. The root is already a partition root and the cgroupfs 
>> control
>> knobs are owned by the parent, so the root cgroup would own the first 
>> level
>> cgroups' cpuset.cpus.reserve knobs. If the root cgroup wants to 
>> assign some
>> CPUs exclusively to a first level cgroup, it can then set that cgroup's
>> reserve knob accordingly (or maybe the better name is
>> cpuset.cpus.exclusive), which will take those CPUs out of the root 
>> cgroup's
>> partition and give them to the first level cgroup. The first level 
>> cgroup
>> then is free to do whatever with those CPUs that now belong 
>> exclusively to
>> the cgroup subtree.
>
> I am OK with the cpuset.cpus.reserve name, but not that much with the 
> cpuset.cpus.exclusive name as it can get confused with cgroup v1's 
> cpuset.cpu_exclusive. Of course, I prefer the cpuset.cpus.isolated 
> name a bit more. Once an isolated CPU gets used in an isolated 
> partition, it is exclusive and it can't be used in another isolated 
> partition.
>
> Since we will allow users to set cpuset.cpus.reserve to whatever value 
> they want, the distribution of isolated CPUs is only valid if those 
> CPUs are present in the parent's cpuset.cpus.reserve all the way up to 
> the root. Checking that is a bit expensive, but it should be a 
> relatively rare operation.

I now have a slightly different idea of how to do that. We already have 
an internal cpumask for partitioning - subparts_cpus. I am thinking 
about exposing it as cpuset.cpus.reserve. The current way of creating 
subpartitions will be called automatic reservation and will require a 
direct parent/child partition relationship. But as soon as a user 
writes anything to it, automatic reservation will be broken and manual 
reservation will be required going forward.

In that way, we can keep the old behavior, but also support new use 
cases. I am going to work on that.
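For illustration, the manual reservation flow being discussed might look 
like the following on cgroupfs. This is only a sketch of the proposal in 
this thread: cpuset.cpus.reserve is a suggested knob, not an existing 
interface, and the example assumes cgroup v2 mounted at /sys/fs/cgroup 
with the cpuset controller enabled.

```shell
# Hypothetical manual-reservation flow (knob names are from this RFC
# discussion and do not exist in mainline yet).

cd /sys/fs/cgroup

# Create a first-level cgroup that will own some exclusive CPUs.
mkdir rt

# Manually reserve CPUs 2-3 in the root. Per the discussion, any write
# to this file switches from automatic to manual reservation, and the
# reserved CPUs are taken out of the root partition's effective CPUs.
echo "2-3" > cpuset.cpus.reserve

# Hand the reserved CPUs to the child and make it an isolated partition.
# This is only valid because CPUs 2-3 appear in the parent's reserve
# mask all the way up to the root.
echo "2-3" > rt/cpuset.cpus
echo isolated > rt/cpuset.cpus.partition
```

Under automatic reservation (today's behavior), the reserve mask would 
instead be managed by the kernel when a direct child partition is 
created, so existing setups would keep working unchanged.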

Cheers,
Longman
