Message-ID: <7bd2c5c9-edb4-c071-0d24-28c6744f826b@redhat.com>
Date:   Tue, 9 Aug 2022 16:15:24 -0400
From:   Waiman Long <longman@...hat.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        Zefan Li <lizefan.x@...edance.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Will Deacon <will@...nel.org>, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v2 2/2] cgroup/cpuset: Keep user set cpus affinity

On 8/9/22 15:55, Tejun Heo wrote:
> (cc'ing Linus)
>
> Hello,
>
> On Mon, Aug 01, 2022 at 11:41:24AM -0400, Waiman Long wrote:
>> It was found that any change to the current cpuset hierarchy may reset
>> the cpumask of the tasks in the affected cpusets to the default cpuset
>> value even if those tasks have cpus affinity explicitly set by the users
>> before. That is especially easy to trigger under a cgroup v2 environment
>> where writing "+cpuset" to the root cgroup's cgroup.subtree_control
>> file will reset the cpus affinity of all the processes in the system.
>>
>> That is problematic in a nohz_full environment where the tasks running
>> in the nohz_full CPUs usually have their cpus affinity explicitly set
>> and will behave incorrectly if cpus affinity changes.
>>
>> Fix this problem by looking at user_cpus_ptr, which will be set if
>> cpus affinity has been explicitly set before, and use it to restrict
>> the given cpumask unless there is no overlap. In that case, it will
>> fall back to the given one.
>>
>> With that change in place, it was verified that tasks that have their
>> cpus affinity explicitly set will not be affected by changes made to
>> the v2 cgroup.subtree_control files.
> The fact that the kernel clobbers user-specified cpus_allowed as cpu
> availability changes always bothered me and it has been causing this sort of
> problem w/ cpu hotplug and cpuset. We've been patching this up partially
> here and there but I think it would be better if we just make the rules
> really simple - ie. allow users to configure whatever cpus_allowed as long
> as that's within cpu_possible_mask and override only the effective
> cpus_allowed if the mask leaves no runnable CPUs, so that we can restore the
> original configured behavior if and when some of the cpus become available
> again.
>
> One obvious problem with changing the behavior is that it may affect /
> confuse users expecting the current behavior however inconsistent it may be,
> but given that we have partially changed how cpus_allowed interacts with
> hotplug in the past and the current behavior can be inconsistent and
> surprising, I don't think this is a bridge we can't cross. What do others
> think?

My patch will still subject the cpus_allowed list to the constraint 
imposed by the current cpuset, while keeping as much of what the user 
specified as possible. If we are worried about backward compatibility, 
maybe we can restrict that change in behavior to cgroup v2 only, or we 
can add a sysctl parameter to restore the old behavior if the user 
chooses to.

Users are now gradually migrating over to cgroup v2, and they do 
understand that it comes with some changes in behavior.

Cheers,
Longman
