Message-ID: <635f6b30-06f2-4ab0-b91c-5d4da730a8cc@redhat.com>
Date: Wed, 4 Sep 2024 10:09:24 -0400
From: Waiman Long <longman@...hat.com>
To: Phil Auld <pauld@...hat.com>, Peter Zijlstra <peterz@...radead.org>
Cc: Frederic Weisbecker <frederic@...nel.org>, Ingo Molnar
 <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Dietmar Eggemann <dietmar.eggemann@....com>,
 Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
 Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] sched/isolation: Add HK_FLAG_SCHED to nohz_full

On 9/4/24 09:44, Phil Auld wrote:
> Throwing my 2 cents in ...
>
> On Wed, Sep 04, 2024 at 03:04:45PM +0200 Peter Zijlstra wrote:
>> On Wed, Sep 04, 2024 at 02:44:26PM +0200, Frederic Weisbecker wrote:
>>> Le Tue, Sep 03, 2024 at 09:23:53PM -0400, Waiman Long a écrit :
>>>>> After discussing with Peter lately, the rules should be:
>>>>>
>>>>> 1) If a nohz_full CPU is part of a multi-CPU domain, then it should
>>>>>      be part of load balancing. Peter even says that nohz_full should be
>>>>>      forbidden in this case, because the tick plays a role in the
>>>>>      load balancing.
>>>> My understanding is that most users will use nohz_full together with
>>>> isolcpus. So nohz_full CPUs are also isolated and not in a sched domain.
>>>> There may still be users setting nohz_full without isolcpus, though that
>>>> should be relatively rare.
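
For reference, that combined setup is typically requested on the kernel
command line along these lines (the CPU list here is purely illustrative):

    isolcpus=domain,2-5 nohz_full=2-5

The "domain" flag takes the listed CPUs out of the scheduler domains, and
nohz_full additionally stops the tick on them whenever possible.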
>>> Apparently there are users wanting to use isolation along with automatic
>>> container deployments such as Kubernetes, which doesn't seem to work
>>> well with isolcpus...
> Definitely this ^^^
>
>> I've been proposing to get rid of isolcpus for at least the last 15
>> years or so. There just isn't a good reason to ever use it. We were
>> close and then the whole NOHZ_FULL thing came along.
>>
>> You can create single CPU partitions using cpusets dynamically.
>>
> This is somewhat new though...  Although you could turn off load balancing
> with cgroups v1, in v2 you could not.
That is no longer true for newer kernels. Turning off load balancing 
wasn't possible with the initial version of cpuset v2, but that 
capability was later added with the isolated partition feature in 
cpuset v2.
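
For example, on a kernel with the isolated partition feature and cgroup v2
mounted at /sys/fs/cgroup, something along these lines works (the cgroup
name "isol" and the CPU list are just examples):

    # Enable the cpuset controller for child cgroups.
    echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
    mkdir /sys/fs/cgroup/isol
    echo 2-5 > /sys/fs/cgroup/isol/cpuset.cpus
    # Turn the cgroup into an isolated partition: its CPUs drop out of
    # the scheduler domains and are no longer load balanced.
    echo isolated > /sys/fs/cgroup/isol/cpuset.cpus.partition
    # Verify; this should report "isolated".
    cat /sys/fs/cgroup/isol/cpuset.cpus.partition

Writing "member" back to cpuset.cpus.partition returns the CPUs to normal
load balancing.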
>
>
>>>> Anyway, all these nohz_full/kernel_noise settings will only apply to CPUs
>>>> in isolated cpuset partitions, which will not be in a sched domain.
>>>>
>>>>> 2) Otherwise, if a CPU is not part of a domain, or it is the only CPU of
>>>>>      all its domains, then it can be out of the load balancing machinery.
>>>> I am aware that a single-cpu domain is the same as being isolated with no
>>>> load balancing.
>>> By the way, is it possible to have a single-CPU domain (sorry, I'm a noob
>>> here), or does such a CPU always end up on a null domain?
>> IIRC they always end up with the null domain; but it's been a while. It
>> simply doesn't make much sense to have a 1-CPU domain. The way the
>> topology code works is by always building the full domain tree and then
>> throwing away all levels that do not contribute, and in the 1-CPU case
>> that would be all of them.
>>
>> Look for 'degenerate' in kernel/sched/topology.c.
>>
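
With CONFIG_SCHED_DEBUG this is easy to observe from userspace: debugfs
exposes one directory per remaining domain level, and a fully isolated CPU
simply has none left (paths are for a typical debugfs mount and can vary by
kernel version; the CPU numbers are illustrative):

    # A CPU inside a domain hierarchy lists its domain levels...
    ls /sys/kernel/debug/sched/domains/cpu0
    # ...while an isolated CPU's directory is empty (null domain).
    ls /sys/kernel/debug/sched/domains/cpu3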
>>>>> I'm a bit scared about rule 1) because I know there are existing users of
>>>>> nohz_full on multi-CPU domains... So I feel a bit trapped.
>>>> As stated before, this is not a common use case.
>>> Not sure, and anyway it's not a forbidden use case. But that is outside
>>> the scope of this patchset.
>> Most crucially, it is a completely broken setup. It doesn't actually
>> work well.
>>
>> Taking it away will force people to fix their broken setups. That's a good
>> thing, no?
>>
> I wonder if people using nohz_full w/o isolcpus is a bit of a historical
> artifact.  It used to be the case that you could use nohz and then turn off
> load balancing from userspace using the domain flags.  That allowed it to
> work w/o isolcpus. When that ability was removed, isolcpus often had to be
> used as a replacement, which probably accounts for some of its increased use.
>
> It takes a while for all of these changes to bubble up and get used by the
> layers above. It's generally not as simple as downloading the new kernel,
> building it, and adjusting your configuration settings and scripts :)

Yes, it will take a while for users to adopt the new features and 
abandon the old ways.


>
>
>>>> The isolcpus boot option is deprecated, as stated in kernel-parameters.txt.
>>> We should undeprecate it; apparently it's still widely used, perhaps by
>>> people who can't afford to use cpusets/cgroups.
>> What is the actual problem with using cpusets? At the very least the
>> whole nohz_full thing needs to be moved into cpusets so it isn't a fixed
>> boot time thing anymore.
> Until fairly recently you could not do the isolation with cgroups v2.
>
> I agree with the direction Waiman et al. are heading. Getting rid of the
> kernel command-line settings for this is a good thing.
>
>>>> My plan is to deprecate nohz_full as well once we are able to make dynamic
>>>> CPU isolation via cpuset work almost as well as isolcpus + nohz_full.
>>> You can't really deprecate such a kernel boot option unfortunately. Believe me
>>> I wish we could.
>> Why not? As I said, the only thing that's kept it around, and worse,
>> made it more popular again, is this nohz_full nonsense. That never
>> should've used isolcpus, but that's not something we can do anything
>> about now.
> Deprecate-and-not-remove should be fine. Then in 15 years we can revisit it
> like isolcpus ;)

Yes, it will take a while :-)

Cheers,
Longman

