Message-ID: <20170406073436.GD5497@dhcp22.suse.cz>
Date:   Thu, 6 Apr 2017 09:34:36 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH] sched: Fix numabalancing to work with isolated cpus

On Thu 06-04-17 12:49:50, Srikar Dronamraju wrote:
> > > > > The isolated cpus are part of the cpus allowed list. In the above case,
> > > > > numabalancing ends up scheduling some of these tasks on isolated cpus.
> > > > 
> > > > Why is this bad? If the task is allowed to run on isolated CPUs then why
> > > 
> > > 1. kernel-parameters.txt states: isolcpus as "Isolate CPUs from the
> > > general scheduler." So the expectation that numabalancing can schedule
> > > tasks on it is wrong.
> > 
> > Right but if the task is allowed to run on isolated cpus then the numa
> > balancing for this task should be allowed to run on those cpus, no?
> 
> No numabalancing or any other scheduler balancing should be looking at
> tasks that are bound to isolated cpus.

Is this documented anywhere? My understanding of isolcpus is to make
sure that nothing _outside_ of the dedicated workload interferes. But
why the dedicated workload shouldn't be numa balanced is not clear to
me at all.

> Similar to the example I gave in my reply to Mel.
> 
> Let's consider a 2-node, 24-core system with 12 cores in each node:
> cores 0-11 in one node and cores 12-23 in the other. Let's also
> disable smt/hyperthreading and enable isolcpus for cores 6-11,12-17.
> Now run a 48-thread ebizzy workload and give it a cpu list of, say,
> 11,12-17 using taskset.
> 
> Now all 48 ebizzy threads will only run on core 11. They will never
> spread to other cores, whether in the same node (including the
> isolated cpus in the same node) or in the other node. I.e. whether or
> not numabalancing is running, and whether or not my fix is applied,
> all threads stay confined to core 11, even though cpus_allowed is
> 11,12-17.

Isn't that a bug in isolcpus implementation? It is certainly an
unexpected behavior I would say. Is this documented anywhere?

> > Say your application would be bound _only_ to isolated cpus. Should that
> > imply no numa balancing at all?
> 
> Yes, it implies no numa balancing.
> 
> > 
> > > 2. If numabalancing was disabled, the task would never run on the
> > > isolated CPUs.
> > 
> > I am confused. I thought you said "However a task might call
> > sched_setaffinity() that includes all possible cpus in the system
> > including the isolated cpus." So the task is allowed to run there.
> > Or am I missing something?
> > 
> 
> Peter, Rik, Ingo can correct me here.
> 
> I feel most programs that call sched_setaffinity, including perf bench,
> are written with the assumption that they are never run with isolcpus.

Isn't sched_setaffinity the only way to actually make it possible to
run on isolcpus?

> > Please note that I do not claim the patch is wrong. I am still not sure
> > myself but the changelog is missing the most important information: "why
> > the change is the right thing".
> 
> I am open to editing the changelog. I assumed that the isolcpus kernel
> parameter made it clear that no scheduling algorithms can interfere
> with isolcpus. Would stating this in the changelog convince you that
> the change is right?

I would really like to see it confirmed by the scheduler maintainers and
documented properly as well. What you are claiming here is rather
surprising to my understanding of what isolcpus actually is.
-- 
Michal Hocko
SUSE Labs
