Date:   Mon, 4 Jun 2018 08:56:18 -0700
From:   Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To:     Rik van Riel <riel@...riel.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 04/19] sched/numa: Set preferred_node based on best_cpu

* Rik van Riel <riel@...riel.com> [2018-06-04 10:37:30]:

> On Mon, 2018-06-04 at 05:59 -0700, Srikar Dronamraju wrote:
> > * Peter Zijlstra <peterz@...radead.org> [2018-06-04 14:23:36]:
> > 
> > > > -		if (ng->active_nodes > 1 &&
> > > > numa_is_active_node(env.dst_nid, ng))
> > > > -			sched_setnuma(p, env.dst_nid);
> > > > +		if (nid != p->numa_preferred_nid)
> > > > +			sched_setnuma(p, nid);
> > > >  	}
> > > 
> > 
> > I think checking for active_nodes before calling sched_setnuma was a
> > mistake.
> > 
> > Before this change, we may have been retaining numa_preferred_nid as
> > the source node while selecting another node with better numa affinity
> > to run on.
> 
> Sometimes workloads are so large they get spread
> around to multiple NUMA nodes.
> 
> In that case, you do NOT want all the tasks of
> that workload (numa group) to try squeezing onto
> the same node, only to have the load balancer
> randomly move tasks off of that node again later.
> 
> How do you keep that from happening?

In fact, that is exactly what we are doing now in all cases. We are not
changing anything in the ng->active_nodes > 1 case (which is the case of
a workload spread across multiple nodes).

Earlier we would not set numa_preferred_nid if there was only one active
node. However, it is not certain that the src node is that active node.
In fact, it is most likely not the src node, because we would not have
reached this point if the task had already been running on the active
node. Keeping numa_preferred_nid as the source node increases the
chances of the regular load balancer randomly moving tasks off that
node. Now we make sure task_node(p) and numa_preferred_nid agree, which
reduces the risk of the task being moved to a random node.
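
To spell out the single-active-node case I have in mind, here is a rough
sketch built around the hunk above (the comments, and the assumption
that nid is the node of the CPU picked by task_numa_migrate(), are mine
and paraphrased, not quoted from the patch):

	/*
	 * One active node; the task runs on env.src_nid and a better
	 * CPU was found on env.dst_nid, so nid ends up as env.dst_nid.
	 *
	 * Old behaviour: ng->active_nodes == 1, so sched_setnuma() was
	 * skipped and p->numa_preferred_nid stayed at the src node even
	 * though the task is about to move to dst_nid.
	 *
	 * New behaviour: the preferred nid follows the node we picked,
	 * so task_node(p) and p->numa_preferred_nid end up in agreement.
	 */
	if (nid != p->numa_preferred_nid)
		sched_setnuma(p, nid);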

Hope this is clear.
