Message-ID: <20130801154228.GE2296@suse.de>
Date:	Thu, 1 Aug 2013 16:42:28 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ingo Molnar <mingo@...nel.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 16/18] sched: Avoid overloading CPUs on a preferred NUMA
 node

On Thu, Aug 01, 2013 at 12:40:13PM +0530, Srikar Dronamraju wrote:
> > +static int task_numa_find_cpu(struct task_struct *p, int nid)
> > +{
> > +	int node_cpu = cpumask_first(cpumask_of_node(nid));
> > +	int cpu, src_cpu = task_cpu(p), dst_cpu = src_cpu;
> > +	unsigned long src_load, dst_load;
> > +	unsigned long min_load = ULONG_MAX;
> > +	struct task_group *tg = task_group(p);
> > +	s64 src_eff_load, dst_eff_load;
> > +	struct sched_domain *sd;
> > +	unsigned long weight;
> > +	bool balanced;
> > +	int imbalance_pct, idx = -1;
> > 
> > +	/* No harm being optimistic */
> > +	if (idle_cpu(node_cpu))
> > +		return node_cpu;
> 
> Can't this lead to a lot of imbalance across nodes? Won't it lead to a lot
> of ping-ponging of tasks between different nodes, resulting in a
> performance hit?

Ideally it wouldn't, because if we are trying to migrate the task here in
the first place then it must have been scheduled there for long enough to
accumulate those faults. There might, however, be a ping-pong effect where
a task gets moved off a node by the load balancer because the CPUs are
overloaded, and now we are trying to move it back. If we can detect that
this is happening then one way of dealing with it would be to clear
p->numa_faults[] when a task is moved off a node due to compute overload.
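
As a rough sketch of that idea (the helper name is made up, the layout of
numa_faults[] is assumed to be one counter per node, and locking around the
fault statistics is ignored), it might look something like:

	/*
	 * Hypothetical sketch: forget the accumulated NUMA fault statistics
	 * when the load balancer moves a task off its preferred node due to
	 * compute overload, so the task does not immediately try to migrate
	 * straight back to the overloaded node.
	 */
	static void task_numa_clear_faults(struct task_struct *p)
	{
		int nid;

		if (!p->numa_faults)
			return;

		for_each_online_node(nid)
			p->numa_faults[nid] = 0;
	}

Whether resetting the history outright like this or decaying it more
gradually behaves better under a real workload would need testing.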

> Let's say the system is not fully loaded, something like numa01 but with
> far fewer threads, probably nr_cpus/2 or nr_cpus/4. Then all threads will
> try to move to a single node as long as we keep seeing idle CPUs there,
> no? Won't that lead to all of the load moving to one node and the load
> balancer spreading it out again...
> 

I cannot be 100% certain. I'm not strong enough on the scheduler yet, and
the compute-overload handling is currently too weak.

-- 
Mel Gorman
SUSE Labs
