Date:	Thu, 11 Jul 2013 15:11:58 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Mel Gorman <mgorman@...e.de>
Cc:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...nel.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 08/16] sched: Reschedule task on preferred NUMA node once
 selected

On Thu, Jul 11, 2013 at 02:03:22PM +0100, Mel Gorman wrote:
> On Thu, Jul 11, 2013 at 02:30:38PM +0200, Peter Zijlstra wrote:
> > On Thu, Jul 11, 2013 at 10:46:52AM +0100, Mel Gorman wrote:
> > > @@ -829,10 +854,29 @@ static void task_numa_placement(struct task_struct *p)
> > >  		}
> > >  	}
> > >  
> > > -	/* Update the tasks preferred node if necessary */
> > > +	/*
> > > +	 * Record the preferred node as the node with the most faults,
> > > +	 * requeue the task to be running on the idlest CPU on the
> > > +	 * preferred node and reset the scanning rate to recheck
> > > +	 * the working set placement.
> > > +	 */
> > >  	if (max_faults && max_nid != p->numa_preferred_nid) {
> > > +		int preferred_cpu;
> > > +
> > > +		/*
> > > +		 * If the task is not on the preferred node then find the most
> > > +		 * idle CPU to migrate to.
> > > +		 */
> > > +		preferred_cpu = task_cpu(p);
> > > +		if (cpu_to_node(preferred_cpu) != max_nid) {
> > > +			preferred_cpu = find_idlest_cpu_node(preferred_cpu,
> > > +							     max_nid);
> > > +		}
> > > +
> > > +		/* Update the preferred nid and migrate task if possible */
> > >  		p->numa_preferred_nid = max_nid;
> > >  		p->numa_migrate_seq = 0;
> > > +		migrate_task_to(p, preferred_cpu);
> > >  	}
> > >  }
> > 
> > Now what happens if the migrations fails? We set numa_preferred_nid to max_nid
> > but then never re-try the migration. Should we not re-try the migration every
> > so often, regardless of whether max_nid changed?
> 
> We do this
> 
> load_balance
> -> active_load_balance_cpu_stop

Note that active balance is rare to begin with.

>   -> move_one_task
>     -> can_migrate_task
>       -> migrate_improves_locality
> 
> If the conditions are right then it'll move the task to the preferred node
> for a number of PTE scans. Of course there is no guarantee that the necessary
> conditions will occur but I was wary of taking more drastic steps in the
> scheduler such as retrying on every fault until the migration succeeds.
> 

Ah, so task_numa_placement() is only called every full scan, not every fault.
Also one could throttle it.
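
The throttling idea could look roughly like this — a minimal plain-C sketch with illustrative names (the kernel itself would track jiffies and compare with time_after(); nothing here is the actual kernel API):

```c
#include <stdbool.h>

/*
 * Skip placement work if it last ran less than `interval` ticks ago.
 * `now` and `*last_run` are monotonically increasing tick counts;
 * the names and the free-standing form are illustrative only.
 */
static bool numa_placement_allowed(unsigned long now,
				   unsigned long *last_run,
				   unsigned long interval)
{
	if (now - *last_run < interval)
		return false;	/* ran too recently; throttle */
	*last_run = now;	/* record this run and allow it */
	return true;
}
```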

So initially I did all the movement through the regular balancer, but Ingo
found that as the machine grows it quickly becomes unlikely that we hit the
right conditions. Hence he also went to direct migrations in his series.

Another thing we might consider is counting the number of migration attempts
and settling for the n-th best node on the n-th attempt, giving up once n
surpasses the rank of the node we're currently on.
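
A sketch of that backoff, assuming nodes[] is sorted by descending fault count and cur_rank is the position of the task's current node in that order (all names and the interface are hypothetical, not kernel code):

```c
/*
 * On the attempt-th failed migration, settle for the attempt-th best
 * node; give up (-1) once the attempt count reaches the rank of the
 * node the task already runs on, since that node is at least as good.
 */
static int pick_numa_target(const int *nodes, int nr_nodes,
			    int cur_rank, int attempt)
{
	if (attempt >= nr_nodes || attempt >= cur_rank)
		return -1;	/* no remaining candidate beats where we are */
	return nodes[attempt];
}
```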
