Message-ID: <55888C27.7010100@redhat.com>
Date:	Mon, 22 Jun 2015 18:28:55 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
CC:	linux-kernel@...r.kernel.org, peterz@...radead.org,
	mingo@...nel.org, mgorman@...e.de
Subject: Re: [PATCH] sched,numa: document and fix numa_preferred_nid setting

On 06/22/2015 12:13 PM, Srikar Dronamraju wrote:
>> +	 * migrating the task to where it really belongs.
>> +	 * The exception is a task that belongs to a large numa_group, which
>> +	 * spans multiple NUMA nodes. If that task migrates into one of the
>> +	 * workload's active nodes, remember that node as the task's
>> +	 * numa_preferred_nid, so the workload can settle down.
>>  	 */
>>  	if (p->numa_group) {
>>  		if (env.best_cpu == -1)
>> @@ -1513,7 +1520,7 @@ static int task_numa_migrate(struct task_struct *p)
>>  			nid = env.dst_nid;
>>  
>>  		if (node_isset(nid, p->numa_group->active_nodes))
>> -			sched_setnuma(p, env.dst_nid);
>> +			sched_setnuma(p, nid);
>>  	}
>>  
>>  	/* No better CPU than the current one was found. */
>>
> 
> When I refer to the "Modified Rik's patch", I mean removing the
> node_isset() check before the sched_setnuma() call. With that change,
> we somewhat reduce the numa02 and 1-JVM-per-system regression, while
> getting numbers as good as Rik's patch in the 2-JVM and 4-JVM-per-system
> cases.
> 
> The idea behind removing the node_isset() check is:
> active_nodes is mostly used to track memory movement to the nodes
> where CPUs are running, and not vice versa, as per the comment in
> update_numa_active_node_mask(). There could be a situation where the
> task's memory is all on one node, and that node has the capacity to
> accommodate the task, but no tasks in the task's numa_group have run
> enough on that node for it to be marked active. In such a case, we
> shouldn't be ruling out migrating the task to that node.
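
For reference, the two variants being compared look roughly like this
(a reconstruction from the quoted hunk, with the surrounding
task_numa_migrate() context abbreviated; the src_nid fallback branch is
assumed from the quoted "if (env.best_cpu == -1)" test):

	/* Variant 1 (Rik's patch): adopt the chosen node as
	 * numa_preferred_nid only if it is in the numa_group's set of
	 * active nodes.
	 */
	if (p->numa_group) {
		if (env.best_cpu == -1)
			nid = env.src_nid;
		else
			nid = env.dst_nid;

		if (node_isset(nid, p->numa_group->active_nodes))
			sched_setnuma(p, nid);
	}

	/* Variant 2 ("Modified Rik's patch"): drop the node_isset() check
	 * and adopt the chosen node unconditionally.
	 */
	if (p->numa_group) {
		if (env.best_cpu == -1)
			nid = env.src_nid;
		else
			nid = env.dst_nid;

		sched_setnuma(p, nid);
	}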

That is a good point.

However, if overriding the preferred_nid that task_numa_placement
identified is a good idea in task_numa_migrate, would it also be
a good idea for tasks that are NOT part of a numa group?

What are the consequences of never setting preferred_nid from
task_numa_migrate?   (we would try to migrate the task to a
better node more frequently)

What are the consequences of always setting preferred_nid from
task_numa_migrate?   (we would only try migrating the task once,
and it could get stuck in a sub-optimal location)
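
Spelled out as code, the policy space behind these questions looks
something like the sketch below (hypothetical illustrations of the
trade-offs, not proposals from this thread):

	if (p->numa_group) {
		nid = (env.best_cpu == -1) ? env.src_nid : env.dst_nid;

		/* (a) Never call sched_setnuma() here:
		 *     task_numa_placement() stays the only writer of
		 *     numa_preferred_nid, and migration to a better node
		 *     keeps being retried.
		 */

		/* (b) Always call it:
		 *         sched_setnuma(p, nid);
		 *     migration is attempted once, and the task can get
		 *     stuck in a sub-optimal location.
		 */

		/* (c) Gate it on the active set (Rik's patch):
		 *         if (node_isset(nid, p->numa_group->active_nodes))
		 *                 sched_setnuma(p, nid);
		 *     the task settles down only once it lands inside the
		 *     workload's active nodes.
		 */
	}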

The patch seems to work, but I do not understand why, and I would
like to hear your reasoning on why you think it works.

I am really not looking forward to the idea of maintaining code
that nobody understands...

-- 
All rights reversed