Date:	Fri, 06 Jun 2014 14:23:41 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	linux-kernel@...r.kernel.org, mgorman@...e.de, mingo@...nel.org
Subject: Re: [PATCH] sched,numa: always try to migrate to preferred node at
 task_numa_placement time

On 06/06/2014 01:18 PM, Peter Zijlstra wrote:
> On Wed, Jun 04, 2014 at 04:33:15PM -0400, Rik van Riel wrote:
>> It is possible that at task_numa_placement time, the task's
>> numa_preferred_nid does not change, but the task is not
>> actually running on the preferred node at the time.
>>
>> In that case, we still want to attempt migration to the
>> preferred node.
> 
> So we have that numa_migrate_retry which was supposed to keep kicking
> the task until it got where it needed to go.

It does, and it appears to work.
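
For reference, the retry Peter mentions is the periodic re-invocation of
numa_migrate_preferred() from the fault handling path. A rough sketch of
that logic in kernel/sched/fair.c (paraphrased, not verbatim, so details
may differ) looks like this:

/* Paraphrased sketch of the existing retry path, not the exact source. */
static void numa_migrate_preferred(struct task_struct *p)
{
	/* Nothing to do until the task has a preferred node and fault stats. */
	if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults_memory))
		return;

	/* Arm the periodic retry, in case the migration attempt below fails. */
	p->numa_migrate_retry = jiffies + HZ;

	/* Already running on the preferred node, nothing more to do. */
	if (task_node(p) == p->numa_preferred_nid)
		return;

	/* Otherwise, try to move to a CPU on the preferred node. */
	task_numa_migrate(p);
}

/* ...and task_numa_fault() keeps kicking the task until it gets there: */
	if (time_after(jiffies, p->numa_migrate_retry))
		numa_migrate_preferred(p);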

> But now you continuously kick from task_numa_placement().

No, we only kick from task_numa_placement() if the task is not
already running on its preferred nid.

> Clearly the retry thing didn't work, what happened? We got to the
> preferred nid, disabled the retry and got moved away again?
> 
> Do we want to remove the retry logic in favour of this more aggressive
> form?

I think we want both. When we have fresh statistics and discover
that we are not running on our preferred nid, is there any reason
not to relocate to a better node?

Moving a task to another node is cheap, and moving it sooner means
we can avoid migrating its memory around twice (once toward wherever
it happens to be running now, and then again to the preferred node).

>> @@ -1575,11 +1575,13 @@ static void task_numa_placement(struct task_struct *p)
> 
>> +	if (max_faults) {
>> +		/* Set the new preferred node */
>> +		if (max_nid != p->numa_preferred_nid)
>> +			sched_setnuma(p, max_nid);
>> +
>> +		if (task_node(p) != p->numa_preferred_nid)
>> +			numa_migrate_preferred(p);
>>  	}
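
For context, the pre-patch code gated both steps on the preferred nid
actually changing, so a task whose preferred nid stayed the same but which
was running elsewhere never got a migration attempt from this path. Roughly
(paraphrased, not verbatim):

	if (max_faults && max_nid != p->numa_preferred_nid) {
		/* Preferred nid changed: update it and try to migrate. */
		sched_setnuma(p, max_nid);
		numa_migrate_preferred(p);
	}

The hunk above splits the two checks, so the migration attempt now depends
only on where the task is currently running, while sched_setnuma() is still
called only when the preferred nid changes.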


-- 
All rights reversed