Message-ID: <1397237673.7113.22.camel@joe-AO722>
Date:	Fri, 11 Apr 2014 10:34:33 -0700
From:	Joe Perches <joe@...ches.com>
To:	riel@...hat.com
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	peterz@...radead.org, chegu_vinod@...com, mgorman@...e.de
Subject: Re: [PATCH 1/3] sched,numa: count pages on active node as local

On Fri, 2014-04-11 at 13:00 -0400, riel@...hat.com wrote:
> This should reduce the overhead of the automatic NUMA placement
> code, when a workload spans multiple NUMA nodes.

trivial style note:

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
[]
> @@ -1737,6 +1737,7 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
>  	struct task_struct *p = current;
>  	bool migrated = flags & TNF_MIGRATED;
>  	int cpu_node = task_node(current);
> +	int local = !!(flags & TNF_FAULT_LOCAL);

Perhaps local would look nicer as bool
and be better placed next to migrated.

	bool migrated = flags & TNF_MIGRATED;
	bool local = flags & TNF_FAULT_LOCAL;
	int cpu_node = task_node(current);

> @@ -1785,6 +1786,17 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
> +	if (!priv && !local && p->numa_group &&
> +			node_isset(cpu_node, p->numa_group->active_nodes) &&
> +			node_isset(mem_node, p->numa_group->active_nodes))
> +		local = 1;

		local = true;
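
For reference, here is roughly how the two hunks would read with both
suggestions folded in -- just a sketch of the combined change assembled
from the quoted context, not compile-tested against the tree:

	struct task_struct *p = current;
	bool migrated = flags & TNF_MIGRATED;
	bool local = flags & TNF_FAULT_LOCAL;
	int cpu_node = task_node(current);
	[...]
	/* Treat a shared fault between two active nodes as local */
	if (!priv && !local && p->numa_group &&
			node_isset(cpu_node, p->numa_group->active_nodes) &&
			node_isset(mem_node, p->numa_group->active_nodes))
		local = true;

Assigning the masked flag straight to a bool already normalizes it to
true/false, so the !! isn't needed with that form.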


