Message-ID: <20140124153135.GZ4963@suse.de>
Date:	Fri, 24 Jan 2014 15:31:35 +0000
From:	Mel Gorman <mgorman@...e.de>
To:	riel@...hat.com
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	peterz@...radead.org, mingo@...hat.com, chegu_vinod@...com
Subject: Re: [PATCH 3/9] numa,sched: track from which nodes NUMA faults are triggered

On Tue, Jan 21, 2014 at 05:20:05PM -0500, riel@...hat.com wrote:
> From: Rik van Riel <riel@...hat.com>
> 
> Track the nodes from which NUMA faults are triggered, in other words
> the nodes of the CPUs on which the NUMA faults happened. This uses a
> mechanism similar to the one used to track the memory involved in
> NUMA faults.
> 
> The next patches use this to build up a bitmap of which nodes a
> workload is actively running on.
> 
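
The "similar mechanism" on the memory side keeps a long-term per-node
array plus a per-scan-window buffer, and folds the buffer into the
long-term counts with decay when the window ends. For anyone following
along, a standalone sketch of that pattern (the node count and the
names are simplified assumptions, not the kernel's actual layout):

/*
 * Decay-and-fold: halve the long-term per-node counts, then add in
 * the faults recorded during the last scan window.  NR_NODES and the
 * naming are simplified assumptions for illustration.
 */
#define NR_NODES 4

static void fold_fault_buffer(unsigned long faults[NR_NODES],
			      unsigned long buffer[NR_NODES])
{
	int nid;

	for (nid = 0; nid < NR_NODES; nid++) {
		faults[nid] >>= 1;		/* decay old history */
		faults[nid] += buffer[nid];	/* fold in new window */
		buffer[nid] = 0;		/* reset for next window */
	}
}
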
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Mel Gorman <mgorman@...e.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Chegu Vinod <chegu_vinod@...com>
> Signed-off-by: Rik van Riel <riel@...hat.com>
> ---
>  include/linux/sched.h | 10 ++++++++--
>  kernel/sched/fair.c   | 30 +++++++++++++++++++++++-------
>  2 files changed, 31 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index b8f8476..d14d9fe 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1492,6 +1492,14 @@ struct task_struct {
>  	unsigned long *numa_faults_buffer_memory;
>  
>  	/*
> +	 * Track the nodes where faults are incurred. This is not very
> +	 * interesting on a per-task basis, but it helps with smarter
> +	 * numa memory placement for groups of processes.
> +	 */
> +	unsigned long *numa_faults_cpu;
> +	unsigned long *numa_faults_buffer_cpu;
> +

/*
 * Track the nodes the process was running on when a NUMA hinting fault
 * was incurred ......
 */

?

Otherwise the comment is very similar to numa_faults_memory. I'm not
that bothered because the name is descriptive enough.
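
Since the changelog says the next patches build a bitmap of the nodes
a workload is actively running on from these counters, I'd guess
something along these lines (illustrative only; the any-fault
threshold and the names are assumptions, not the later patch):

/*
 * Illustrative sketch: collapse per-node CPU fault counts into an
 * "actively running on" node mask.  NR_NODES, the naming and the
 * any-fault threshold are guesses, not the actual later patch.
 */
#define NR_NODES 4

static unsigned long active_node_mask(const unsigned long faults_cpu[NR_NODES])
{
	unsigned long mask = 0;
	int nid;

	for (nid = 0; nid < NR_NODES; nid++) {
		if (faults_cpu[nid])		/* saw faults while running here */
			mask |= 1UL << nid;
	}

	return mask;
}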


> +	/*
>  	 * numa_faults_locality tracks if faults recorded during the last
>  	 * scan window were remote/local. The task scan period is adapted
>  	 * based on the locality of the faults with different weights
> @@ -1594,8 +1602,6 @@ extern void task_numa_fault(int last_node, int node, int pages, int flags);
>  extern pid_t task_numa_group_id(struct task_struct *p);
>  extern void set_numabalancing_state(bool enabled);
>  extern void task_numa_free(struct task_struct *p);
> -
> -extern unsigned int sysctl_numa_balancing_migrate_deferred;
>  #else
>  static inline void task_numa_fault(int last_node, int node, int pages,
>  				   int flags)

Should this hunk move to patch 1?
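
While we are in this area: the numa_faults_locality comment in the
context above describes the scan period adapting to fault locality.
In rough terms, mostly-local faults mean placement has settled so
scanning backs off, and mostly-remote faults mean it has not so
scanning speeds up. A toy version of the idea (the factor of two and
the bounds are illustrative, not the kernel's actual tuning):

/*
 * Toy scan-period adaptation: back off when faults are mostly local,
 * scan harder when they are mostly remote.  The factor of two and
 * the bounds are illustrative, not the kernel's real tuning.
 */
static unsigned int adapt_scan_period(unsigned int period_ms,
				      unsigned long local_faults,
				      unsigned long remote_faults)
{
	if (local_faults >= remote_faults)
		period_ms *= 2;		/* settled: scan less often */
	else
		period_ms /= 2;		/* unsettled: scan more often */

	if (period_ms < 100)		/* illustrative lower bound */
		period_ms = 100;
	if (period_ms > 60000)		/* illustrative upper bound */
		period_ms = 60000;

	return period_ms;
}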

Whether you make the changes or not,

Acked-by: Mel Gorman <mgorman@...e.de>

In my last review I complained about magic numbers, but I see a later
patch has a subject that at least implies it deals with them.

-- 
Mel Gorman
SUSE Labs