Date:	Mon, 20 Jan 2014 17:57:47 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	riel@...hat.com
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	chegu_vinod@...com, mgorman@...e.de, mingo@...hat.com
Subject:	Re: [PATCH 6/7] numa,sched: normalize faults_from stats and weigh by CPU use

On Fri, Jan 17, 2014 at 04:12:08PM -0500, riel@...hat.com wrote:
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 0af6c1a..52de567 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1471,6 +1471,8 @@ struct task_struct {
>  	int numa_preferred_nid;
>  	unsigned long numa_migrate_retry;
>  	u64 node_stamp;			/* migration stamp  */
> +	u64 last_task_numa_placement;
> +	u64 last_sum_exec_runtime;
>  	struct callback_head numa_work;
>  
>  	struct list_head numa_entry;

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8e0a53a..0d395a0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1422,11 +1422,41 @@ static void update_task_scan_period(struct task_struct *p,
>  	memset(p->numa_faults_locality, 0, sizeof(p->numa_faults_locality));
>  }
>  
> +/*
> + * Get the fraction of time the task has been running since the last
> + * NUMA placement cycle. The scheduler keeps similar statistics, but
> + * decays those on a 32ms period, which is orders of magnitude off
> + * from the dozens-of-seconds NUMA balancing period. Use the scheduler
> + * stats only if the task is so new there are no NUMA statistics yet.
> + */
> +static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period)
> +{
> +	u64 runtime, delta, now;
> +	/* Use the start of this time slice to avoid calculations. */
> +	now = p->se.exec_start;
> +	runtime = p->se.sum_exec_runtime;
> +
> +	if (p->last_task_numa_placement) {
> +		delta = runtime - p->last_sum_exec_runtime;
> +		*period = now - p->last_task_numa_placement;
> +	} else {
> +		delta = p->se.avg.runnable_avg_sum;
> +		*period = p->se.avg.runnable_avg_period;
> +	}
> +
> +	p->last_sum_exec_runtime = runtime;
> +	p->last_task_numa_placement = now;
> +
> +	return delta;
> +}
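
For context on where these numbers go, here is a minimal sketch of how a
caller might weigh fault statistics by the returned delta/period pair,
in the spirit of the patch title ("weigh by CPU use"). The fixed-point
scaling and the f_weight name are illustrative, not taken from the hunk
above:

	u64 runtime, period, f_weight;

	runtime = numa_get_avg_runtime(p, &period);
	/*
	 * Fraction of the placement interval the task actually spent
	 * on a CPU, scaled by 2^16 so integer math keeps some
	 * precision. The "+ 1" avoids a divide by zero on the very
	 * first placement pass, when period may still be 0.
	 */
	f_weight = div64_u64(runtime << 16, period + 1);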

Have you tried what happens if you use p->se.avg.runnable_avg_sum /
p->se.avg.runnable_avg_period instead? If that also works, it avoids
growing the data structures and keeping yet another set of runtime
stats.
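
A minimal sketch of that alternative, reading the decayed per-entity
load-average fields quoted above directly; hypothetical code for
illustration, not part of the series:

	static u64 numa_get_avg_runtime_alt(struct task_struct *p, u64 *period)
	{
		/*
		 * Reuse the scheduler's existing runnable-average
		 * statistics instead of keeping a second set of
		 * counters on task_struct.
		 */
		*period = p->se.avg.runnable_avg_period;
		return p->se.avg.runnable_avg_sum;
	}

The trade-off, per the comment in the patch above, is that these
averages decay on a ~32ms period, orders of magnitude shorter than the
dozens-of-seconds NUMA balancing interval.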
