Date:	Mon, 10 Mar 2014 17:52:07 +0800
From:	Viresh Kumar <viresh.kumar@...aro.org>
To:	Kevin Hilman <khilman@...aro.org>
Cc:	Tejun Heo <tj@...nel.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Zoran Markovic <zoran.markovic@...aro.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Shaibal Dutta <shaibal.dutta@...adcom.com>,
	Dipankar Sarma <dipankar@...ibm.com>
Subject: Re: [RFC PATCH] rcu: move SRCU grace period work to power efficient workqueue

On Sat, Feb 15, 2014 at 7:24 AM, Kevin Hilman <khilman@...aro.org> wrote:
> From 902a2b58d61a51415457ea6768d687cdb7532eff Mon Sep 17 00:00:00 2001
> From: Kevin Hilman <khilman@...aro.org>
> Date: Fri, 14 Feb 2014 15:10:58 -0800
> Subject: [PATCH] workqueue: for NO_HZ_FULL, set default cpumask to
>  !tick_nohz_full_mask
>
> To help in keeping NO_HZ_FULL CPUs isolated, keep unbound workqueues
> from running on full dynticks CPUs.  To do this, set the default
> workqueue cpumask to be the set of "housekeeping" CPUs instead of all
> possible CPUs.
>
> This is just the starting/default cpumask, and can be overridden in
> all the normal ways (NUMA settings, apply_workqueue_attrs() and via
> sysfs for workqueues with the WQ_SYSFS attribute).
>
> Cc: Tejun Heo <tj@...nel.org>
> Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>
> Cc: Frederic Weisbecker <fweisbec@...il.com>
> Signed-off-by: Kevin Hilman <khilman@...aro.org>
> ---
>  kernel/workqueue.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 987293d03ebc..9a9d9b0eaf6d 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -48,6 +48,7 @@
>  #include <linux/nodemask.h>
>  #include <linux/moduleparam.h>
>  #include <linux/uaccess.h>
> +#include <linux/tick.h>
>
>  #include "workqueue_internal.h"
>
> @@ -3436,7 +3437,11 @@ struct workqueue_attrs *alloc_workqueue_attrs(gfp_t gfp_mask)
>         if (!alloc_cpumask_var(&attrs->cpumask, gfp_mask))
>                 goto fail;
>
> +#ifdef CONFIG_NO_HZ_FULL
> +       cpumask_complement(attrs->cpumask, tick_nohz_full_mask);
> +#else
>         cpumask_copy(attrs->cpumask, cpu_possible_mask);
> +#endif
>         return attrs;
>  fail:
>         free_workqueue_attrs(attrs);
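
(For context, overriding that default for a particular workqueue with
the existing attrs API would look roughly like the untested sketch
below; the workqueue name and helper are made up, and error handling
is trimmed:)

#include <linux/workqueue.h>
#include <linux/cpumask.h>
#include <linux/gfp.h>

/* Untested sketch: pin one unbound workqueue to CPU0, regardless of
 * the default cpumask picked in alloc_workqueue_attrs(). */
static int pin_my_wq_to_cpu0(void)
{
	struct workqueue_struct *wq;
	struct workqueue_attrs *attrs;
	int ret = -ENOMEM;

	wq = alloc_workqueue("my_unbound_wq", WQ_UNBOUND | WQ_SYSFS, 0);
	attrs = alloc_workqueue_attrs(GFP_KERNEL);
	if (wq && attrs) {
		cpumask_copy(attrs->cpumask, cpumask_of(0));
		ret = apply_workqueue_attrs(wq, attrs);
	}
	free_workqueue_attrs(attrs);
	return ret;
}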

Can we play with this mask at runtime? A better idea might be to keep
this mask as the mask of all CPUs initially, and once any CPU enters
NO_HZ_FULL mode we remove it from the mask. And once it leaves that
mode we add it back again.
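
Something like this completely untested sketch is what I have in mind
(all names here are made up; alloc_workqueue_attrs() would then just
copy wq_unbound_default_mask unconditionally instead of checking
tick_nohz_full_mask):

#include <linux/cpumask.h>

/* Made-up global: the default cpumask for unbound workqueue attrs.
 * Starts as "all CPUs" and is updated as CPUs enter/leave full
 * dynticks mode. */
static cpumask_t wq_unbound_default_mask = CPU_MASK_ALL;

/* Hypothetical hooks, called from the nohz-full entry/exit paths
 * for @cpu. */
void workqueue_nohz_full_enter(int cpu)
{
	cpumask_clear_cpu(cpu, &wq_unbound_default_mask);
}

void workqueue_nohz_full_exit(int cpu)
{
	cpumask_set_cpu(cpu, &wq_unbound_default_mask);
}

Whether attrs that were already allocated should be updated as well is
a separate question; the above only affects newly allocated ones.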

I am looking to use a similar concept for un-pinned timers in my work
around the cpuset.quiesce option.
