Message-ID: <20150824131616.GK12432@techsingularity.net>
Date: Mon, 24 Aug 2015 14:16:16 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Rik van Riel <riel@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Michal Hocko <mhocko@...nel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 04/12] mm, page_alloc: Only check cpusets when one exists
 that can be mem-controlled

On Mon, Aug 24, 2015 at 02:37:41PM +0200, Vlastimil Babka wrote:
> >
> >+/* Returns true if a cpuset exists that can set cpuset.mems */
> >+static inline bool cpusets_mems_enabled(void)
> >+{
> >+	return nr_cpusets() > 1;
> >+}
> >+
>
> Hm, but this loses the benefits of static key branches?
> How about something like:
>
> if (static_key_false(&cpusets_enabled_key))
> 	return nr_cpusets() > 1;
> else
> 	return false;
>
Will do.
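
Something like this, presumably (untested sketch on top of this patch,
reusing the existing cpusets_enabled_key):

/* Returns true if a cpuset exists that can set cpuset.mems */
static inline bool cpusets_mems_enabled(void)
{
	/* Keep the static key fast path for the no-cpusets case */
	if (static_key_false(&cpusets_enabled_key))
		return nr_cpusets() > 1;

	return false;
}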
>
>
> > static inline void cpuset_inc(void)
> > {
> > 	static_key_slow_inc(&cpusets_enabled_key);
> >@@ -104,7 +106,7 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
> > */
> > static inline unsigned int read_mems_allowed_begin(void)
> > {
> >-	if (!cpusets_enabled())
> >+	if (!cpusets_mems_enabled())
> > 		return 0;
> >
> > 	return read_seqcount_begin(&current->mems_allowed_seq);
> >@@ -118,7 +120,7 @@ static inline unsigned int read_mems_allowed_begin(void)
> > */
> > static inline bool read_mems_allowed_retry(unsigned int seq)
> > {
> >-	if (!cpusets_enabled())
> >+	if (!cpusets_mems_enabled())
> > 		return false;
>
> Actually I doubt it's of much benefit for these usages, even if the
> static key benefits are restored. If there's a single root cpuset, we
> would check the seqlock prior to this patch; now we'll check the static
> key value (which should have the same cost?). With >1 cpusets, we would
> check the seqlock prior to this patch; now we'll check the static key
> value *and* the seqlock...
>
If cpusets become enabled between the begin and the retry checks, it
still needs to retry. Anyway, special-casing this is overkill. It's a
small micro-optimisation.
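
For reference, callers pair these in the usual seqcount fashion, roughly
(a sketch of the pattern, not a verbatim caller):

	unsigned int cookie;

	do {
		cookie = read_mems_allowed_begin();
		/* allocation attempt that reads current->mems_allowed */
	} while (read_mems_allowed_retry(cookie));

so the begin/retry pair runs on every allocation attempt, which is why
the cost of the check matters at all.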
> >
> > 	return read_seqcount_retry(&current->mems_allowed_seq, seq);
> >@@ -139,7 +141,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
> >
> > #else /* !CONFIG_CPUSETS */
> >
> >-static inline bool cpusets_enabled(void) { return false; }
> >+static inline bool cpusets_mems_enabled(void) { return false; }
> >
> > static inline int cpuset_init(void) { return 0; }
> > static inline void cpuset_init_smp(void) {}
> >diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >index 62ae28d8ae8d..2c1c3bf54d15 100644
> >--- a/mm/page_alloc.c
> >+++ b/mm/page_alloc.c
> >@@ -2470,7 +2470,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> > 		if (IS_ENABLED(CONFIG_NUMA) && zlc_active &&
> > 			!zlc_zone_worth_trying(zonelist, z, allowednodes))
> > 				continue;
> >-		if (cpusets_enabled() &&
> >+		if (cpusets_mems_enabled() &&
> > 			(alloc_flags & ALLOC_CPUSET) &&
> > 			!cpuset_zone_allowed(zone, gfp_mask))
> > 				continue;
>
> Here the benefits are less clear. I guess cpuset_zone_allowed() is
> potentially costly...
>
> Heck, shouldn't we just start the static key at -1 (if possible), so
> that it's enabled only when there are 2+ cpusets?
It's overkill for the amount of benefit.
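
FWIW, as far as I know a jump label cannot start at a negative count, so
it would have to be approximated with a separate counter, something like
(hypothetical sketch, and exactly the sort of machinery I mean):

/* would need to live in kernel/cpuset.c rather than the header */
static atomic_t cpuset_count = ATOMIC_INIT(0);

static inline void cpuset_inc(void)
{
	/* Flip the static key only once a second cpuset appears */
	if (atomic_inc_return(&cpuset_count) == 2)
		static_key_slow_inc(&cpusets_enabled_key);
}

static inline void cpuset_dec(void)
{
	if (atomic_dec_return(&cpuset_count) == 1)
		static_key_slow_dec(&cpusets_enabled_key);
}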
--
Mel Gorman
SUSE Labs