lists.openwall.net — Open Source and information security mailing list archives
Date: Tue, 25 Aug 2015 11:33:00 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
	Rik van Riel <riel@...hat.com>, David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>, Michal Hocko <mhocko@...nel.org>,
	Linux-MM <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 04/12] mm, page_alloc: Only check cpusets when one exists
	that can be mem-controlled

On Mon, Aug 24, 2015 at 10:53:37PM +0200, Vlastimil Babka wrote:
> On 24.8.2015 15:16, Mel Gorman wrote:
> >>>
> >>> 	return read_seqcount_retry(&current->mems_allowed_seq, seq);
> >>> @@ -139,7 +141,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
> >>>
> >>>  #else /* !CONFIG_CPUSETS */
> >>>
> >>> -static inline bool cpusets_enabled(void) { return false; }
> >>> +static inline bool cpusets_mems_enabled(void) { return false; }
> >>>
> >>>  static inline int cpuset_init(void) { return 0; }
> >>>  static inline void cpuset_init_smp(void) {}
> >>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >>> index 62ae28d8ae8d..2c1c3bf54d15 100644
> >>> --- a/mm/page_alloc.c
> >>> +++ b/mm/page_alloc.c
> >>> @@ -2470,7 +2470,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> >>>  		if (IS_ENABLED(CONFIG_NUMA) && zlc_active &&
> >>>  			!zlc_zone_worth_trying(zonelist, z, allowednodes))
> >>>  				continue;
> >>> -		if (cpusets_enabled() &&
> >>> +		if (cpusets_mems_enabled() &&
> >>>  			(alloc_flags & ALLOC_CPUSET) &&
> >>>  			!cpuset_zone_allowed(zone, gfp_mask))
> >>>  				continue;
> >>
> >> Here the benefits are less clear. I guess cpuset_zone_allowed() is
> >> potentially costly...
> >>
> >> Heck, shouldn't we just start the static key on -1 (if possible), so that
> >> it's enabled only when there's 2+ cpusets?
> Hm wait a minute, that's what already happens:
>
> static inline int nr_cpusets(void)
> {
> 	/* jump label reference count + the top-level cpuset */
> 	return static_key_count(&cpusets_enabled_key) + 1;
> }
>
> I.e. if there's only the root cpuset, static key is disabled, so I think this
> patch is moot after all?

static_key_count is an atomic read on a field in struct static_key whereas
static_key_false is an arch_static_branch which can be eliminated. The patch
eliminates an atomic read so I didn't think it was moot.

-- 
Mel Gorman
SUSE Labs
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/