Message-ID: <ZMwVanb0nTbOiWyn@yury-ThinkPad>
Date: Thu, 3 Aug 2023 14:00:26 -0700
From: Yury Norov <yury.norov@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: andriy.shevchenko@...ux.intel.com, linux@...musvillemoes.dk,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mateusz Guzik <mjguzik@...il.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
tglx@...utronix.de, rppt@...nel.org
Subject: Re: [PATCH v2 2/2] mm,nodemask: Use nr_node_ids
> Consider MAX_NUMNODES == 64 and nr_node_ids == 4. Then
> small_nodemask_bits == 64.
>
> The nodes_full() will set all 64 bits:
>
> #define nodes_full(nodemask) __nodes_full(&(nodemask), small_nodemask_bits)
> static inline bool __nodes_full(const nodemask_t *srcp, unsigned int nbits)
> {
> 	return bitmap_full(srcp->bits, nbits);
> }
Damn, copied the wrong function. This should be nodes_setall() of
course:

#define nodes_setall(dst) __nodes_setall(&(dst), large_nodemask_bits)
static inline void __nodes_setall(nodemask_t *dstp, unsigned int nbits)
{
	bitmap_fill(dstp->bits, nbits);
}
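
Either way the effect is the same. If it helps, here is a trivial
userspace mock-up of the 64/4 example (plain C with made-up globals,
nothing taken from the actual nodemask code):

#include <stdio.h>

#define MAX_NUMNODES	64		/* compile-time mask size */

static unsigned int nr_node_ids = 4;	/* nodes that actually exist */

int main(void)
{
	/* nodes_setall() analogue: set all MAX_NUMNODES bits
	 * (assumes 64-bit long, i.e. MAX_NUMNODES == BITS_PER_LONG) */
	unsigned long bits = ~0UL;

	/* nodes_weight() analogue: popcount over MAX_NUMNODES bits */
	printf("weight = %d, nr_node_ids = %u\n",
	       __builtin_popcountl(bits), nr_node_ids);

	return 0;
}

This prints "weight = 64, nr_node_ids = 4", which is exactly the
mismatch described below.
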
> And the following nodes_weight() will return 64:
>
> #define nodes_weight(nodemask) __nodes_weight(&(nodemask), small_nodemask_bits)
> static inline int __nodes_weight(const nodemask_t *srcp, unsigned int nbits)
> {
> 	return bitmap_weight(srcp->bits, nbits);
> }
>
> Which is definitely wrong because there are at most 4 nodes. To solve
> this problem, both cpumask and nodemask implementations share the same
> rule: all bits beyond nr_{node,cpumask}_bits must always be cleared.
>
> See how cpumask_setall() implements that:
>
> static inline void cpumask_setall(struct cpumask *dstp)
> {
> 	// Make sure we don't break the optimization
> 	if (small_const_nbits(small_cpumask_bits)) {
> 		cpumask_bits(dstp)[0] = BITMAP_LAST_WORD_MASK(nr_cpumask_bits);
> 		return;
> 	}
>
> 	// Pass the exact (runtime) number of bits
> 	bitmap_fill(cpumask_bits(dstp), nr_cpumask_bits);
> }
>
> Hope that makes sense.
>
> Thanks,
> Yury
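
For nodemask, following that rule would mean something along these
lines. Completely untested sketch, just to illustrate how
__nodes_setall() could mirror cpumask_setall(); it assumes the usual
nodemask_t / nr_node_ids / small_nodemask_bits definitions and is not
a proposal tested against this patch:

#define nodes_setall(dst) __nodes_setall(&(dst))
static inline void __nodes_setall(nodemask_t *dstp)
{
	// Keep the single-word optimization, but don't set bits
	// beyond nr_node_ids
	if (small_const_nbits(small_nodemask_bits)) {
		dstp->bits[0] = BITMAP_LAST_WORD_MASK(nr_node_ids);
		return;
	}

	// Pass the exact (runtime) number of node bits
	bitmap_fill(dstp->bits, nr_node_ids);
}

With that, the 64/4 example above should end up with only the low 4
bits set, and nodes_weight() should return 4 as expected.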