Message-ID: <b3226af2-5d19-5377-9072-179388cb2609@redhat.com>
Date: Mon, 20 Jun 2022 09:51:42 +0200
From: David Hildenbrand <david@...hat.com>
To: Muchun Song <songmuchun@...edance.com>, akpm@...ux-foundation.org,
corbet@....net, mike.kravetz@...cle.com, osalvador@...e.de,
paulmck@...nel.org
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, duanxiongchun@...edance.com, smuchun@...il.com
Subject: Re: [PATCH v4 1/2] mm: memory_hotplug: enumerate all supported
section flags
On 19.06.22 15:38, Muchun Song wrote:
> We are almost running out of section flags: only one bit is available in
> the worst case (powerpc with 256k pages). However, there are still some
> free bits (in ->section_mem_map) on other architectures (e.g. x86_64 has
> 10 bits available, and arm64 has 8 bits available in the worst case of
> 64K pages). Those numbers are hard coded, which makes it inconvenient to
> use the extra bits on architectures other than powerpc. So convert the
> section flags to an enumeration to make it easier to add new section
> flags in the future. Also, move SECTION_TAINT_ZONE_DEVICE into the scope
> of CONFIG_ZONE_DEVICE to save a bit in the non-zone-device case.
>
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> include/linux/mmzone.h | 44 +++++++++++++++++++++++++++++++++++---------
> mm/memory_hotplug.c | 6 ++++++
> mm/sparse.c | 2 +-
> 3 files changed, 42 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index aab70355d64f..932843c6459b 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1418,16 +1418,35 @@ extern size_t mem_section_usage_size(void);
> * (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
> * worst combination is powerpc with 256k pages,
> * which results in PFN_SECTION_SHIFT equal 6.
> - * To sum it up, at least 6 bits are available.
> + * To sum it up, at least 6 bits are available on all architectures.
> + * However, we can exceed 6 bits on architectures other than powerpc
> + * (e.g. 15 bits are available on x86_64, and 13 bits are available in
> + * the worst case of 64K pages on arm64) as long as we make sure the
> + * extra bits are not used on powerpc.
> */
> -#define SECTION_MARKED_PRESENT (1UL<<0)
> -#define SECTION_HAS_MEM_MAP (1UL<<1)
> -#define SECTION_IS_ONLINE (1UL<<2)
> -#define SECTION_IS_EARLY (1UL<<3)
> -#define SECTION_TAINT_ZONE_DEVICE (1UL<<4)
> -#define SECTION_MAP_LAST_BIT (1UL<<5)
> -#define SECTION_MAP_MASK (~(SECTION_MAP_LAST_BIT-1))
> -#define SECTION_NID_SHIFT 6
> +enum {
> + SECTION_MARKED_PRESENT_BIT,
> + SECTION_HAS_MEM_MAP_BIT,
> + SECTION_IS_ONLINE_BIT,
> + SECTION_IS_EARLY_BIT,
> +#ifdef CONFIG_ZONE_DEVICE
> + SECTION_TAINT_ZONE_DEVICE_BIT,
> +#endif
> + SECTION_MAP_LAST_BIT,
> +};
> +
> +enum {
> + SECTION_MARKED_PRESENT = BIT(SECTION_MARKED_PRESENT_BIT),
> + SECTION_HAS_MEM_MAP = BIT(SECTION_HAS_MEM_MAP_BIT),
> + SECTION_IS_ONLINE = BIT(SECTION_IS_ONLINE_BIT),
> + SECTION_IS_EARLY = BIT(SECTION_IS_EARLY_BIT),
> +#ifdef CONFIG_ZONE_DEVICE
> + SECTION_TAINT_ZONE_DEVICE = BIT(SECTION_TAINT_ZONE_DEVICE_BIT),
> +#endif
> +};

I can understand the reason for the first enum: it auto-assigns the bit
numbers. What's the underlying reason for making the flag values an enum
as well? Personally, I'd just stay with defines, so I'm curious :)
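
Roughly something like the following (untested sketch), keeping the
auto-assigned *_BIT enum from the patch but leaving the flag values as
plain defines:

#define SECTION_MARKED_PRESENT		BIT(SECTION_MARKED_PRESENT_BIT)
#define SECTION_HAS_MEM_MAP		BIT(SECTION_HAS_MEM_MAP_BIT)
#define SECTION_IS_ONLINE		BIT(SECTION_IS_ONLINE_BIT)
#define SECTION_IS_EARLY		BIT(SECTION_IS_EARLY_BIT)
#ifdef CONFIG_ZONE_DEVICE
#define SECTION_TAINT_ZONE_DEVICE	BIT(SECTION_TAINT_ZONE_DEVICE_BIT)
#endif
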
LGTM
--
Thanks,
David / dhildenb