Message-ID: <77f9dd55-fc5c-44c8-b7ac-eac68c1d378f@suse.cz>
Date: Wed, 28 May 2025 10:21:46 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Juan Yescas <jyescas@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Mike Rapoport
<rppt@...nel.org>, Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>, Zi Yan <ziy@...dia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: tjmercier@...gle.com, isaacmanjarres@...gle.com, kaleshsingh@...gle.com,
masahiroy@...nel.org, Minchan Kim <minchan@...nel.org>
Subject: Re: [PATCH v7] mm: Add CONFIG_PAGE_BLOCK_ORDER to select page block
order
On 5/21/25 23:57, Juan Yescas wrote:
> Problem: On large page size configurations (16KiB, 64KiB), the CMA
> alignment requirement (CMA_MIN_ALIGNMENT_BYTES) increases considerably,
> and this causes the CMA reservations to be larger than necessary.
> This means the system will have fewer available MIGRATE_UNMOVABLE and
> MIGRATE_RECLAIMABLE page blocks, since MIGRATE_CMA can't fall back to them.
>
> The CMA_MIN_ALIGNMENT_BYTES increases because it depends on
> MAX_PAGE_ORDER which depends on ARCH_FORCE_MAX_ORDER. The value of
> ARCH_FORCE_MAX_ORDER increases on 16k and 64k kernels.
>
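> For context, this dependency chain follows from definitions along these
> lines (a simplified sketch of include/linux/pageblock-flags.h and
> include/linux/cma.h; the HugeTLB case is omitted and this is not the
> exact upstream text):
>
>	/* With THP, pageblock_order is capped by the PMD size;
>	 * otherwise it tracks MAX_PAGE_ORDER. */
>	#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>	#define pageblock_order min_t(unsigned int, HPAGE_PMD_ORDER, MAX_PAGE_ORDER)
>	#else
>	#define pageblock_order MAX_PAGE_ORDER
>	#endif
>	#define pageblock_nr_pages (1UL << pageblock_order)
>
>	/* CMA regions must be aligned to a whole pageblock */
>	#define CMA_MIN_ALIGNMENT_PAGES pageblock_nr_pages
>	#define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)
>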
> For example, on ARM, the CMA alignment requirement when:
>
> - CONFIG_ARCH_FORCE_MAX_ORDER default value is used
> - CONFIG_TRANSPARENT_HUGEPAGE is set:
>
> PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order | CMA_MIN_ALIGNMENT_BYTES
> -----------------------------------------------------------------------
> 4KiB | 10 | 9 | 4KiB * (2 ^ 9) = 2MiB
> 16KiB | 11 | 11 | 16KiB * (2 ^ 11) = 32MiB
> 64KiB | 13 | 13 | 64KiB * (2 ^ 13) = 512MiB
>
> There are some extreme cases for the CMA alignment requirement when:
>
> - CONFIG_ARCH_FORCE_MAX_ORDER maximum value is set
> - CONFIG_TRANSPARENT_HUGEPAGE is NOT set
> - CONFIG_HUGETLB_PAGE is NOT set
>
> PAGE_SIZE | MAX_PAGE_ORDER | pageblock_order | CMA_MIN_ALIGNMENT_BYTES
> ------------------------------------------------------------------------
> 4KiB | 15 | 15 | 4KiB * (2 ^ 15) = 128MiB
> 16KiB | 13 | 13 | 16KiB * (2 ^ 13) = 128MiB
> 64KiB | 13 | 13 | 64KiB * (2 ^ 13) = 512MiB
>
> This affects the CMA reservations for drivers. If a driver on a
> 4KiB kernel needs 4MiB of CMA memory, the minimum reservation on a
> 16KiB kernel has to be 32MiB due to the alignment requirements:
>
>	/* 4KiB kernel: the 4 MiB request can be reserved as-is */
>	reserved-memory {
>		...
>		cma_test_reserve: cma_test_reserve {
>			compatible = "shared-dma-pool";
>			size = <0x0 0x400000>; /* 4 MiB */
>			...
>		};
>	};
>
>	/* 16KiB kernel: the same request must be rounded up to 32 MiB */
>	reserved-memory {
>		...
>		cma_test_reserve: cma_test_reserve {
>			compatible = "shared-dma-pool";
>			size = <0x0 0x2000000>; /* 32 MiB */
>			...
>		};
>	};
>
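> The 32MiB figure is just the 4MiB request rounded up to
> CMA_MIN_ALIGNMENT_BYTES. A minimal sketch of that round-up, assuming a
> 16KiB kernel with pageblock_order 11 (ALIGN() and the SZ_* constants
> are the kernel's helpers from linux/align.h and linux/sizes.h):
>
>	unsigned long min_align = SZ_16K << 11;            /* 32 MiB */
>	unsigned long reserved  = ALIGN(SZ_4M, min_align); /* -> 32 MiB */
>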
> Solution: Add a new config option, CONFIG_PAGE_BLOCK_ORDER, that
> allows setting the page block order on all architectures. The
> maximum page block order is given by ARCH_FORCE_MAX_ORDER.
>
> By default, CONFIG_PAGE_BLOCK_ORDER has the same value as
> ARCH_FORCE_MAX_ORDER. This ensures that current kernel
> configurations won't be affected by this change. It is an
> opt-in change.
>
> This patch allows kernels with large page sizes (16KiB, 64KiB)
> to have the same CMA alignment requirements as 4KiB kernels,
> by setting a lower pageblock_order.
>
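> A sketch of the idea (an illustration of the intent, not necessarily
> the patch's exact code):
>
>	/* pageblock_order is capped by the new option; because
>	 * CONFIG_PAGE_BLOCK_ORDER defaults to ARCH_FORCE_MAX_ORDER,
>	 * existing configurations see no change. */
>	#define pageblock_order \
>		min_t(unsigned int, CONFIG_PAGE_BLOCK_ORDER, MAX_PAGE_ORDER)
>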
> Tests:
>
> - Verified that HugeTLB pages work when pageblock_order is 1, 7, 10
>   on 4KiB and 16KiB kernels.
>
> - Verified that Transparent Huge Pages work when pageblock_order
>   is 1, 7, 10 on 4KiB and 16KiB kernels.
>
> - Verified that dma-buf heap allocations work when pageblock_order
>   is 1, 7, 10 on 4KiB and 16KiB kernels.
>
> Benchmarks:
>
> The benchmarks compare 16KiB kernels with pageblock_order 10 and 7.
> pageblock_order 7 was chosen because it makes the minimum CMA
> alignment requirement the same as on 4KiB kernels (2MiB).
>
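> (With pageblock_order 7 on a 16KiB kernel: 16KiB * (2 ^ 7) = 2MiB,
> matching the 4KiB kernel's 4KiB * (2 ^ 9) = 2MiB.)
>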
> - Performed 100K dma-buf heap (/dev/dma_heap/system) allocations of
>   SZ_8M, SZ_4M, SZ_2M, SZ_1M, SZ_64, SZ_8, SZ_4, and used simpleperf
>   (https://developer.android.com/ndk/guides/simpleperf) to measure
>   the number of instructions and page faults on 16KiB kernels.
>   The benchmark was executed 10 times; the runs and their averages
>   are below:
>
>         # instructions          | # page-faults
>    order 10    |    order 7     | order 10 | order 7
> --------------------------------------------------------
> 13,891,765,770 | 11,425,777,314 | 220 | 217
> 14,456,293,487 | 12,660,819,302 | 224 | 219
> 13,924,261,018 | 13,243,970,736 | 217 | 221
> 13,910,886,504 | 13,845,519,630 | 217 | 221
> 14,388,071,190 | 13,498,583,098 | 223 | 224
> 13,656,442,167 | 12,915,831,681 | 216 | 218
> 13,300,268,343 | 12,930,484,776 | 222 | 218
> 13,625,470,223 | 14,234,092,777 | 219 | 218
> 13,508,964,965 | 13,432,689,094 | 225 | 219
> 13,368,950,667 | 13,683,587,37 | 219 | 225
> -------------------------------------------------------------------
> 13,803,137,433 | 13,131,974,268 | 220 | 220 Averages
>
> There were 4.86% fewer instructions when the order was 7, in
> comparison with order 10:
>
> 13,131,974,268 - 13,803,137,433 = -671,163,165 (-4.86%)
>
> The average number of page faults (220) was the same for order 7
> and order 10.
>
> These results didn't show any significant regression when
> pageblock_order is set to 7 on 16KiB kernels.
>
> - Ran Speedometer 3.1 (https://browserbench.org/Speedometer3.1/) 5 times
>   on 16KiB kernels with pageblock_order 7 and 10.
>
> order 10 | order 7 | order 7 - order 10 | (order 7 - order 10) %
> -------------------------------------------------------------------
> 15.8 | 16.4 | 0.6 | 3.80%
> 16.4 | 16.2 | -0.2 | -1.22%
> 16.6 | 16.3 | -0.3 | -1.81%
> 16.8 | 16.3 | -0.5 | -2.98%
> 16.6 | 16.8 | 0.2 | 1.20%
> -------------------------------------------------------------------
> 16.44 | 16.4 | -0.04 | -0.24% Averages
>
> The results didn't show any significant regression when
> pageblock_order is set to 7 on 16KiB kernels.
>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: Liam R. Howlett <Liam.Howlett@...cle.com>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Mike Rapoport <rppt@...nel.org>
> Cc: Zi Yan <ziy@...dia.com>
> Cc: Suren Baghdasaryan <surenb@...gle.com>
> Cc: Minchan Kim <minchan@...nel.org>
> Signed-off-by: Juan Yescas <jyescas@...gle.com>
> Acked-by: Zi Yan <ziy@...dia.com>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>