Message-ID: <eb28b2fb-1480-4db0-a7e6-792716421f3d@redhat.com>
Date: Fri, 24 Nov 2023 20:56:59 +0100
From: David Hildenbrand <david@...hat.com>
To: Alexandru Elisei <alexandru.elisei@....com>,
catalin.marinas@....com, will@...nel.org, oliver.upton@...ux.dev,
maz@...nel.org, james.morse@....com, suzuki.poulose@....com,
yuzenghui@...wei.com, arnd@...db.de, akpm@...ux-foundation.org,
mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, mhiramat@...nel.org,
rppt@...nel.org, hughd@...gle.com
Cc: pcc@...gle.com, steven.price@....com, anshuman.khandual@....com,
vincenzo.frascino@....com, eugenis@...gle.com, kcc@...gle.com,
hyesoo.yu@...sung.com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, kvmarm@...ts.linux.dev,
linux-fsdevel@...r.kernel.org, linux-arch@...r.kernel.org,
linux-mm@...ck.org, linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH RFC v2 15/27] arm64: mte: Check that tag storage blocks
are in the same zone
On 19.11.23 17:57, Alexandru Elisei wrote:
> alloc_contig_range() requires that the requested pages are in the same
> zone. Check that this is indeed the case before initializing the tag
> storage blocks.
>
> Signed-off-by: Alexandru Elisei <alexandru.elisei@....com>
> ---
> arch/arm64/kernel/mte_tag_storage.c | 33 +++++++++++++++++++++++++++++
> 1 file changed, 33 insertions(+)
>
> diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c
> index 8b9bedf7575d..fd63430d4dc0 100644
> --- a/arch/arm64/kernel/mte_tag_storage.c
> +++ b/arch/arm64/kernel/mte_tag_storage.c
> @@ -265,6 +265,35 @@ void __init mte_tag_storage_init(void)
> }
> }
>
> +/* alloc_contig_range() requires all pages to be in the same zone. */
> +static int __init mte_tag_storage_check_zone(void)
> +{
> + struct range *tag_range;
> + struct zone *zone;
> + unsigned long pfn;
> + u32 block_size;
> + int i, j;
> +
> + for (i = 0; i < num_tag_regions; i++) {
> + block_size = tag_regions[i].block_size;
> + if (block_size == 1)
> + continue;
> +
> + tag_range = &tag_regions[i].tag_range;
> + for (pfn = tag_range->start; pfn <= tag_range->end; pfn += block_size) {
> + zone = page_zone(pfn_to_page(pfn));
> + for (j = 1; j < block_size; j++) {
> + if (page_zone(pfn_to_page(pfn + j)) != zone) {
> + pr_err("Tag storage block pages in different zones");
> + return -EINVAL;
> + }
> + }
> + }
> + }
> +
> + return 0;
> +}
> +
Looks like something that ordinary CMA already provides. See cma_activate_area().
Can't we find a way to let CMA do CMA thingies and only be a user of
that? What would it take to get rid of the performance issue you spelled
out in the cover letter without open-coding this check in arch code?
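
For reference, the per-zone validation CMA does at activation time looks
roughly like this. This is a simplified sketch modeled on cma_activate_area()
in mm/cma.c, not the exact upstream code, and the helper name is made up for
illustration:

/*
 * Simplified sketch of the zone check CMA performs when activating an
 * area (modeled on cma_activate_area() in mm/cma.c). All pages of the
 * range must belong to the same zone, otherwise the range is rejected.
 */
static int __init check_range_single_zone(unsigned long base_pfn,
					  unsigned long count)
{
	struct zone *zone = page_zone(pfn_to_page(base_pfn));
	unsigned long pfn;

	for (pfn = base_pfn + 1; pfn < base_pfn + count; pfn++)
		if (page_zone(pfn_to_page(pfn)) != zone)
			return -EINVAL;	/* range spans multiple zones */

	return 0;
}

If the tag storage regions were registered as CMA areas, a check of this
kind would already run during CMA activation at boot, so the arch code
would not need its own copy.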
--
Cheers,
David / dhildenb