Message-ID: <90505ef2-9250-d791-e05d-dbcb7672e4c4@nvidia.com>
Date: Mon, 10 Apr 2023 19:48:16 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Ard Biesheuvel <ardb@...nel.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Mark Rutland <mark.rutland@....com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Feiyang Chen <chenfeiyang@...ngson.cn>,
Alistair Popple <apopple@...dia.com>,
Ralph Campbell <rcampbell@...dia.com>,
<linux-arm-kernel@...ts.infradead.org>,
LKML <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<stable@...r.kernel.org>
Subject: Re: [PATCH] arm64/mm: don't WARN when alloc/free-ing device private
pages

On 4/10/23 00:39, John Hubbard wrote:
>> pfn_to_page(x) for values 0xc00_0000 < x < 0x1000_0000 will produce a
>> kernel VA that points outside the region set aside for the vmemmap.
>> This region is currently unused, but that will likely change soon.
>>
>
> I tentatively think I'm in this case right now, because there is no
> wraparound happening in my particular config: CONFIG_ARM64_VA_BITS == 48,
> PAGE_SIZE == 4KB, and sizeof(struct page) == 64 (details below).
>

Correction: it actually *is* wrapping around, and ending up as a bogus
user space address, as you said it would for values above that range:

    page_to_pfn(0xffffffffaee00000): 0x0000000ffec38000
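
To make that concrete for myself, here is a quick userspace
back-of-the-envelope (not kernel code). It just does the "vmemmap + pfn"
arithmetic, using sizeof(struct page) == 64 from my config, a made-up
vmemmap base, and the pfn values quoted above:

/* Not kernel code: with a sparse vmemmap, pfn_to_page() boils down to
 * "vmemmap + pfn", so each pfn costs sizeof(struct page) bytes of
 * vmemmap VA (64 bytes in my config). The base is made up purely for
 * illustration. */
#include <stdio.h>
#include <stdint.h>

#define FAKE_VMEMMAP_BASE   0xfffffbfffe000000ULL  /* assumed, not the real value */
#define SIZEOF_STRUCT_PAGE  64ULL                  /* matches my config */

static uint64_t sketch_pfn_to_page(uint64_t pfn)
{
        /* plain unsigned 64-bit math: a big enough pfn silently wraps */
        return FAKE_VMEMMAP_BASE + pfn * SIZEOF_STRUCT_PAGE;
}

int main(void)
{
        uint64_t ram_pfn_max = 0x10000000ULL;   /* ~1 TB of RAM, the top of the range quoted above */
        uint64_t trace_pfn   = 0xffec38000ULL;  /* the pfn printed in my trace above */

        printf("vmemmap bytes needed for RAM pfns: 0x%llx\n",
               (unsigned long long)(ram_pfn_max * SIZEOF_STRUCT_PAGE));
        printf("vmemmap offset of the traced pfn:  0x%llx\n",
               (unsigned long long)(trace_pfn * SIZEOF_STRUCT_PAGE));
        printf("sketch pfn_to_page(0x%llx): 0x%llx\n",
               (unsigned long long)trace_pfn,
               (unsigned long long)sketch_pfn_to_page(trace_pfn));
        return 0;
}

The point is just that a pfn that high wants a struct page several TB past
the start of the array, far beyond anything the region was sized for.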

> It occurs to me that ZONE_DEVICE and (within that category) device
> private page support only need to work on rather large setups. On x86,
> it requires 64-bit. And on arm64, from what I'm learning after a day or
> so of looking around and comparing, I think we must require at least
> 48-bit VA support. Otherwise there's just no room for things.

I'm still not sure how to make room, but I'm working on it.
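
For the record, here is the rough sizing that makes me think 48-bit VA is
the floor. This assumes 4K pages, 64-byte struct page, a 52-bit physical
address space, and that the vmemmap would have to cover any representable
pfn, including the high "fake" pfns handed out for device private memory:

/* Rough sizing only, based on the assumptions stated above. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        const uint64_t page_shift  = 12;   /* 4K pages */
        const uint64_t struct_page = 64;   /* sizeof(struct page) in my config */
        const uint64_t pa_bits     = 52;   /* assumed max physical address bits */

        /* bytes of vmemmap VA needed to cover every pfn below 2^pa_bits */
        uint64_t vmemmap_bytes = ((1ULL << pa_bits) >> page_shift) * struct_page;
        int va_bits[] = { 39, 42, 48 };
        unsigned int i;

        printf("vmemmap needed for the full PA space: %llu GiB\n",
               (unsigned long long)(vmemmap_bytes >> 30));

        for (i = 0; i < sizeof(va_bits) / sizeof(va_bits[0]); i++) {
                /* the arm64 kernel (TTBR1) region spans 2^VA_BITS bytes,
                 * and much of it already goes to the linear map, vmalloc,
                 * modules, etc. */
                uint64_t kernel_va = 1ULL << va_bits[i];

                printf("VA_BITS=%d: total kernel VA %llu GiB\n",
                       va_bits[i], (unsigned long long)(kernel_va >> 30));
        }
        return 0;
}

With 39- or 42-bit VA the whole kernel address space is smaller than the
vmemmap that would be needed, so 48 bits looks like the practical minimum,
and even then the existing vmemmap window isn't sized for pfns that high,
which is the "make room" problem.
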
thanks,
--
John Hubbard
NVIDIA