Message-ID: <9c0956f0-494e-5c6b-bdc2-d4213afd5e2f@redhat.com>
Date: Wed, 14 Apr 2021 17:58:26 +0200
From: David Hildenbrand <david@...hat.com>
To: Anshuman Khandual <anshuman.khandual@....com>,
Mike Rapoport <rppt@...nel.org>,
linux-arm-kernel@...ts.infradead.org
Cc: Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Will Deacon <will@...nel.org>, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC/RFT PATCH 2/3] arm64: decouple check whether pfn is normal
memory from pfn_valid()
On 08.04.21 07:14, Anshuman Khandual wrote:
>
> On 4/7/21 10:56 PM, Mike Rapoport wrote:
>> From: Mike Rapoport <rppt@...ux.ibm.com>
>>
>> The intended semantics of pfn_valid() is to verify whether there is a
>> struct page for the pfn in question and nothing else.
>
> Should there be a comment affirming this interpretation of the semantics,
> above the generic pfn_valid() in include/linux/mmzone.h?
>
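A comment would not hurt. Below is just a rough sketch of possible
wording (my wording, not something from this patch):

/*
 * pfn_valid() only checks that a struct page exists for this pfn; it
 * does not imply that the page is usable RAM, nor that it is mapped
 * in the kernel linear map.
 */
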
>>
>> Yet, on arm64 it is used to distinguish memory areas that are mapped in the
>> linear map vs those that require ioremap() to access them.
>>
>> Introduce a dedicated pfn_is_memory() to perform such check and use it
>> where appropriate.
>>
>> Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
>> ---
>> arch/arm64/include/asm/memory.h | 2 +-
>> arch/arm64/include/asm/page.h | 1 +
>> arch/arm64/kvm/mmu.c | 2 +-
>> arch/arm64/mm/init.c | 6 ++++++
>> arch/arm64/mm/ioremap.c | 4 ++--
>> arch/arm64/mm/mmu.c | 2 +-
>> 6 files changed, 12 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 0aabc3be9a75..7e77fdf71b9d 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -351,7 +351,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>>
>> #define virt_addr_valid(addr) ({ \
>> __typeof__(addr) __addr = __tag_reset(addr); \
>> - __is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr)); \
>> + __is_lm_address(__addr) && pfn_is_memory(virt_to_pfn(__addr)); \
>> })
>>
>> void dump_mem_limit(void);
>> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
>> index 012cffc574e8..32b485bcc6ff 100644
>> --- a/arch/arm64/include/asm/page.h
>> +++ b/arch/arm64/include/asm/page.h
>> @@ -38,6 +38,7 @@ void copy_highpage(struct page *to, struct page *from);
>> typedef struct page *pgtable_t;
>>
>> extern int pfn_valid(unsigned long);
>> +extern int pfn_is_memory(unsigned long);
>>
>> #include <asm/memory.h>
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 8711894db8c2..ad2ea65a3937 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -85,7 +85,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>>
>> static bool kvm_is_device_pfn(unsigned long pfn)
>> {
>> - return !pfn_valid(pfn);
>> + return !pfn_is_memory(pfn);
>> }
>>
>> /*
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 3685e12aba9b..258b1905ed4a 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -258,6 +258,12 @@ int pfn_valid(unsigned long pfn)
>> }
>> EXPORT_SYMBOL(pfn_valid);
>>
>> +int pfn_is_memory(unsigned long pfn)
>> +{
>> + return memblock_is_map_memory(PFN_PHYS(pfn));
>> +}
>> +EXPORT_SYMBOL(pfn_is_memory);
>> +
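For context, memblock_is_map_memory() boils down to something like the
below (paraphrased from mm/memblock.c from memory, so treat it as a
sketch):

bool memblock_is_map_memory(phys_addr_t addr)
{
	/* Look up the memblock.memory region covering this address. */
	int i = memblock_search(&memblock.memory, addr);

	if (i == -1)
		return false;
	/* Mapped in the linear map iff not marked MEMBLOCK_NOMAP. */
	return !memblock_is_nomap(&memblock.memory.regions[i]);
}

IOW, pfn_is_memory() returns true iff the pfn is covered by
memblock.memory and not marked MEMBLOCK_NOMAP.
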
>
> Should this not be generic, though? There is nothing platform- or
> arm64-specific in here. Also, given that pfn_is_memory() just indicates
> that the pfn is mapped in the linear map, should it not be renamed
> pfn_is_linear_memory() instead? Regardless, it's fine either way.
TBH, I dislike a (generic) pfn_is_memory(). It feels like we're mixing
concepts: NOMAP memory vs. !NOMAP memory -- even NOMAP is some kind of
memory, after all. pfn_is_map_memory() would be more expressive, although
still sub-optimal.

We'd actually want some kind of arm64-specific pfn_is_system_memory(), or
the inverse pfn_is_device_memory() -- to be improved.
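As a sketch of what I mean (simply renaming the helper introduced in
this patch; pfn_is_map_memory() is only a suggestion, nothing that
exists today):

int pfn_is_map_memory(unsigned long pfn)
{
	return memblock_is_map_memory(PFN_PHYS(pfn));
}
EXPORT_SYMBOL(pfn_is_map_memory);

static bool kvm_is_device_pfn(unsigned long pfn)
{
	return !pfn_is_map_memory(pfn);
}

That way the name directly matches the memblock primitive it wraps.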
--
Thanks,
David / dhildenb