Date: Fri, 8 Jul 2022 01:02:37 +0800
From: "guanghui.fgh" <guanghuifeng@...ux.alibaba.com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: Mike Rapoport <rppt@...nel.org>, Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>,
baolin.wang@...ux.alibaba.com, akpm@...ux-foundation.org,
david@...hat.com, jianyong.wu@....com, james.morse@....com,
quic_qiancai@...cinc.com, christophe.leroy@...roup.eu,
jonathan@...ek.ca, mark.rutland@....com,
thunder.leizhen@...wei.com, anshuman.khandual@....com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
geert+renesas@...der.be, linux-mm@...ck.org,
yaohongbo@...ux.alibaba.com, alikernel-developer@...ux.alibaba.com
Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance
degradation
Thanks.
On 2022/7/6 23:40, Catalin Marinas wrote:
> On Wed, Jul 06, 2022 at 11:18:22PM +0800, guanghui.fgh wrote:
>> On 2022/7/6 21:54, Mike Rapoport wrote:
>>> One thing I can think of is to only remap the crash kernel memory if it is
>>> a part of an allocation that exactly fits into one or more PUDs.
>>>
>>> Say, in reserve_crashkernel() we try the memblock_phys_alloc() with
>>> PUD_SIZE as alignment and size rounded up to PUD_SIZE. If this allocation
>>> succeeds, we remap the entire area that now contains only memory allocated
>>> in reserve_crashkernel() and free the extra memory after remapping is done.
>>> If the large allocation fails, we fall back to the original size and
>>> alignment and don't allow unmapping crash kernel memory in
>>> arch_kexec_protect_crashkres().
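
(If I understand the suggestion correctly, it would look roughly like
the untested sketch below, using the v5.19-era names from
arch/arm64/mm/init.c and mm/memblock.c. remap_crashkernel_range() and
the crashk_can_unmap flag are made up for illustration; the flag would
be consulted later by arch_kexec_protect_crashkres():)

	phys_addr_t size = ALIGN(crash_size, PUD_SIZE);
	phys_addr_t base;

	base = memblock_phys_alloc_range(size, PUD_SIZE, 0, crash_max);
	if (base) {
		/* Everything in [base, base + size) is ours: remap it
		 * at pte granularity first, then return the unused
		 * tail to memblock. */
		remap_crashkernel_range(base, size);	/* hypothetical */
		memblock_phys_free(base + crash_size, size - crash_size);
		crashk_can_unmap = true;
	} else {
		/* Fall back to the original size/alignment, keep the
		 * block mappings and never unmap the region later. */
		base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
						 0, crash_max);
		crashk_can_unmap = false;
	}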
>>
>> There is a new method.
>> I think we should use patch v3 (similar, but with some additional
>> changes):
>>
>> 1. We can walk the crashkernel block/section page table,
>> [[[keeping the original block/section mapping valid]]], then
>> rebuild the pte-level page mapping for the crashkernel memory, and
>> rebuild the left and right margin memory (memory that shares the same
>> block/section mapping but lies outside the crashkernel range) with
>> block/section mappings.
>>
>> 2. 'Replace' the original block/section mapping with the newly built
>> mapping, iteratively.
>>
>> With this method, all memory mappings stay valid the whole time.
>
> As I already commented on one of your previous patches, this is not
> allowed by the architecture. If FEAT_BBM is implemented (ARMv8.4 I
> think), the worst that can happen is a TLB conflict abort and the
> handler should invalidate the TLBs and restart the faulting instruction,
> assuming the handler won't try to access the same conflicting virtual
> address. Prior to FEAT_BBM, that's not possible as the architecture does
> not describe a precise behaviour of conflicting TLB entries (you might
> as well get the TLB output of multiple entries being or'ed together).
>
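
(As an aside, whether FEAT_BBM is present can be probed at runtime from
the sanitised ID registers. A sketch, assuming the v5.19-era field
names in arch/arm64/include/asm/sysreg.h; the helper name is made up:)

	static bool __init bbm_implemented(void)
	{
		u64 mmfr2 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);

		/* 0 = FEAT_BBM absent, i.e. strict break-before-make only */
		return cpuid_feature_extract_unsigned_field(mmfr2,
					ID_AA64MMFR2_BBM_SHIFT) > 0;
	}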
I think there is another way to handle it:

1. We can rebuild the crashkernel memory mapping first, but
[[[without changing the original linear mapping]]].

2. Afterwards, we can reuse idmap_pg_dir and switch to it, and use it
to change the linear mapping in a way that complies with the TLB
break-before-make (BBM) requirement (see the sketch below).
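
Something like the following untested sketch. The tmp_pg_dir handling
mirrors what kasan_init() already does, and cpu_replace_ttbr1()
internally bounces through idmap_pg_dir while it installs the new
table; rebuild_crashkernel_ptes() is a placeholder for installing the
pte-level tables prepared in step 1:

	/*
	 * Running on a temporary copy of swapper_pg_dir lets us edit
	 * the real linear map without pulling it out from under our
	 * own feet (the same trick kasan_init() uses).
	 */
	static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);

	static void __init remap_crashkernel(void)
	{
		memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
		dsb(ishst);
		cpu_replace_ttbr1(lm_alias(tmp_pg_dir));

		rebuild_crashkernel_ptes();	/* install step-1 tables */
		flush_tlb_all();

		cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
	}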