Message-ID: <YsWtCLIG2qKETqmq@arm.com>
Date:   Wed, 6 Jul 2022 16:40:56 +0100
From:   Catalin Marinas <catalin.marinas@....com>
To:     "guanghui.fgh" <guanghuifeng@...ux.alibaba.com>
Cc:     Mike Rapoport <rppt@...nel.org>, Will Deacon <will@...nel.org>,
        Ard Biesheuvel <ardb@...nel.org>,
        baolin.wang@...ux.alibaba.com, akpm@...ux-foundation.org,
        david@...hat.com, jianyong.wu@....com, james.morse@....com,
        quic_qiancai@...cinc.com, christophe.leroy@...roup.eu,
        jonathan@...ek.ca, mark.rutland@....com,
        thunder.leizhen@...wei.com, anshuman.khandual@....com,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        geert+renesas@...der.be, linux-mm@...ck.org,
        yaohongbo@...ux.alibaba.com, alikernel-developer@...ux.alibaba.com
Subject: Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance
 degradation

On Wed, Jul 06, 2022 at 11:18:22PM +0800, guanghui.fgh wrote:
> On 2022/7/6 21:54, Mike Rapoport wrote:
> > One thing I can think of is to only remap the crash kernel memory if it is
> > a part of an allocation that exactly fits into one or more PUDs.
> > 
> > Say, in reserve_crashkernel() we try the memblock_phys_alloc() with
> > PUD_SIZE as alignment and size rounded up to PUD_SIZE. If this allocation
> > succeeds, we remap the entire area that now contains only memory allocated
> > in reserve_crashkernel() and free the extra memory after remapping is done.
> > If the large allocation fails, we fall back to the original size and
> > alignment and don't allow unmapping crash kernel memory in
> > arch_kexec_protect_crashkres().
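
A minimal sketch of that allocation strategy, assuming a hypothetical
remap_crashkernel() helper for the PTE-level remapping (this is not the
actual reserve_crashkernel() code):

#include <linux/memblock.h>
#include <linux/sizes.h>
#include <asm/pgtable.h>

static phys_addr_t crash_base;
static bool crash_can_unmap;	/* checked in arch_kexec_protect_crashkres() */

static void __init reserve_crashkernel_sketch(phys_addr_t crash_size)
{
	phys_addr_t pud_size = ALIGN(crash_size, PUD_SIZE);

	/* Try an allocation that exactly covers one or more PUDs. */
	crash_base = memblock_phys_alloc(pud_size, PUD_SIZE);
	if (crash_base) {
		/* Remap the whole PUD-aligned area at PTE level... */
		remap_crashkernel(crash_base, pud_size);	/* hypothetical */
		/* ...then give back the tail we over-allocated. */
		if (pud_size > crash_size)
			memblock_phys_free(crash_base + crash_size,
					   pud_size - crash_size);
		crash_can_unmap = true;
		return;
	}

	/* Fall back to the original size/alignment; the block mapping
	 * stays, so arch_kexec_protect_crashkres() must not unmap. */
	crash_base = memblock_phys_alloc(crash_size, SZ_2M);
	crash_can_unmap = false;
}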
> 
> There is another approach. I think we should use the v3 patch
> (similar, but it needs some changes):
> 
> 1. Walk the crashkernel block/section page tables
> [[[while keeping the original block/section mapping valid]]],
> rebuild the PTE-level mapping for the crashkernel memory, and
> rebuild the left and right margin memory (which shares the same
> block/section mapping but lies outside the crashkernel memory)
> with block/section mappings.
> 
> 2. 'Replace' the original block/section mappings with the newly
> built mappings, iteratively.
> 
> With this method, all of the memory mappings stay valid the whole
> time.
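
In rough pseudocode, the proposal amounts to something like this per
block entry (build_pte_level_copy() is a hypothetical helper; the swap
on the live entry is the step discussed below):

/* Sketch of the proposed per-entry 'replace'. build_pte_level_copy()
 * would allocate a PTE table reproducing the old block mapping,
 * PTE-granular inside the crashkernel range. */
static void replace_block_mapping(pmd_t *pmdp, unsigned long addr)
{
	pte_t *new_table = build_pte_level_copy(pmdp, addr);	/* hypothetical */

	/* Swap the live block entry for the new table entry with no
	 * invalid window in between. Without FEAT_BBM, the old (block)
	 * and new (table) TLB entries for the same VA may conflict. */
	set_pmd(pmdp, __pmd(__pa(new_table) | PMD_TYPE_TABLE));
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
}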

As I already commented on one of your previous patches, this is not
allowed by the architecture. If FEAT_BBM is implemented (ARMv8.4 I
think), the worst that can happen is a TLB conflict abort and the
handler should invalidate the TLBs and restart the faulting instruction,
assuming the handler won't try to access the same conflicting virtual
address. Prior to FEAT_BBM, that's not possible as the architecture does
not describe a precise behaviour of conflicting TLB entries (you might
as well get the TLB output of multiple entries being or'ed together).
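
For contrast, the sequence that is architecturally safe without
FEAT_BBM is break-before-make, which unavoidably leaves a window where
the entry is invalid and any access to the range faults, i.e. exactly
the window the proposal above tries to avoid. A simplified sketch for
one PMD entry:

static void pmd_break_before_make(pmd_t *pmdp, unsigned long addr,
				  pmd_t new_entry)
{
	pmd_clear(pmdp);				/* 1. break: entry now invalid */
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);	/* 2. TLBI + DSB */
	set_pmd(pmdp, new_entry);			/* 3. make: install new entry */
}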

-- 
Catalin
