Message-ID: <20231220051855.47547-1-sunnanyong@huawei.com>
Date: Wed, 20 Dec 2023 13:18:52 +0800
From: Nanyong Sun <sunnanyong@...wei.com>
To: <catalin.marinas@....com>, <will@...nel.org>, <mike.kravetz@...cle.com>,
<muchun.song@...ux.dev>, <akpm@...ux-foundation.org>,
<anshuman.khandual@....com>
CC: <willy@...radead.org>, <wangkefeng.wang@...wei.com>,
<sunnanyong@...wei.com>, <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: [PATCH v2 0/3] A Solution to Re-enable hugetlb vmemmap optimize
HVO was previously disabled on arm64 [1] due to the lack of the necessary
BBM (break-before-make) logic when changing page tables.
This patch set fixes that by adding the required BBM sequence when
changing page table entries, and by adding vmemmap page fault handling to
fix up kernel address faults if the vmemmap is accessed concurrently.
I have tested this patch set by accessing the vmemmap address
concurrently while doing BBM, and the vmemmap fault handler recovers
correctly. Also tested with 2/3/4 page table levels and 4K/64K page
sizes, and all works well.
V2:
This version mainly changes some naming and uses more appropriate helper
functions to make the code cleaner, according to review comments from
Muchun Song and Kefeng Wang.
[1] commit 060a2c92d1b6 ("arm64: mm: hugetlb: Disable HUGETLB_PAGE_OPTIMIZE_VMEMMAP")
Nanyong Sun (3):
mm: HVO: introduce helper function to update and flush pgtable
arm64: mm: HVO: support BBM of vmemmap pgtable safely
arm64: mm: Re-enable OPTIMIZE_HUGETLB_VMEMMAP
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/esr.h | 4 ++
arch/arm64/include/asm/mmu.h | 20 +++++++++
arch/arm64/mm/fault.c | 78 ++++++++++++++++++++++++++++++++++--
arch/arm64/mm/mmu.c | 28 +++++++++++++
mm/hugetlb_vmemmap.c | 55 +++++++++++++++++++------
6 files changed, 171 insertions(+), 15 deletions(-)
--
2.25.1