Message-ID: <2c1b9376-3997-aa7b-d5f3-b04da985c260@huawei.com>
Date: Fri, 18 Jun 2021 09:52:36 +0800
From: "wangyanan (Y)" <wangyanan55@...wei.com>
To: Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>
CC: Quentin Perret <qperret@...gle.com>,
Alexandru Elisei <alexandru.elisei@....com>,
<kvmarm@...ts.cs.columbia.edu>,
<linux-arm-kernel@...ts.infradead.org>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>,
Catalin Marinas <catalin.marinas@....com>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Gavin Shan <gshan@...hat.com>, <wanghaibin.wang@...wei.com>,
<zhukeqian1@...wei.com>, <yuzenghui@...wei.com>
Subject: Re: [PATCH v7 1/4] KVM: arm64: Introduce two cache maintenance
callbacks
On 2021/6/17 22:20, Marc Zyngier wrote:
> On Thu, 17 Jun 2021 13:38:37 +0100,
> Will Deacon <will@...nel.org> wrote:
>> On Thu, Jun 17, 2021 at 06:58:21PM +0800, Yanan Wang wrote:
>>> To prepare for performing CMOs for guest stage-2 in the fault handlers
>>> in pgtable.c, introduce two cache maintenance callbacks in struct
>>> kvm_pgtable_mm_ops. We also adjust the alignment of the existing
>>> comments, but make no change to their content.
>>>
>>> Signed-off-by: Yanan Wang <wangyanan55@...wei.com>
>>> ---
>>> arch/arm64/include/asm/kvm_pgtable.h | 42 +++++++++++++++++-----------
>>> 1 file changed, 25 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
>>> index c3674c47d48c..b6ce34aa44bb 100644
>>> --- a/arch/arm64/include/asm/kvm_pgtable.h
>>> +++ b/arch/arm64/include/asm/kvm_pgtable.h
>>> @@ -27,23 +27,29 @@ typedef u64 kvm_pte_t;
>>>
>>> /**
>>> * struct kvm_pgtable_mm_ops - Memory management callbacks.
>>> - * @zalloc_page: Allocate a single zeroed memory page. The @arg parameter
>>> - * can be used by the walker to pass a memcache. The
>>> - * initial refcount of the page is 1.
>>> - * @zalloc_pages_exact: Allocate an exact number of zeroed memory pages. The
>>> - * @size parameter is in bytes, and is rounded-up to the
>>> - * next page boundary. The resulting allocation is
>>> - * physically contiguous.
>>> - * @free_pages_exact: Free an exact number of memory pages previously
>>> - * allocated by zalloc_pages_exact.
>>> - * @get_page: Increment the refcount on a page.
>>> - * @put_page: Decrement the refcount on a page. When the refcount
>>> - * reaches 0 the page is automatically freed.
>>> - * @page_count: Return the refcount of a page.
>>> - * @phys_to_virt: Convert a physical address into a virtual address mapped
>>> - * in the current context.
>>> - * @virt_to_phys: Convert a virtual address mapped in the current context
>>> - * into a physical address.
>>> + * @zalloc_page: Allocate a single zeroed memory page.
>>> + * The @arg parameter can be used by the walker
>>> + * to pass a memcache. The initial refcount of
>>> + * the page is 1.
>>> + * @zalloc_pages_exact: Allocate an exact number of zeroed memory pages.
>>> + * The @size parameter is in bytes, and is rounded
>>> + * up to the next page boundary. The resulting
>>> + * allocation is physically contiguous.
>>> + * @free_pages_exact: Free an exact number of memory pages previously
>>> + * allocated by zalloc_pages_exact.
>>> + * @get_page: Increment the refcount on a page.
>>> + * @put_page: Decrement the refcount on a page. When the
>>> + * refcount reaches 0 the page is automatically
>>> + * freed.
>>> + * @page_count: Return the refcount of a page.
>>> + * @phys_to_virt: Convert a physical address into a virtual address
>>> + * mapped in the current context.
>>> + * @virt_to_phys: Convert a virtual address mapped in the current
>>> + * context into a physical address.
>>> + * @clean_invalidate_dcache: Clean and invalidate the data cache for the
>>> + * specified memory address range.
>> This should probably be explicit about whether this is to the PoU/PoC/PoP.
> Indeed. I can fix that locally if there is nothing else that requires
> adjusting.
Will be grateful!
Thanks,
Yanan
>
> M.
>
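[Editor's note] For readers following along without the full series in
front of them, a minimal sketch of the two callbacks under discussion.
The clean_invalidate_dcache name and its doc comment come from the
quoted hunk; the second callback name (invalidate_icache), the PoC/PoU
annotations, and the helper below are assumptions drawn from the patch
subject and Will's review comment, not from text quoted in this thread:

#include <linux/types.h>

/* Sketch of the two new cache maintenance members that the patch adds
 * to struct kvm_pgtable_mm_ops (names assumed, see note above). */
struct kvm_s2_cmo_ops_sketch {
	/* Clean and invalidate the data cache for [addr, addr + size),
	 * presumably to the PoC, per the review comment above. */
	void	(*clean_invalidate_dcache)(void *addr, size_t size);

	/* Invalidate the instruction cache for [addr, addr + size),
	 * presumably to the PoU. */
	void	(*invalidate_icache)(void *addr, size_t size);
};

/* Hypothetical caller, showing how a stage-2 fault handler in
 * pgtable.c might invoke the callbacks once later patches in the
 * series move the CMOs there. Real code would gate each call on the
 * new mapping's attributes (cacheable, executable) rather than doing
 * both unconditionally. */
static inline void stage2_cmo_sketch(struct kvm_s2_cmo_ops_sketch *ops,
				     void *va, size_t granule)
{
	if (ops->clean_invalidate_dcache)
		ops->clean_invalidate_dcache(va, granule);
	if (ops->invalidate_icache)
		ops->invalidate_icache(va, granule);
}

Making the target point of the maintenance explicit in the kerneldoc
(e.g. "... to the PoC") is the fix-up Marc offered to apply locally
above.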