Message-ID: <CA+EHjTyW+LP=UmwDP+egbPzpz2vxFpbOMgXi=dOt15j8wfLxWg@mail.gmail.com>
Date:   Fri, 18 Jun 2021 09:59:09 +0100
From:   Fuad Tabba <tabba@...gle.com>
To:     "wangyanan (Y)" <wangyanan55@...wei.com>
Cc:     Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
        kvm@...r.kernel.org, Catalin Marinas <catalin.marinas@....com>,
        linux-kernel@...r.kernel.org, kvmarm@...ts.cs.columbia.edu,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v7 1/4] KVM: arm64: Introduce two cache maintenance callbacks

Hi,

On Fri, Jun 18, 2021 at 2:52 AM wangyanan (Y) <wangyanan55@...wei.com> wrote:
>
>
>
> On 2021/6/17 22:20, Marc Zyngier wrote:
> > On Thu, 17 Jun 2021 13:38:37 +0100,
> > Will Deacon <will@...nel.org> wrote:
> >> On Thu, Jun 17, 2021 at 06:58:21PM +0800, Yanan Wang wrote:
> >>> To prepare for performing CMOs for guest stage-2 in the fault handlers
> >>> in pgtable.c, introduce two cache maintenance callbacks in struct
> >>> kvm_pgtable_mm_ops. We also adjust the comment alignment of the
> >>> existing entries, with no change to their content.
> >>>
> >>> Signed-off-by: Yanan Wang <wangyanan55@...wei.com>
> >>> ---
> >>>   arch/arm64/include/asm/kvm_pgtable.h | 42 +++++++++++++++++-----------
> >>>   1 file changed, 25 insertions(+), 17 deletions(-)
> >>>
> >>> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> >>> index c3674c47d48c..b6ce34aa44bb 100644
> >>> --- a/arch/arm64/include/asm/kvm_pgtable.h
> >>> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> >>> @@ -27,23 +27,29 @@ typedef u64 kvm_pte_t;
> >>>
> >>>   /**
> >>>    * struct kvm_pgtable_mm_ops - Memory management callbacks.
> >>> - * @zalloc_page:   Allocate a single zeroed memory page. The @arg parameter
> >>> - *                 can be used by the walker to pass a memcache. The
> >>> - *                 initial refcount of the page is 1.
> >>> - * @zalloc_pages_exact:    Allocate an exact number of zeroed memory pages. The
> >>> - *                 @size parameter is in bytes, and is rounded-up to the
> >>> - *                 next page boundary. The resulting allocation is
> >>> - *                 physically contiguous.
> >>> - * @free_pages_exact:      Free an exact number of memory pages previously
> >>> - *                 allocated by zalloc_pages_exact.
> >>> - * @get_page:              Increment the refcount on a page.
> >>> - * @put_page:              Decrement the refcount on a page. When the refcount
> >>> - *                 reaches 0 the page is automatically freed.
> >>> - * @page_count:            Return the refcount of a page.
> >>> - * @phys_to_virt:  Convert a physical address into a virtual address mapped
> >>> - *                 in the current context.
> >>> - * @virt_to_phys:  Convert a virtual address mapped in the current context
> >>> - *                 into a physical address.
> >>> + * @zalloc_page:           Allocate a single zeroed memory page.
> >>> + *                         The @arg parameter can be used by the walker
> >>> + *                         to pass a memcache. The initial refcount of
> >>> + *                         the page is 1.
> >>> + * @zalloc_pages_exact:            Allocate an exact number of zeroed memory pages.
> >>> + *                         The @size parameter is in bytes, and is rounded
> >>> + *                         up to the next page boundary. The resulting
> >>> + *                         allocation is physically contiguous.
> >>> + * @free_pages_exact:              Free an exact number of memory pages previously
> >>> + *                         allocated by zalloc_pages_exact.
> >>> + * @get_page:                      Increment the refcount on a page.
> >>> + * @put_page:                      Decrement the refcount on a page. When the
> >>> + *                         refcount reaches 0 the page is automatically
> >>> + *                         freed.
> >>> + * @page_count:                    Return the refcount of a page.
> >>> + * @phys_to_virt:          Convert a physical address into a virtual address
> >>> + *                         mapped in the current context.
> >>> + * @virt_to_phys:          Convert a virtual address mapped in the current
> >>> + *                         context into a physical address.
> >>> + * @clean_invalidate_dcache:       Clean and invalidate the data cache for the
> >>> + *                         specified memory address range.
> >> This should probably be explicit about whether this is to the PoU/PoC/PoP.
> > Indeed. I can fix that locally if there is nothing else that requires
> > adjusting.
> I would be grateful!

Sorry, I missed the v7 update. One comment here: the naming used in the
patch series I mentioned shortens 'invalidate' to 'inval', in case you
want it to be less of a mouthful:
https://lore.kernel.org/linux-arm-kernel/20210524083001.2586635-19-tabba@google.com/
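
For illustration, here is a rough sketch of how the two callbacks and
their kernel-doc could look with the shortened naming and with the point
of coherency/unification spelled out explicitly (the member names and
wording below are my assumption based on the series above, not
necessarily what will be applied):

/*
 * Sketch only; assumes the surrounding context of
 * arch/arm64/include/asm/kvm_pgtable.h, where size_t is available.
 *
 * @dcache_clean_inval_poc:	Clean and invalidate the data cache to the
 *				PoC for the specified memory address range.
 * @icache_inval_pou:		Invalidate the instruction cache to the PoU
 *				for the specified memory address range.
 */
struct kvm_pgtable_mm_ops {
	/* ... existing callbacks elided ... */
	void	(*dcache_clean_inval_poc)(void *addr, size_t size);
	void	(*icache_inval_pou)(void *addr, size_t size);
};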

Otherwise:
Reviewed-by: Fuad Tabba <tabba@...gle.com>

Thanks!
/fuad



>
> Thanks,
> Yanan
> >
> >       M.
> >
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@...ts.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
