Message-ID: <fae235d5-78b6-87aa-ed3f-1a908d61abf4@c-s.fr>
Date: Thu, 6 Feb 2020 07:18:02 +0100
From: Christophe Leroy <christophe.leroy@....fr>
To: Leonardo Bras <leonardo@...ux.ibm.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Arnd Bergmann <arnd@...db.de>,
Andrew Morton <akpm@...ux-foundation.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Nicholas Piggin <npiggin@...il.com>,
Steven Price <steven.price@....com>,
Robin Murphy <robin.murphy@....com>,
Mahesh Salgaonkar <mahesh@...ux.vnet.ibm.com>,
Balbir Singh <bsingharora@...il.com>,
Reza Arbab <arbab@...ux.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Allison Randal <allison@...utok.net>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Michal Suchanek <msuchanek@...e.de>
Cc: linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
kvm-ppc@...r.kernel.org, linux-arch@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v6 07/11] powerpc/kvm/e500: Use functions to track
lockless pgtbl walks
On 06/02/2020 at 04:08, Leonardo Bras wrote:
> Applies the new functions used for tracking lockless pgtable walks in
> kvmppc_e500_shadow_map().
>
> Fixes the place where local_irq_restore() is called: previously, if ptep
> was NULL, local_irq_restore() would never be called.
>
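Indeed, in the current code (visible in the hunk below) both
local_irq_restore(flags) calls sit inside the "if (ptep)" block, so a NULL
ptep leaves IRQs disabled:

	local_irq_save(flags);
	ptep = find_linux_pte(pgdir, hva, NULL, NULL);
	if (ptep) {
		/* ... both local_irq_restore(flags) calls are in here ... */
	}
	/* if ptep was NULL, execution continues with IRQs still disabled */
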
> local_irq_{save,restore} is already inside {begin,end}_lockless_pgtbl_walk,
> so there is no need to repeat it here.
>
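For context, per the sentence above begin/end_lockless_pgtbl_walk()
essentially wrap local_irq_save()/local_irq_restore(), i.e. conceptually
something like the sketch below (simplified; the real helpers introduced
earlier in the series may do more, e.g. account for walkers so that
invalidations can be serialized against them):

	static inline unsigned long begin_lockless_pgtbl_walk(void)
	{
		unsigned long irq_mask;

		/* disable IRQs, as the local_irq_save() it replaces did */
		local_irq_save(irq_mask);
		return irq_mask;
	}

	static inline void end_lockless_pgtbl_walk(unsigned long irq_mask)
	{
		/* restore the IRQ state returned by begin_lockless_pgtbl_walk() */
		local_irq_restore(irq_mask);
	}
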
> The variable that saves the IRQ mask was renamed from flags to irq_mask so
> it doesn't lose its meaning now that it is no longer passed directly to the
> local_irq_* functions.
>
> Signed-off-by: Leonardo Bras <leonardo@...ux.ibm.com>
> ---
> arch/powerpc/kvm/e500_mmu_host.c | 9 +++++----
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
> index 425d13806645..3dcf11f77256 100644
> --- a/arch/powerpc/kvm/e500_mmu_host.c
> +++ b/arch/powerpc/kvm/e500_mmu_host.c
> @@ -336,7 +336,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> pte_t *ptep;
> unsigned int wimg = 0;
> pgd_t *pgdir;
> - unsigned long flags;
> + unsigned long irq_mask;
>
> /* used to check for invalidations in progress */
> mmu_seq = kvm->mmu_notifier_seq;
> @@ -473,7 +473,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> * We are holding kvm->mmu_lock so a notifier invalidate
> * can't run hence pfn won't change.
> */
> - local_irq_save(flags);
> + irq_mask = begin_lockless_pgtbl_walk();
> ptep = find_linux_pte(pgdir, hva, NULL, NULL);
> if (ptep) {
> pte_t pte = READ_ONCE(*ptep);
> @@ -481,15 +481,16 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> if (pte_present(pte)) {
> wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) &
> MAS2_WIMGE_MASK;
> - local_irq_restore(flags);
> } else {
> - local_irq_restore(flags);
> + end_lockless_pgtbl_walk(irq_mask);
> pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n",
> __func__, (long)gfn, pfn);
> ret = -EINVAL;
> goto out;
> }
> }
> + end_lockless_pgtbl_walk(irq_mask);
> +
I don't really like unbalanced begin/end.
Something like the following would be cleaner:
	begin_lockless_pgtbl_walk()
	ptep = find()
	if (ptep) {
		pte = READ_ONCE()
		if (pte_present(pte))
			wimg =
		else
			ret = -EINVAL;
	}
	end_lockless_pgtbl_walk()
	if (ret) {
		pr_err_rate...()
		goto out;
	}
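
Spelled out against the code in this hunk, that would be roughly (untested,
just to illustrate the shape):

	irq_mask = begin_lockless_pgtbl_walk();
	ptep = find_linux_pte(pgdir, hva, NULL, NULL);
	if (ptep) {
		pte_t pte = READ_ONCE(*ptep);

		/* read the PTE fields while the walk is protected */
		if (pte_present(pte))
			wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) &
			       MAS2_WIMGE_MASK;
		else
			ret = -EINVAL;
	}
	end_lockless_pgtbl_walk(irq_mask);

	/* report the error with IRQs enabled again */
	if (ret) {
		pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n",
				   __func__, (long)gfn, pfn);
		goto out;
	}
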
> kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
>
> kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
>
Christophe