Message-Id: <874lu7r6qi.fsf@skywalker.in.ibm.com>
Date: Thu, 20 Jul 2017 11:26:53 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: Ram Pai <linuxram@...ibm.com>, linuxppc-dev@...ts.ozlabs.org,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
linux-mm@...ck.org, x86@...nel.org, linux-doc@...r.kernel.org,
linux-kselftest@...r.kernel.org
Cc: benh@...nel.crashing.org, paulus@...ba.org, mpe@...erman.id.au,
khandual@...ux.vnet.ibm.com, bsingharora@...il.com,
dave.hansen@...el.com, hbabu@...ibm.com, linuxram@...ibm.com,
arnd@...db.de, akpm@...ux-foundation.org, corbet@....net,
mingo@...hat.com, mhocko@...nel.org
Subject: Re: [RFC v6 03/62] powerpc: introduce pte_set_hash_slot() helper
Ram Pai <linuxram@...ibm.com> writes:
> Introduce pte_set_hash_slot(). It sets the (H_PAGE_F_SECOND|H_PAGE_F_GIX)
> bits at the appropriate location in the PTE for the 4K PTE format. For
> the 64K PTE format, it sets the bits in the second half of the PTE. Though
> the implementation for the former needs only the slot parameter, it
> takes some additional parameters to keep the prototype consistent.
>
> This function will be handy as we work towards rearranging the
> bits in later patches.
>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>
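
As a side note for readers following the series: below is a minimal,
self-contained sketch of the 4K behaviour and the intended caller
pattern. The DEMO_* constants are placeholders made up for
illustration, not the kernel's real H_PAGE_F_SECOND/H_PAGE_F_GIX
definitions, and demo_set_hash_slot_4k() only models the helper added
by this patch.

/*
 * Illustrative model only: DEMO_* values are placeholders, not the
 * kernel's real H_PAGE_* bit positions.
 */
#include <stdio.h>

#define DEMO_F_GIX_SHIFT	56
#define DEMO_F_GIX		(0x7UL << DEMO_F_GIX_SHIFT)		/* 3-bit group index */
#define DEMO_F_SECOND		(0x1UL << (DEMO_F_GIX_SHIFT + 3))	/* secondary hash bit */

/*
 * Mirrors the 4K pte_set_hash_slot(): a single shift + mask places both
 * the group index and (assuming bit 3 of slot marks the secondary hash
 * group, as the mask layout suggests) the secondary-hash flag.
 */
static unsigned long demo_set_hash_slot_4k(unsigned long slot)
{
	return (slot << DEMO_F_GIX_SHIFT) & (DEMO_F_SECOND | DEMO_F_GIX);
}

int main(void)
{
	unsigned long new_pte = 0;
	unsigned long slot = 0xb;	/* secondary group, index 3 */

	/*
	 * Caller pattern: OR the returned bits into the pte image before
	 * committing it. The 64K variant returns 0, so the same caller
	 * code works for both formats.
	 */
	new_pte |= demo_set_hash_slot_4k(slot);
	printf("pte slot bits: 0x%lx\n", new_pte);
	return 0;
}
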
> Signed-off-by: Ram Pai <linuxram@...ibm.com>
> ---
> arch/powerpc/include/asm/book3s/64/hash-4k.h | 15 +++++++++++++++
> arch/powerpc/include/asm/book3s/64/hash-64k.h | 25 +++++++++++++++++++++++++
> 2 files changed, 40 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
> index d2cf949..dc153c6 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
> @@ -53,6 +53,21 @@ static inline int hash__hugepd_ok(hugepd_t hpd)
> }
> #endif
>
> +/*
> + * The 4k pte format is different from the 64k pte format. Saving the
> + * hash slot is just a matter of returning the pte bits that need to
> + * be modified. On 64k pte, things are a little more involved and
> + * hence the helper needs more parameters to accomplish the same.
> + * However, we want to abstract this away from the caller by keeping
> + * the prototype consistent across the two formats.
> + */
> +static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
> + unsigned int subpg_index, unsigned long slot)
> +{
> + return (slot << H_PAGE_F_GIX_SHIFT) &
> + (H_PAGE_F_SECOND | H_PAGE_F_GIX);
> +}
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>
> static inline char *get_hpte_slot_array(pmd_t *pmdp)
> diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> index c281f18..89ef5a9 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> @@ -67,6 +67,31 @@ static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
> return ((rpte.hidx >> (index<<2)) & 0xfUL);
> }
>
> +/*
> + * Commit the hash slot and return the pte bits that need to be modified.
> + * The caller is expected to modify the pte bits accordingly and
> + * commit the pte to memory.
> + */
> +static inline unsigned long pte_set_hash_slot(pte_t *ptep, real_pte_t rpte,
> + unsigned int subpg_index, unsigned long slot)
> +{
> + unsigned long *hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
> +
> + rpte.hidx &= ~(0xfUL << (subpg_index << 2));
> + *hidxp = rpte.hidx | (slot << (subpg_index << 2));
> + /*
> + * Commit the hidx bits to memory before returning.
> +	 * Anyone reading the pte must ensure the hidx bits are
> +	 * read only after reading the pte, by using the
> +	 * read-side barrier smp_rmb(); __real_pte() can
> +	 * help ensure that.
> + */
> + smp_wmb();
> +
> + /* no pte bits to be modified, return 0x0UL */
> + return 0x0UL;
> +}
> +
> #define __rpte_to_pte(r) ((r).pte)
> extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
> /*
> --
> 1.7.1
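
One more observation on the smp_wmb(): it only buys ordering if the
reader loads the pte before the hidx word with a matching smp_rmb(),
which is what the comment points at with __real_pte(). A simplified,
stand-alone model of the 64K write side is below; the names, the
struct, and DEMO_PTRS_PER_PTE are stand-ins I made up, and the barrier
is only noted in a comment.

/*
 * Simplified model of the 64K write side. DEMO_PTRS_PER_PTE and the
 * struct below are stand-ins for the real PTRS_PER_PTE and real_pte_t.
 */
#include <stdio.h>

#define DEMO_PTRS_PER_PTE	4	/* placeholder value */

struct demo_real_pte {
	unsigned long pte;
	unsigned long hidx;
};

/*
 * Mirrors the 64K pte_set_hash_slot(): clear the 4-bit field for this
 * subpage and publish the new slot in the second half of the PTE page.
 */
static unsigned long demo_set_hash_slot_64k(unsigned long *ptep,
					    struct demo_real_pte rpte,
					    unsigned int subpg_index,
					    unsigned long slot)
{
	unsigned long *hidxp = ptep + DEMO_PTRS_PER_PTE;

	rpte.hidx &= ~(0xfUL << (subpg_index << 2));
	*hidxp = rpte.hidx | (slot << (subpg_index << 2));
	/*
	 * The kernel issues smp_wmb() here, pairing with the reader's
	 * smp_rmb() so that a reader which loads the pte first and the
	 * hidx word second never sees a stale hidx for a pte it has
	 * already observed. No pte bits change for the 64K format,
	 * hence the 0 return.
	 */
	return 0;
}

int main(void)
{
	unsigned long pte_page[2 * DEMO_PTRS_PER_PTE] = { 0 };
	struct demo_real_pte rpte = { .pte = pte_page[0],
				      .hidx = pte_page[DEMO_PTRS_PER_PTE] };

	demo_set_hash_slot_64k(&pte_page[0], rpte, 2, 0x9);
	printf("hidx word: 0x%lx\n", pte_page[DEMO_PTRS_PER_PTE]);
	return 0;
}
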