Message-ID: <CAGM2rebWT43mTrpsbiwqpioNa=K68OQp=fstBmgov3tdkXjPiQ@mail.gmail.com>
Date:   Sat, 13 Oct 2018 12:58:03 -0400
From:   Pavel Tatashin <pasha.tatashin@...il.com>
To:     alexander.h.duyck@...ux.intel.com
Cc:     Linux Memory Management List <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Pasha Tatashin <pavel.tatashin@...rosoft.com>,
        Michal Hocko <mhocko@...e.com>, dave.jiang@...el.com,
        LKML <linux-kernel@...r.kernel.org>, willy@...radead.org,
        davem@...emloft.net, yi.z.zhang@...ux.intel.com,
        khalid.aziz@...cle.com, rppt@...ux.vnet.ibm.com,
        Vlastimil Babka <vbabka@...e.cz>, sparclinux@...r.kernel.org,
        dan.j.williams@...el.com, ldufour@...ux.vnet.ibm.com,
        mgorman@...hsingularity.net, mingo@...nel.org,
        kirill.shutemov@...ux.intel.com
Subject: Re: [mm PATCH v2 1/6] mm: Use mm_zero_struct_page from SPARC on all
 64b architectures

I am worried about this change. I added an optimized
mm_zero_struct_page() specifically to SPARC because SPARC performs
poorly with small memset()s, since it uses STBI instructions. Other
architectures might not suffer with small memset()s, and may have
hardware-optimized memset variants for small sizes. Don't forget,
memset() is a leaf routine on most arches, so the function call should
be cheap. Also, the macro itself is not very flexible: whenever the
size of struct page changes, it must be modified as well (though we
could add fall-throughs). I would add this macro only to those arches
that benefit from the change; in other words, I would like to see
performance data.
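
To be concrete, the kind of comparison I have in mind is something like
the quick user-space sketch below (ITERS and the clobber() barrier are
my own inventions for the sketch, and user space only approximates the
kernel case; the numbers that really matter come from timing memmap
init on each arch). Build with something like gcc -O2:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define WORDS 8                 /* 64 bytes, a common sizeof(struct page) */
#define ITERS (100UL * 1000 * 1000)

static uint64_t page[WORDS];

/* Compiler barrier so neither loop is optimized away entirely. */
static inline void clobber(void *p)
{
        asm volatile("" : : "r" (p) : "memory");
}

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
        uint64_t start;
        unsigned long i;
        int j;

        start = now_ns();
        for (i = 0; i < ITERS; i++) {
                memset(page, 0, sizeof(page));  /* libc/arch memset path */
                clobber(page);
        }
        printf("memset: %lu ns\n", (unsigned long)(now_ns() - start));

        start = now_ns();
        for (i = 0; i < ITERS; i++) {
                for (j = 0; j < WORDS; j++)     /* plain 8-byte stores */
                        page[j] = 0;
                clobber(page);
        }
        printf("stores: %lu ns\n", (unsigned long)(now_ns() - start));

        return 0;
}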

I will review the rest of the patches in this series on Monday.

Thank you,
Pavel
On Thu, Oct 11, 2018 at 6:17 PM Alexander Duyck
<alexander.h.duyck@...ux.intel.com> wrote:
>
> This change makes it so that we use the same approach that was already in
> use on SPARC on all the architectures that support a 64b long.
>
> This is mostly motivated by the fact that 8 to 10 store/move instructions
> are likely always going to be faster than having to call into a function
> that is not specialized for handling page init.
>
> An added advantage to doing it this way is that the compiler can get away
> with combining writes in the __init_single_page call. As a result the
> zeroing is reduced to only about 4 write operations, or at least that is
> what I am seeing with GCC 6.2, as the flags, LRU pointers, and
> count/mapcount stores seem to be cancelling out at least 4 of the 8
> assignments on my system.
>
> One change I had to make to the function was to reduce the minimum
> struct page size to 56 bytes to support some powerpc64 configurations.
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
> ---
>  arch/sparc/include/asm/pgtable_64.h |   30 ------------------------------
>  include/linux/mm.h                  |   34 ++++++++++++++++++++++++++++++++++
>  2 files changed, 34 insertions(+), 30 deletions(-)
>
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 1393a8ac596b..22500c3be7a9 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -231,36 +231,6 @@
>  extern struct page *mem_map_zero;
>  #define ZERO_PAGE(vaddr)       (mem_map_zero)
>
> -/* This macro must be updated when the size of struct page grows above 80
> - * or reduces below 64.
> - * The idea that compiler optimizes out switch() statement, and only
> - * leaves clrx instructions
> - */
> -#define        mm_zero_struct_page(pp) do {                                    \
> -       unsigned long *_pp = (void *)(pp);                              \
> -                                                                       \
> -        /* Check that struct page is either 64, 72, or 80 bytes */     \
> -       BUILD_BUG_ON(sizeof(struct page) & 7);                          \
> -       BUILD_BUG_ON(sizeof(struct page) < 64);                         \
> -       BUILD_BUG_ON(sizeof(struct page) > 80);                         \
> -                                                                       \
> -       switch (sizeof(struct page)) {                                  \
> -       case 80:                                                        \
> -               _pp[9] = 0;     /* fallthrough */                       \
> -       case 72:                                                        \
> -               _pp[8] = 0;     /* fallthrough */                       \
> -       default:                                                        \
> -               _pp[7] = 0;                                             \
> -               _pp[6] = 0;                                             \
> -               _pp[5] = 0;                                             \
> -               _pp[4] = 0;                                             \
> -               _pp[3] = 0;                                             \
> -               _pp[2] = 0;                                             \
> -               _pp[1] = 0;                                             \
> -               _pp[0] = 0;                                             \
> -       }                                                               \
> -} while (0)
> -
>  /* PFNs are real physical page numbers.  However, mem_map only begins to record
>   * per-page information starting at pfn_base.  This is to handle systems where
>   * the first physical page in the machine is at some huge physical address,
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 273d4dbd3883..dee407998366 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -102,8 +102,42 @@ static inline void set_max_mapnr(unsigned long limit) { }
>   * zeroing by defining this macro in <asm/pgtable.h>.
>   */
>  #ifndef mm_zero_struct_page
> +#if BITS_PER_LONG == 64
> +/* This function must be updated when the size of struct page grows above 80
> + * or shrinks below 56. The idea is that the compiler optimizes out the
> + * switch() statement and leaves only move/store instructions
> + */
> +#define        mm_zero_struct_page(pp) __mm_zero_struct_page(pp)
> +static inline void __mm_zero_struct_page(struct page *page)
> +{
> +       unsigned long *_pp = (void *)page;
> +
> +        /* Check that struct page is either 56, 64, 72, or 80 bytes */
> +       BUILD_BUG_ON(sizeof(struct page) & 7);
> +       BUILD_BUG_ON(sizeof(struct page) < 56);
> +       BUILD_BUG_ON(sizeof(struct page) > 80);
> +
> +       switch (sizeof(struct page)) {
> +       case 80:
> +               _pp[9] = 0;     /* fallthrough */
> +       case 72:
> +               _pp[8] = 0;     /* fallthrough */
> +       default:
> +               _pp[7] = 0;     /* fallthrough */
> +       case 56:
> +               _pp[6] = 0;
> +               _pp[5] = 0;
> +               _pp[4] = 0;
> +               _pp[3] = 0;
> +               _pp[2] = 0;
> +               _pp[1] = 0;
> +               _pp[0] = 0;
> +       }
> +}
> +#else
>  #define mm_zero_struct_page(pp)  ((void)memset((pp), 0, sizeof(struct page)))
>  #endif
> +#endif
>
>  /*
>   * Default maximum number of active map areas, this limits the number of vmas
>
