Date:   Fri, 17 Jul 2020 14:44:14 -0700
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Alex Shi <alex.shi@...ux.alibaba.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Tejun Heo <tj@...nel.org>, Hugh Dickins <hughd@...gle.com>,
        Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        Daniel Jordan <daniel.m.jordan@...cle.com>,
        Yang Shi <yang.shi@...ux.alibaba.com>,
        Matthew Wilcox <willy@...radead.org>,
        Johannes Weiner <hannes@...xchg.org>,
        kbuild test robot <lkp@...el.com>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, cgroups@...r.kernel.org,
        Shakeel Butt <shakeelb@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Wei Yang <richard.weiyang@...il.com>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Jann Horn <jannh@...gle.com>
Subject: Re: [PATCH v16 20/22] mm/vmscan: use relock for move_pages_to_lru

On Fri, Jul 10, 2020 at 5:59 PM Alex Shi <alex.shi@...ux.alibaba.com> wrote:
>
> From: Hugh Dickins <hughd@...gle.com>
>
> Use the relock function to replace the open-coded relocking, and try to
> save a few lock/unlock cycles.
>
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> Signed-off-by: Alex Shi <alex.shi@...ux.alibaba.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Tejun Heo <tj@...nel.org>
> Cc: Andrey Ryabinin <aryabinin@...tuozzo.com>
> Cc: Jann Horn <jannh@...gle.com>
> Cc: Mel Gorman <mgorman@...hsingularity.net>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Matthew Wilcox <willy@...radead.org>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: cgroups@...r.kernel.org
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-mm@...ck.org
> ---
>  mm/vmscan.c | 17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bdb53a678e7e..078a1640ec60 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1854,15 +1854,15 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>         enum lru_list lru;
>
>         while (!list_empty(list)) {
> -               struct lruvec *new_lruvec = NULL;
> -
>                 page = lru_to_page(list);
>                 VM_BUG_ON_PAGE(PageLRU(page), page);
>                 list_del(&page->lru);
>                 if (unlikely(!page_evictable(page))) {
> -                       spin_unlock_irq(&lruvec->lru_lock);
> +                       if (lruvec) {
> +                               spin_unlock_irq(&lruvec->lru_lock);
> +                               lruvec = NULL;
> +                       }
>                         putback_lru_page(page);
> -                       spin_lock_irq(&lruvec->lru_lock);
>                         continue;
>                 }
>
> @@ -1876,12 +1876,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>                  *                                        list_add(&page->lru,)
>                  *     list_add(&page->lru,) //corrupt
>                  */
> -               new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> -               if (new_lruvec != lruvec) {
> -                       if (lruvec)
> -                               spin_unlock_irq(&lruvec->lru_lock);
> -                       lruvec = lock_page_lruvec_irq(page);
> -               }
> +               lruvec = relock_page_lruvec_irq(page, lruvec);
>                 SetPageLRU(page);
>
>                 if (unlikely(put_page_testzero(page))) {
> @@ -1890,8 +1885,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>
>                         if (unlikely(PageCompound(page))) {
>                                 spin_unlock_irq(&lruvec->lru_lock);
> +                               lruvec = NULL;
>                                 destroy_compound_page(page);
> -                               spin_lock_irq(&lruvec->lru_lock);
>                         } else
>                                 list_add(&page->lru, &pages_to_free);
>

It seems like this should just be rolled into patch 19. Otherwise, if
you want to treat it as a "further optimization" type patch, you might
pull some of the optimizations you were pushing in patch 18 into this
patch as well and just call it out as adding relocks where there
previously were none.
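
For readers skimming the hunk above, here is a minimal sketch of what
relock_page_lruvec_irq() presumably reduces to, reconstructed purely from
the open-coded sequence the patch removes (the real helper is introduced
earlier in this series and may differ in detail):

	/*
	 * Sketch only: reconstructed from the removed open-coded locking in
	 * move_pages_to_lru(); not necessarily the series' actual definition.
	 */
	static struct lruvec *relock_page_lruvec_irq(struct page *page,
						     struct lruvec *locked_lruvec)
	{
		struct lruvec *lruvec;

		/* Look up the lruvec this page currently belongs to. */
		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
		if (lruvec != locked_lruvec) {
			/* Drop the old lock (if any) and take the new one. */
			if (locked_lruvec)
				spin_unlock_irq(&locked_lruvec->lru_lock);
			locked_lruvec = lock_page_lruvec_irq(page);
		}
		return locked_lruvec;
	}

The caller keeps the returned lruvec as its "currently locked" cursor,
which is why the unevictable and compound-page paths in the hunk now set
lruvec = NULL after unlocking instead of immediately re-taking the lock.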
