Date:   Wed, 12 Feb 2020 20:00:13 +0900
From:   Joonsoo Kim <js1304@...il.com>
To:     Hillf Danton <hdanton@...a.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Hugh Dickins <hughd@...gle.com>,
        Minchan Kim <minchan@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Mel Gorman <mgorman@...hsingularity.net>, kernel-team@....com,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH 9/9] mm/swap: count a new anonymous page as a
 reclaim_state's rotate

Hello,

On Wed, Feb 12, 2020 at 12:35 PM, Hillf Danton <hdanton@...a.com> wrote:
>
>
> On Mon, 10 Feb 2020 22:20:37 -0800 (PST)
> > From: Joonsoo Kim <iamjoonsoo.kim@....com>
> >
> > reclaim_stat's rotate is used to control the ratio of page scanning
> > between the file and anonymous LRUs. Before this patch, all new
> > anonymous pages were counted toward rotate, protecting anonymous pages
> > on the active LRU and making reclaim on the anonymous LRU happen less
> > often than on the file LRU.
> >
> > Now the situation has changed: new anonymous pages are no longer added
> > to the active LRU, so rotate would be far lower than before. Reclaim on
> > the anonymous LRU would then happen more often, which could hurt
> > systems tuned for the previous behavior.
> >
> > Therefore, this patch counts a new anonymous page toward
> > reclaim_stat's rotate. Although adding this count to rotate is not
> > strictly logical under the current algorithm, reducing the regression
> > is more important.
> >
> > I found this regression with a kernel-build test; it amounts to
> > roughly a 2~5% performance degradation. With this workaround,
> > performance is completely restored.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
> > ---
> >  mm/swap.c | 27 ++++++++++++++++++++++++++-
> >  1 file changed, 26 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/swap.c b/mm/swap.c
> > index 18b2735..c3584af 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -187,6 +187,9 @@ int get_kernel_page(unsigned long start, int write, struct page **pages)
> >  }
> >  EXPORT_SYMBOL_GPL(get_kernel_page);
> >
> > +static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
> > +                              void *arg);
> > +
> >  static void pagevec_lru_move_fn(struct pagevec *pvec,
> >       void (*move_fn)(struct page *page, struct lruvec *lruvec, void *arg),
> >       void *arg)
> > @@ -207,6 +210,19 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
> >                       spin_lock_irqsave(&pgdat->lru_lock, flags);
> >               }
> >
> > +             if (move_fn == __pagevec_lru_add_fn) {
> > +                     struct list_head *entry = &page->lru;
> > +                     unsigned long next = (unsigned long)entry->next;
> > +                     unsigned long rotate = next & 2;
> > +
> > +                     if (rotate) {
> > +                             VM_BUG_ON(arg);
> > +
> > +                             next = next & ~2;
> > +                             entry->next = (struct list_head *)next;
> > +                             arg = (void *)rotate;
> > +                     }
> > +             }
> >               lruvec = mem_cgroup_page_lruvec(page, pgdat);
> >               (*move_fn)(page, lruvec, arg);
> >       }
> > @@ -475,6 +491,14 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
> >                                   hpage_nr_pages(page));
> >               count_vm_event(UNEVICTABLE_PGMLOCKED);
> >       }
> > +
> > +     if (PageSwapBacked(page) && evictable) {
> > +             struct list_head *entry = &page->lru;
> > +             unsigned long next = (unsigned long)entry->next;
> > +
> > +             next = next | 2;
> > +             entry->next = (struct list_head *)next;
> > +     }
> >       lru_cache_add(page);
> >  }
> >
> > @@ -927,6 +951,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
> >  {
> >       enum lru_list lru;
> >       int was_unevictable = TestClearPageUnevictable(page);
> > +     unsigned long rotate = (unsigned long)arg;
> >
> >       VM_BUG_ON_PAGE(PageLRU(page), page);
> >
> > @@ -962,7 +987,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
> >       if (page_evictable(page)) {
> >               lru = page_lru(page);
> >               update_page_reclaim_stat(lruvec, page_is_file_cache(page),
> > -                                      PageActive(page));
> > +                                      PageActive(page) | rotate);
>
>
> Is it likely to rotate a page if we know it's not active?
>
>                 update_page_reclaim_stat(lruvec, page_is_file_cache(page),
> -                                        PageActive(page));
> +                                        PageActive(page) ||
> +                                        !page_is_file_cache(page));
>
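
For background, the rotate count under discussion feeds the anon/file
scan balancing in get_scan_count(). A simplified sketch of the v5.5-era
mm/vmscan.c logic (the lru_lock and the periodic halving of the counters
are omitted; this is not the verbatim kernel code):

	struct zone_reclaim_stat {
		unsigned long recent_rotated[2];	/* [0] anon, [1] file */
		unsigned long recent_scanned[2];
	};

	/*
	 * anon_prio == swappiness and file_prio == 200 - swappiness in
	 * the real code; ap/fp become the scan fractions for each LRU.
	 */
	static void scan_pressure(const struct zone_reclaim_stat *rs,
				  unsigned long anon_prio, unsigned long file_prio,
				  unsigned long *ap, unsigned long *fp)
	{
		/* More rotated pages -> lower scan pressure on that LRU. */
		*ap = anon_prio * (rs->recent_scanned[0] + 1);
		*ap /= rs->recent_rotated[0] + 1;

		*fp = file_prio * (rs->recent_scanned[1] + 1);
		*fp /= rs->recent_rotated[1] + 1;
	}

So a higher anon rotate count directly lowers anon scan pressure, which
is the behavior the patch tries to preserve for new anonymous pages.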

My intention is that only newly created anonymous pages contribute to
the rotate count.
With your suggested code, other anonymous pages could also contribute to
the rotate count, since __pagevec_lru_add_fn() is also used elsewhere.
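
The trick the patch uses to carry that per-page information: while the
page has not yet been added to an LRU list, page->lru is unused, and
list_head pointers are word-aligned, so bit 1 of lru.next is normally
zero and can hold a flag. A minimal self-contained illustration of the
same tagging idea (hypothetical names, userspace C, not the kernel code):

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	struct list_head { struct list_head *next, *prev; };

	#define NEW_ANON_TAG 2UL	/* bit 1, free due to pointer alignment */

	/* Set while the entry is off-list, as in
	 * lru_cache_add_inactive_or_unevictable(). */
	static void tag_new_anon(struct list_head *entry)
	{
		entry->next = (struct list_head *)
				((uintptr_t)entry->next | NEW_ANON_TAG);
	}

	/* Test-and-clear before the pointer is used as a pointer again,
	 * as in pagevec_lru_move_fn(). */
	static int untag_new_anon(struct list_head *entry)
	{
		uintptr_t next = (uintptr_t)entry->next;

		entry->next = (struct list_head *)(next & ~NEW_ANON_TAG);
		return (next & NEW_ANON_TAG) != 0;
	}

	int main(void)
	{
		struct list_head page_lru = { NULL, NULL };

		tag_new_anon(&page_lru);
		assert(untag_new_anon(&page_lru));	/* flag seen exactly once */
		assert(!untag_new_anon(&page_lru));
		printf("tag round-trip ok\n");
		return 0;
	}

Tagging at creation time and consuming the tag in pagevec_lru_move_fn()
is what keeps the extra rotate count limited to newly created anonymous
pages, unlike keying on !page_is_file_cache() inside
__pagevec_lru_add_fn().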

Thanks.
