Date:   Tue, 7 Apr 2020 09:40:43 +0900
From:   Joonsoo Kim <js1304@...il.com>
To:     Hillf Danton <hdanton@...a.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Hugh Dickins <hughd@...gle.com>,
        Minchan Kim <minchan@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Mel Gorman <mgorman@...hsingularity.net>, kernel-team@....com,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v5 02/10] mm/vmscan: protect the workingset on anonymous LRU

On Mon, Apr 6, 2020 at 6:18 PM, Hillf Danton <hdanton@...a.com> wrote:
>
>
> On Fri,  3 Apr 2020 14:40:40 +0900 Joonsoo Kim wrote:
> >
> > @@ -3093,11 +3093,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >       if (unlikely(page != swapcache && swapcache)) {
> >               page_add_new_anon_rmap(page, vma, vmf->address, false);
> >               mem_cgroup_commit_charge(page, memcg, false, false);
> > -             lru_cache_add_active_or_unevictable(page, vma);
> > +             lru_cache_add_inactive_or_unevictable(page, vma);
> >       } else {
> >               do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
> >               mem_cgroup_commit_charge(page, memcg, true, false);
> > -             activate_page(page);
> >       }
> >
> >       swap_free(entry);
> ...
> > @@ -996,8 +996,6 @@ static enum page_references page_check_references(struct page *page,
> >               return PAGEREF_RECLAIM;
> >
> >       if (referenced_ptes) {
> > -             if (PageSwapBacked(page))
> > -                     return PAGEREF_ACTIVATE;
> >               /*
> >                * All mapped pages start out with page table
> >                * references from the instantiating fault, so we need
> > @@ -1020,7 +1018,7 @@ static enum page_references page_check_references(struct page *page,
> >               /*
> >                * Activate file-backed executable pages after first usage.
> >                */
> > -             if (vm_flags & VM_EXEC)
> > +             if ((vm_flags & VM_EXEC) && !PageSwapBacked(page))
> >                       return PAGEREF_ACTIVATE;
> >
> >               return PAGEREF_KEEP;
> > --
> > 2.7.4
>
> Both changes other than
> s/lru_cache_add_active_or_unevictable/lru_cache_add_inactive_or_unevictable/
> are likely worth their own separate commits with a concise log.

IMO, all of the changes in this patch form a single logical change to LRU
management for anonymous pages, so it's better to keep them together.
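
To illustrate the point, here is a minimal stand-alone sketch (not kernel
code; the struct and enum below are simplified stand-ins for the real
types, and only the branch structure mirrors the hunk quoted above) of the
reclaim decision in page_check_references() after this patch:

/*
 * Simplified model of the post-patch reclaim decision.
 * NOT actual kernel code; types are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

enum page_references {
	PAGEREF_RECLAIM,
	PAGEREF_KEEP,
	PAGEREF_ACTIVATE,
};

struct page_model {
	bool swap_backed;	/* anonymous / swap-backed page */
	bool exec_mapping;	/* mapped with VM_EXEC          */
	int  referenced_ptes;	/* young PTE references         */
};

static enum page_references check_references(const struct page_model *p)
{
	if (p->referenced_ptes) {
		/*
		 * Before the patch a referenced swap-backed page was
		 * activated unconditionally here.  After the patch it
		 * stays on the inactive list (PAGEREF_KEEP); promotion
		 * is left to workingset detection instead.
		 */
		if (p->exec_mapping && !p->swap_backed)
			return PAGEREF_ACTIVATE;

		return PAGEREF_KEEP;
	}

	return PAGEREF_RECLAIM;
}

int main(void)
{
	struct page_model anon = { .swap_backed = true,  .referenced_ptes = 1 };
	struct page_model exec = { .swap_backed = false, .exec_mapping = true,
				   .referenced_ptes = 1 };

	printf("referenced anon page -> %s\n",
	       check_references(&anon) == PAGEREF_KEEP ? "KEEP" : "ACTIVATE");
	printf("referenced exec file -> %s\n",
	       check_references(&exec) == PAGEREF_ACTIVATE ? "ACTIVATE" : "KEEP");
	return 0;
}

Together with the do_swap_page() hunk above, which now adds swapped-in
pages via lru_cache_add_inactive_or_unevictable(), this is the other half
of the same policy: anonymous pages start on the inactive list and are
promoted by workingset detection rather than activated on first reference.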

Thanks.
