Date:   Tue, 26 Apr 2022 14:57:15 -0600
From:   Yu Zhao <yuzhao@...gle.com>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Tejun Heo <tj@...nel.org>
Cc:     Stephen Rothwell <sfr@...hwell.id.au>,
        Linux-MM <linux-mm@...ck.org>, Andi Kleen <ak@...ux.intel.com>,
        Aneesh Kumar <aneesh.kumar@...ux.ibm.com>,
        Barry Song <21cnbao@...il.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Hillf Danton <hdanton@...a.com>, Jens Axboe <axboe@...nel.dk>,
        Jesse Barnes <jsbarnes@...gle.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Jonathan Corbet <corbet@....net>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Mel Gorman <mgorman@...e.de>,
        Michael Larabel <Michael@...haellarabel.com>,
        Michal Hocko <mhocko@...nel.org>,
        Mike Rapoport <rppt@...nel.org>,
        Rik van Riel <riel@...riel.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Will Deacon <will@...nel.org>,
        Ying Huang <ying.huang@...el.com>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        "open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Kernel Page Reclaim v2 <page-reclaim@...gle.com>,
        "the arch/x86 maintainers" <x86@...nel.org>,
        Brian Geffon <bgeffon@...gle.com>,
        Jan Alexander Steffens <heftig@...hlinux.org>,
        Oleksandr Natalenko <oleksandr@...alenko.name>,
        Steven Barrett <steven@...uorix.net>,
        Suleiman Souhlal <suleiman@...gle.com>,
        Daniel Byrne <djbyrne@....edu>,
        Donald Carr <d@...os-reins.com>,
        Holger Hoffstätte <holger@...lied-asynchrony.com>,
        Konstantin Kharlamov <Hi-Angel@...dex.ru>,
        Shuang Zhai <szhai2@...rochester.edu>,
        Sofia Trinh <sofia.trinh@....works>,
        Vaibhav Jain <vaibhav@...ux.ibm.com>
Subject: Re: [PATCH v10 10/14] mm: multi-gen LRU: kill switch

On Mon, Apr 11, 2022 at 8:16 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Wed,  6 Apr 2022 21:15:22 -0600 Yu Zhao <yuzhao@...gle.com> wrote:
>
> > Add /sys/kernel/mm/lru_gen/enabled as a kill switch. Components that
> > can be disabled include:
> >   0x0001: the multi-gen LRU core
> >   0x0002: walking page table, when arch_has_hw_pte_young() returns
> >           true
> >   0x0004: clearing the accessed bit in non-leaf PMD entries, when
> >           CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y
> >   [yYnN]: apply to all the components above
> > E.g.,
> >   echo y >/sys/kernel/mm/lru_gen/enabled
> >   cat /sys/kernel/mm/lru_gen/enabled
> >   0x0007
> >   echo 5 >/sys/kernel/mm/lru_gen/enabled
> >   cat /sys/kernel/mm/lru_gen/enabled
> >   0x0005
>
> I'm shocked that this actually works.  How does it work?  Existing
> pages & folios are drained over time or synchronously?

Basically we have a double-throw switch: once it's flipped, new
(isolated) pages can only be added to the lists of the current
implementation, and existing pages on the lists of the previous
implementation are synchronously drained (isolated and then re-added),
with cond_resched() of course.
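The drain pattern can be sketched in userspace (illustrative only:
`BATCH` and the singly linked lists are stand-ins for MAX_LRU_BATCH and
the per-gen LRU lists, and `drain_batch` is a made-up name, not a
kernel symbol):

```c
#include <assert.h>
#include <stddef.h>

/* Move pages from the old implementation's list to the current one,
 * at most BATCH at a time so the caller can cond_resched() and retry. */
#define BATCH 4

struct page { struct page *next; };

/* Returns 1 when src is fully drained, 0 when the batch ran out. */
static int drain_batch(struct page **src, struct page **dst)
{
	int remaining = BATCH;

	while (*src) {
		struct page *page = *src;

		*src = page->next;	/* isolate from the old list */
		page->next = *dst;	/* re-add to the current list */
		*dst = page;

		if (!--remaining)
			return 0;	/* batch exhausted: reschedule, retry */
	}
	return 1;			/* old list fully drained */
}
```

The batching is what keeps the synchronous drain preemptible: the caller
loops until `drain_batch()` reports the old list empty.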

> Supporting
> structures remain allocated, available for reenablement?

Correct.

> Why is it thought necessary to have this?  Is it expected to be
> permanent?

This is almost a must for large-scale deployments/experiments.

For deployments, we need to keep fix rollouts (high priority) and
feature enablement (low priority) separate. Rolling out multiple
binaries works but makes the process slower and more painful. So
generally there is only one binary to roll out per release, and unless
it's impossible, new features are disabled by default. Once a rollout
completes, i.e., reaches a large enough population and remains stable,
new features are turned on gradually. If something goes wrong with a
new feature, we turn off that feature rather than roll back the
kernel.

Similarly, for A/B experiments, we don't want to use two binaries.


> > NB: the page table walks happen on the scale of seconds under heavy
> > memory pressure, in which case the mmap_lock contention is a lesser
> > concern, compared with the LRU lock contention and the I/O congestion.
> > So far the only well-known case of the mmap_lock contention happens on
> > Android, due to Scudo [1] which allocates several thousand VMAs for
> > merely a few hundred MBs. The SPF and the Maple Tree also have
> > provided their own assessments [2][3]. However, if walking page tables
> > does worsen the mmap_lock contention, the kill switch can be used to
> > disable it. In this case the multi-gen LRU will suffer a minor
> > performance degradation, as shown previously.
> >
> > Clearing the accessed bit in non-leaf PMD entries can also be
> > disabled, since this behavior was not tested on x86 varieties other
> > than Intel and AMD.
> >
> > ...
> >
> > --- a/include/linux/cgroup.h
> > +++ b/include/linux/cgroup.h
> > @@ -432,6 +432,18 @@ static inline void cgroup_put(struct cgroup *cgrp)
> >       css_put(&cgrp->self);
> >  }
> >
> > +extern struct mutex cgroup_mutex;
> > +
> > +static inline void cgroup_lock(void)
> > +{
> > +     mutex_lock(&cgroup_mutex);
> > +}
> > +
> > +static inline void cgroup_unlock(void)
> > +{
> > +     mutex_unlock(&cgroup_mutex);
> > +}
>
> It's a tad rude to export mutex_lock like this without (apparently)
> informing its owner (Tejun).

Looping in Tejun.

> And if we're going to wrap its operations via helper functions then
>
> - presumably all cgroup_mutex operations should be wrapped and
>
> - existing open-coded operations on this mutex should be converted.

I wrapped cgroup_mutex here because I'm not a big fan of #ifdefs
(CONFIG_CGROUPS). Within the cgroup code itself, using these wrappers
seems superfluous to me: developers who work on that code would only
have to look up what the wrappers do.
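The rationale can be illustrated with a small userspace sketch (not
kernel code: `CONFIG_CGROUPS_DEMO`, `demo_cgroup_lock()`, and the
counter are all made-up stand-ins): a config-dependent lock hidden
behind helpers so call sites need no #ifdef.

```c
#include <assert.h>

#define CONFIG_CGROUPS_DEMO 1	/* stand-in for CONFIG_CGROUPS */

static int demo_lock_depth;	/* stands in for the mutex state */

#if CONFIG_CGROUPS_DEMO
static inline void demo_cgroup_lock(void)
{
	demo_lock_depth++;	/* mutex_lock(&cgroup_mutex) in the patch */
}

static inline void demo_cgroup_unlock(void)
{
	demo_lock_depth--;	/* mutex_unlock(&cgroup_mutex) */
}
#else
/* With the subsystem compiled out, the helpers collapse to no-ops,
 * keeping every call site free of #ifdef blocks. */
static inline void demo_cgroup_lock(void) { }
static inline void demo_cgroup_unlock(void) { }
#endif
```

Callers outside the subsystem write `demo_cgroup_lock()` unconditionally
either way; only the header changes with the config.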

> > +static bool drain_evictable(struct lruvec *lruvec)
> > +{
> > +     int gen, type, zone;
> > +     int remaining = MAX_LRU_BATCH;
> > +
> > +     for_each_gen_type_zone(gen, type, zone) {
> > +             struct list_head *head = &lruvec->lrugen.lists[gen][type][zone];
> > +
> > +             while (!list_empty(head)) {
> > +                     bool success;
> > +                     struct folio *folio = lru_to_folio(head);
> > +
> > +                     VM_BUG_ON_FOLIO(folio_test_unevictable(folio), folio);
> > +                     VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
> > +                     VM_BUG_ON_FOLIO(folio_is_file_lru(folio) != type, folio);
> > +                     VM_BUG_ON_FOLIO(folio_zonenum(folio) != zone, folio);
>
> So many new BUG_ONs to upset Linus :(

I'll replace them with VM_WARN_ON_ONCE_FOLIO(), based on the previous
discussion.

> > +                     success = lru_gen_del_folio(lruvec, folio, false);
> > +                     VM_BUG_ON(!success);
> > +                     lruvec_add_folio(lruvec, folio);
> > +
> > +                     if (!--remaining)
> > +                             return false;
> > +             }
> > +     }
> > +
> > +     return true;
> > +}
> > +
> >
> > ...
> >
> > +static ssize_t store_enable(struct kobject *kobj, struct kobj_attribute *attr,
> > +                         const char *buf, size_t len)
> > +{
> > +     int i;
> > +     unsigned int caps;
> > +
> > +     if (tolower(*buf) == 'n')
> > +             caps = 0;
> > +     else if (tolower(*buf) == 'y')
> > +             caps = -1;
> > +     else if (kstrtouint(buf, 0, &caps))
> > +             return -EINVAL;
>
> See kstrtobool()

`caps` is not a boolean, hence the plural name and the bitwise loop below.

> > +     for (i = 0; i < NR_LRU_GEN_CAPS; i++) {
> > +             bool enable = caps & BIT(i);
> > +
> > +             if (i == LRU_GEN_CORE)
> > +                     lru_gen_change_state(enable);
> > +             else if (enable)
> > +                     static_branch_enable(&lru_gen_caps[i]);
> > +             else
> > +                     static_branch_disable(&lru_gen_caps[i]);
> > +     }
> > +
> > +     return len;
> > +}
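For reference, the accepted input grammar above can be mirrored in a
small userspace sketch (illustrative: `parse_caps` is a made-up name,
and `strtoul` stands in for the kernel's kstrtouint, which additionally
strips the trailing newline an `echo` would append):

```c
#include <assert.h>
#include <ctype.h>
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* 'y'/'Y' selects all capabilities, 'n'/'N' none; anything else is
 * parsed as an unsigned bitmask (decimal, hex, or octal). */
static int parse_caps(const char *buf, unsigned int *caps)
{
	char *end;
	unsigned long val;

	if (tolower((unsigned char)*buf) == 'y') {
		*caps = ~0U;		/* enable every capability bit */
		return 0;
	}
	if (tolower((unsigned char)*buf) == 'n') {
		*caps = 0;		/* disable everything */
		return 0;
	}
	errno = 0;
	val = strtoul(buf, &end, 0);	/* base 0: accepts 5, 0x5, 05 */
	if (errno || end == buf || *end != '\0' || val > UINT_MAX)
		return -1;		/* -EINVAL in the kernel */
	*caps = (unsigned int)val;
	return 0;
}
```

This is why kstrtobool() doesn't fit: 'y'/'n' are shorthands layered on
top of a multi-bit mask, not a single boolean.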
