Message-ID: <CAOUHufa1tm7FXUhiW0NtzhBZ_-qcr-drM1BY-HWrT6Odmnc17w@mail.gmail.com>
Date:   Sun, 18 Sep 2022 02:17:16 -0600
From:   Yu Zhao <yuzhao@...gle.com>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Andi Kleen <ak@...ux.intel.com>,
        Aneesh Kumar <aneesh.kumar@...ux.ibm.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Hillf Danton <hdanton@...a.com>, Jens Axboe <axboe@...nel.dk>,
        Johannes Weiner <hannes@...xchg.org>,
        Jonathan Corbet <corbet@....net>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Mel Gorman <mgorman@...e.de>,
        Michael Larabel <Michael@...haellarabel.com>,
        Michal Hocko <mhocko@...nel.org>,
        Mike Rapoport <rppt@...nel.org>, Tejun Heo <tj@...nel.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Will Deacon <will@...nel.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        "open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        "the arch/x86 maintainers" <x86@...nel.org>,
        Kernel Page Reclaim v2 <page-reclaim@...gle.com>,
        Brian Geffon <bgeffon@...gle.com>,
        Jan Alexander Steffens <heftig@...hlinux.org>,
        Oleksandr Natalenko <oleksandr@...alenko.name>,
        Steven Barrett <steven@...uorix.net>,
        Suleiman Souhlal <suleiman@...gle.com>,
        Daniel Byrne <djbyrne@....edu>,
        Donald Carr <d@...os-reins.com>,
        Holger Hoffstätte <holger@...lied-asynchrony.com>,
        Konstantin Kharlamov <Hi-Angel@...dex.ru>,
        Shuang Zhai <szhai2@...rochester.edu>,
        Sofia Trinh <sofia.trinh@....works>,
        Vaibhav Jain <vaibhav@...ux.ibm.com>
Subject: Re: [PATCH mm-unstable v15 08/14] mm: multi-gen LRU: support page
 table walks

On Sun, Sep 18, 2022 at 2:01 AM Yu Zhao <yuzhao@...gle.com> wrote:

...

> This patch uses the following optimizations when walking page tables:
> 1. It tracks the usage of mm_struct's between context switches so that
>    page table walkers can skip processes that have been sleeping since
>    the last iteration.

...

> @@ -672,6 +672,22 @@ struct mm_struct {
>                  */
>                 unsigned long ksm_merging_pages;
>  #endif
> +#ifdef CONFIG_LRU_GEN
> +               struct {
> +                       /* this mm_struct is on lru_gen_mm_list */
> +                       struct list_head list;
> +                       /*
> +                        * Set when switching to this mm_struct, as a hint of
> +                        * whether it has been used since the last time per-node
> +                        * page table walkers cleared the corresponding bits.
> +                        */
> +                       unsigned long bitmap;

...

> +static inline void lru_gen_use_mm(struct mm_struct *mm)
> +{
> +       /*
> +        * When the bitmap is set, page reclaim knows this mm_struct has been
> +        * used since the last time it cleared the bitmap. So it might be worth
> +        * walking the page tables of this mm_struct to clear the accessed bit.
> +        */
> +       WRITE_ONCE(mm->lru_gen.bitmap, -1);
> +}
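
For anyone skimming only this excerpt: the hunks above are the producer side of
the hint. The consumer side, the per-node page table walker that uses the
bitmap to skip mm_structs that have been sleeping, is in the part I trimmed. A
rough, illustrative sketch of that idea is below; the function name and the
node-keyed bit are placeholders here, not quoted from the patch:

	static bool walker_should_skip_mm(struct mm_struct *mm, int node_id)
	{
		/* one hint bit per node, wrapping if there are more nodes than bits */
		int key = node_id % BITS_PER_LONG;

		/* not used since this bit was last cleared: skip the walk */
		if (!test_bit(key, &mm->lru_gen.bitmap))
			return true;

		/* consume the hint so the next iteration starts from a clean slate */
		clear_bit(key, &mm->lru_gen.bitmap);
		return false;
	}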

...

> @@ -5180,6 +5180,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
>                  * finish_task_switch()'s mmdrop().
>                  */
>                 switch_mm_irqs_off(prev->active_mm, next->mm, next);
> +               lru_gen_use_mm(next->mm);
>
>                 if (!prev->mm) {                        // from kernel
>                         /* will mmdrop() in finish_task_switch(). */

Adding Ingo, Peter, Juri and Vincent for the bit above, per previous
discussion here:
https://lore.kernel.org/r/CAOUHufY91Eju-g1+xbUsGkGZ-cwBm78v+S_Air7Cp8mAnYJVYA@mail.gmail.com/

I trimmed 99% of this patch to save you time. In case you want to
hear the whole story:
https://lore.kernel.org/r/20220918080010.2920238-9-yuzhao@google.com/
