lists.openwall.net — Open Source and information security mailing list archives
Date:	Tue, 14 Jun 2022 01:23:52 -0600
From:	Yu Zhao <yuzhao@...gle.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Andi Kleen <ak@...ux.intel.com>, Aneesh Kumar <aneesh.kumar@...ux.ibm.com>,
	Catalin Marinas <catalin.marinas@....com>, Dave Hansen <dave.hansen@...ux.intel.com>,
	Hillf Danton <hdanton@...a.com>, Jens Axboe <axboe@...nel.dk>,
	Johannes Weiner <hannes@...xchg.org>, Jonathan Corbet <corbet@....net>,
	Linus Torvalds <torvalds@...ux-foundation.org>, Matthew Wilcox <willy@...radead.org>,
	Mel Gorman <mgorman@...e.de>, Michael Larabel <Michael@...haellarabel.com>,
	Michal Hocko <mhocko@...nel.org>, Mike Rapoport <rppt@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>, Tejun Heo <tj@...nel.org>,
	Vlastimil Babka <vbabka@...e.cz>, Will Deacon <will@...nel.org>,
	linux-arm-kernel@...ts.infradead.org, linux-doc@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org,
	page-reclaim@...gle.com, Brian Geffon <bgeffon@...gle.com>,
	Jan Alexander Steffens <heftig@...hlinux.org>,
	Oleksandr Natalenko <oleksandr@...alenko.name>,
	Steven Barrett <steven@...uorix.net>, Suleiman Souhlal <suleiman@...gle.com>,
	Daniel Byrne <djbyrne@....edu>, Donald Carr <d@...os-reins.com>,
	Holger Hoffstätte <holger@...lied-asynchrony.com>,
	Konstantin Kharlamov <Hi-Angel@...dex.ru>, Shuang Zhai <szhai2@...rochester.edu>,
	Sofia Trinh <sofia.trinh@....works>, Vaibhav Jain <vaibhav@...ux.ibm.com>
Subject: Re: [PATCH v12 08/14] mm: multi-gen LRU: support page table walks

On Tue, Jun 14, 2022 at 01:16:45AM -0600, Yu Zhao wrote:
> +static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk *args,
> +			 unsigned long *vm_start, unsigned long *vm_end)
> +{
> +	unsigned long start = round_up(*vm_end, size);
> +	unsigned long end = (start | ~mask) + 1;
> +
> +	VM_WARN_ON_ONCE(mask & size);
> +	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
> +
> +	while (args->vma) {
> +		if (start >= args->vma->vm_end) {
> +			args->vma = args->vma->vm_next;
> +			continue;
> +		}
> +
> +		if (end && end <= args->vma->vm_start)
> +			return false;
> +
> +		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args)) {
> +			args->vma = args->vma->vm_next;
> +			continue;
> +		}
> +
> +		*vm_start = max(start, args->vma->vm_start);
> +		*vm_end = min(end - 1, args->vma->vm_end - 1) + 1;
> +
> +		return true;
> +	}
> +
> +	return false;
> +}

Andrew,

The above function has a conflict with Maple Tree. Please use the
following fix-up if you apply MGLRU on top of Maple Tree. Thanks.

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 69a52aae1e03..05e62948e365 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3749,23 +3749,14 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
 {
 	unsigned long start = round_up(*vm_end, size);
 	unsigned long end = (start | ~mask) + 1;
+	VMA_ITERATOR(vmi, args->mm, start);
 
 	VM_WARN_ON_ONCE(mask & size);
 	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
 
-	while (args->vma) {
-		if (start >= args->vma->vm_end) {
-			args->vma = args->vma->vm_next;
+	for_each_vma_range(vmi, args->vma, end) {
+		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args))
 			continue;
-		}
-
-		if (end && end <= args->vma->vm_start)
-			return false;
-
-		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args)) {
-			args->vma = args->vma->vm_next;
-			continue;
-		}
 
 		*vm_start = max(start, args->vma->vm_start);
 		*vm_end = min(end - 1, args->vma->vm_end - 1) + 1;