Date: Tue, 1 Mar 2022 09:03:06 +0100
From: David Hildenbrand <david@...hat.com>
To: Alex Sierra <alex.sierra@....com>, jgg@...dia.com
Cc: Felix.Kuehling@....com, linux-mm@...ck.org, rcampbell@...dia.com,
	linux-ext4@...r.kernel.org, linux-xfs@...r.kernel.org,
	amd-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
	hch@....de, jglisse@...hat.com, apopple@...dia.com,
	willy@...radead.org, akpm@...ux-foundation.org
Subject: Re: [PATCH] mm: split vm_normal_pages for LRU and non-LRU handling

On 28.02.22 21:34, Alex Sierra wrote:
> DEVICE_COHERENT pages introduce a subtle distinction in the way
> "normal" pages can be used by various callers throughout the kernel.
> They behave like normal pages for purposes of mapping in CPU page
> tables, and for COW. But they do not support LRU lists, NUMA
> migration or THP. Therefore we split vm_normal_page into two
> functions vm_normal_any_page and vm_normal_lru_page. The latter will
> only return pages that can be put on an LRU list and that support
> NUMA migration and THP.

Why not s/vm_normal_any_page/vm_normal_page/ and avoid code churn?

>
> We also introduced a FOLL_LRU flag that adds the same behaviour to
> follow_page and related APIs, to allow callers to specify that they
> expect to put pages on an LRU list.

[...]

> -#define FOLL_WRITE	0x01	/* check pte is writable */
> -#define FOLL_TOUCH	0x02	/* mark page accessed */
> -#define FOLL_GET	0x04	/* do get_page on page */
> -#define FOLL_DUMP	0x08	/* give error on hole if it would be zero */
> -#define FOLL_FORCE	0x10	/* get_user_pages read/write w/o permission */
> -#define FOLL_NOWAIT	0x20	/* if a disk transfer is needed, start the IO
> -				 * and return without waiting upon it */
> -#define FOLL_POPULATE	0x40	/* fault in pages (with FOLL_MLOCK) */
> -#define FOLL_NOFAULT	0x80	/* do not fault in pages */
> -#define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
> -#define FOLL_NUMA	0x200	/* force NUMA hinting page fault */
> -#define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
> -#define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
> -#define FOLL_MLOCK	0x1000	/* lock present pages */
> -#define FOLL_REMOTE	0x2000	/* we are working on non-current tsk/mm */
> -#define FOLL_COW	0x4000	/* internal GUP flag */
> -#define FOLL_ANON	0x8000	/* don't do file mappings */
> -#define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
> -#define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
> -#define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
> -#define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
> +#define FOLL_WRITE	0x01	/* check pte is writable */
> +#define FOLL_TOUCH	0x02	/* mark page accessed */
> +#define FOLL_GET	0x04	/* do get_page on page */
> +#define FOLL_DUMP	0x08	/* give error on hole if it would be zero */
> +#define FOLL_FORCE	0x10	/* get_user_pages read/write w/o permission */
> +#define FOLL_NOWAIT	0x20	/* if a disk transfer is needed, start the IO
> +				 * and return without waiting upon it */
> +#define FOLL_POPULATE	0x40	/* fault in pages (with FOLL_MLOCK) */
> +#define FOLL_NOFAULT	0x80	/* do not fault in pages */
> +#define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
> +#define FOLL_NUMA	0x200	/* force NUMA hinting page fault */
> +#define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
> +#define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
> +#define FOLL_MLOCK	0x1000	/* lock present pages */
> +#define FOLL_REMOTE	0x2000	/* we are working on non-current tsk/mm */
> +#define FOLL_COW	0x4000	/* internal GUP flag */
> +#define FOLL_ANON	0x8000	/* don't do file mappings */
> +#define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
> +#define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
> +#define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
> +#define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
> +#define FOLL_LRU	0x100000 /* return only LRU (anon or page cache) */

Can we minimize code churn, please?

> 		if (PageReserved(page))

> diff --git a/mm/migrate.c b/mm/migrate.c
> index c31d04b46a5e..17d049311b78 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1614,7 +1614,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
> 		goto out;
>
> 	/* FOLL_DUMP to ignore special (like zero) pages */
> -	follflags = FOLL_GET | FOLL_DUMP;
> +	follflags = FOLL_GET | FOLL_DUMP | FOLL_LRU;
> 	page = follow_page(vma, addr, follflags);

Why wouldn't we want to dump DEVICE_COHERENT pages?

This looks wrong.

-- 
Thanks,

David / dhildenb
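To make the proposed split concrete: below is a minimal sketch of how
vm_normal_lru_page() could be layered on top of vm_normal_any_page().
This is not the code from the patch under review; the commit message
only says that the LRU variant must not return DEVICE_COHERENT pages,
and the is_zone_device_page() check here is an assumption about how
that filtering could be done.

#include <linux/mm.h>

/*
 * Illustrative sketch only -- not the actual patch. Per the commit
 * message above, vm_normal_lru_page() must return only pages that can
 * be put on an LRU list and that support NUMA migration and THP.
 */
struct page *vm_normal_lru_page(struct vm_area_struct *vma,
				unsigned long addr, pte_t pte)
{
	/* Resolve the pte to a page exactly as before. */
	struct page *page = vm_normal_any_page(vma, addr, pte);

	/*
	 * Device pages (including DEVICE_COHERENT) are never on LRU
	 * lists and support neither NUMA migration nor THP, so callers
	 * asking for LRU semantics must not see them. (Assumed check.)
	 */
	if (page && is_zone_device_page(page))
		return NULL;
	return page;
}

FOLL_LRU then gives follow_page() callers the same guarantee, which is
what the add_page_for_migration() hunk quoted above relies on when it
adds the flag to follflags.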