Message-ID: <601f24a0-cb55-458e-aa15-3970f2290172@redhat.com>
Date: Mon, 30 Oct 2023 18:24:34 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Marc Zyngier <maz@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>,
Huacai Chen <chenhuacai@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>,
Anup Patel <anup@...infault.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: kvm@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.linux.dev, linux-mips@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, kvm-riscv@...ts.infradead.org,
linux-riscv@...ts.infradead.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Xiaoyao Li <xiaoyao.li@...el.com>,
Xu Yilun <yilun.xu@...el.com>,
Chao Peng <chao.p.peng@...ux.intel.com>,
Fuad Tabba <tabba@...gle.com>,
Jarkko Sakkinen <jarkko@...nel.org>,
Anish Moorthy <amoorthy@...gle.com>,
David Matlack <dmatlack@...gle.com>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
Isaku Yamahata <isaku.yamahata@...el.com>,
Mickaël Salaün <mic@...ikod.net>,
Vlastimil Babka <vbabka@...e.cz>,
Vishal Annapurve <vannapurve@...gle.com>,
Ackerley Tng <ackerleytng@...gle.com>,
Maciej Szmigiero <mail@...iej.szmigiero.name>,
David Hildenbrand <david@...hat.com>,
Quentin Perret <qperret@...gle.com>,
Michael Roth <michael.roth@....com>,
Wang <wei.w.wang@...el.com>,
Liam Merwick <liam.merwick@...cle.com>,
Isaku Yamahata <isaku.yamahata@...il.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH v13 14/35] mm: Add AS_UNMOVABLE to mark mapping as
completely unmovable
On 10/27/23 20:21, Sean Christopherson wrote:
> Add an "unmovable" flag for mappings that cannot be migrated under any
> circumstance. KVM will use the flag for its upcoming GUEST_MEMFD support,
> which will not support compaction/migration, at least not in the
> foreseeable future.
>
> Test AS_UNMOVABLE under folio lock as already done for the async
> compaction/dirty folio case, as the mapping can be removed by truncation
> while compaction is running. To avoid having to lock every folio with a
> mapping, assume/require that unmovable mappings are also unevictable, and
> have mapping_set_unmovable() also set AS_UNEVICTABLE.
>
> Cc: Matthew Wilcox <willy@...radead.org>
> Co-developed-by: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
I think this could even be "From: Vlastimil", but no biggie.
Paolo
> ---
> include/linux/pagemap.h | 19 +++++++++++++++++-
> mm/compaction.c | 43 +++++++++++++++++++++++++++++------------
> mm/migrate.c | 2 ++
> 3 files changed, 51 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 351c3b7f93a1..82c9bf506b79 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -203,7 +203,8 @@ enum mapping_flags {
> /* writeback related tags are not used */
> AS_NO_WRITEBACK_TAGS = 5,
> AS_LARGE_FOLIO_SUPPORT = 6,
> - AS_RELEASE_ALWAYS, /* Call ->release_folio(), even if no private data */
> + AS_RELEASE_ALWAYS = 7, /* Call ->release_folio(), even if no private data */
> + AS_UNMOVABLE = 8, /* The mapping cannot be moved, ever */
> };
>
> /**
> @@ -289,6 +290,22 @@ static inline void mapping_clear_release_always(struct address_space *mapping)
> clear_bit(AS_RELEASE_ALWAYS, &mapping->flags);
> }
>
> +static inline void mapping_set_unmovable(struct address_space *mapping)
> +{
> + /*
> + * It's expected unmovable mappings are also unevictable. Compaction
> + * migrate scanner (isolate_migratepages_block()) relies on this to
> + * reduce page locking.
> + */
> + set_bit(AS_UNEVICTABLE, &mapping->flags);
> + set_bit(AS_UNMOVABLE, &mapping->flags);
> +}
> +
> +static inline bool mapping_unmovable(struct address_space *mapping)
> +{
> + return test_bit(AS_UNMOVABLE, &mapping->flags);
> +}
> +
> static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
> {
> return mapping->gfp_mask;
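
For readers skimming the series: the two helpers above would be used by the
backing store at inode setup time, roughly like the sketch below. This is
illustrative only -- the guest_memfd wiring is not part of this patch and the
function name is made up:

	/* Illustrative sketch, not from the patch under review. */
	#include <linux/fs.h>
	#include <linux/pagemap.h>

	static void example_setup_unmovable_inode(struct inode *inode)
	{
		struct address_space *mapping = inode->i_mapping;

		/*
		 * mapping_set_unmovable() also sets AS_UNEVICTABLE, which is
		 * what lets isolate_migratepages_block() skip these folios
		 * without taking the folio lock in the common case.
		 */
		mapping_set_unmovable(mapping);
	}
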
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 38c8d216c6a3..12b828aed7c8 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -883,6 +883,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>
> /* Time to isolate some pages for migration */
> for (; low_pfn < end_pfn; low_pfn++) {
> + bool is_dirty, is_unevictable;
>
> if (skip_on_failure && low_pfn >= next_skip_pfn) {
> /*
> @@ -1080,8 +1081,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> if (!folio_test_lru(folio))
> goto isolate_fail_put;
>
> + is_unevictable = folio_test_unevictable(folio);
> +
> /* Compaction might skip unevictable pages but CMA takes them */
> - if (!(mode & ISOLATE_UNEVICTABLE) && folio_test_unevictable(folio))
> + if (!(mode & ISOLATE_UNEVICTABLE) && is_unevictable)
> goto isolate_fail_put;
>
> /*
> @@ -1093,26 +1096,42 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_writeback(folio))
> goto isolate_fail_put;
>
> - if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_dirty(folio)) {
> - bool migrate_dirty;
> + is_dirty = folio_test_dirty(folio);
> +
> + if (((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty) ||
> + (mapping && is_unevictable)) {
> + bool migrate_dirty = true;
> + bool is_unmovable;
>
> /*
> * Only folios without mappings or that have
> - * a ->migrate_folio callback are possible to
> - * migrate without blocking. However, we may
> - * be racing with truncation, which can free
> - * the mapping. Truncation holds the folio lock
> - * until after the folio is removed from the page
> - * cache so holding it ourselves is sufficient.
> + * a ->migrate_folio callback are possible to migrate
> + * without blocking.
> + *
> + * Folios from unmovable mappings are not migratable.
> + *
> + * However, we can be racing with truncation, which can
> + * free the mapping that we need to check. Truncation
> + * holds the folio lock until after the folio is removed
> +		 * from the page cache so holding it ourselves is sufficient.
> + *
> + * To avoid locking the folio just to check unmovable,
> + * assume every unmovable folio is also unevictable,
> + * which is a cheaper test. If our assumption goes
> + * wrong, it's not a correctness bug, just potentially
> + * wasted cycles.
> */
> if (!folio_trylock(folio))
> goto isolate_fail_put;
>
> mapping = folio_mapping(folio);
> - migrate_dirty = !mapping ||
> - mapping->a_ops->migrate_folio;
> + if ((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty) {
> + migrate_dirty = !mapping ||
> + mapping->a_ops->migrate_folio;
> + }
> + is_unmovable = mapping && mapping_unmovable(mapping);
> folio_unlock(folio);
> - if (!migrate_dirty)
> + if (!migrate_dirty || is_unmovable)
> goto isolate_fail_put;
> }
>
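
For anyone reading the interleaved hunks above: after the patch, the isolation
check in isolate_migratepages_block() ends up looking roughly like this (my
own condensed sketch, declarations and the rest of the loop omitted):

	is_dirty = folio_test_dirty(folio);

	if (((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty) ||
	    (mapping && is_unevictable)) {
		if (!folio_trylock(folio))
			goto isolate_fail_put;

		/* Truncation cannot free the mapping while we hold the lock. */
		mapping = folio_mapping(folio);
		if ((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty)
			migrate_dirty = !mapping ||
					mapping->a_ops->migrate_folio;
		is_unmovable = mapping && mapping_unmovable(mapping);
		folio_unlock(folio);

		if (!migrate_dirty || is_unmovable)
			goto isolate_fail_put;
	}

So the folio lock is only taken for dirty folios under async migration or for
unevictable folios with a mapping, which is why the changelog requires
unmovable mappings to also be unevictable.
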
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2053b54556ca..ed874e43ecd7 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -956,6 +956,8 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
>
> if (!mapping)
> rc = migrate_folio(mapping, dst, src, mode);
> + else if (mapping_unmovable(mapping))
> + rc = -EOPNOTSUPP;
> else if (mapping->a_ops->migrate_folio)
> /*
> * Most folios have a mapping and most filesystems
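
One note on the mm/migrate.c hunk for completeness: with this change, any
attempt to migrate a folio backed by an unmovable mapping now fails in
move_to_new_folio() with -EOPNOTSUPP instead of reaching the ->migrate_folio
path. A caller that wants to avoid even attempting the migration could test
the flag up front, along these lines (illustrative sketch, the helper name is
made up):

	#include <linux/mm.h>
	#include <linux/pagemap.h>

	/* Caller must hold the folio lock so truncation cannot free mapping. */
	static bool example_folio_is_migratable(struct folio *folio)
	{
		struct address_space *mapping = folio_mapping(folio);

		return !mapping || !mapping_unmovable(mapping);
	}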