Message-ID: <d0ae7609-aca4-4497-9188-bb09e96e7768@gmail.com>
Date: Mon, 9 Dec 2024 18:35:13 +0100
From: Klara Modin <klarasmodin@...il.com>
To: Suren Baghdasaryan <surenb@...gle.com>, akpm@...ux-foundation.org
Cc: willy@...radead.org, liam.howlett@...cle.com, lorenzo.stoakes@...cle.com,
mhocko@...e.com, vbabka@...e.cz, hannes@...xchg.org, mjguzik@...il.com,
oliver.sang@...el.com, mgorman@...hsingularity.net, david@...hat.com,
peterx@...hat.com, oleg@...hat.com, dave@...olabs.net, paulmck@...nel.org,
brauner@...nel.org, dhowells@...hat.com, hdanton@...a.com, hughd@...gle.com,
minchan@...gle.com, jannh@...gle.com, shakeel.butt@...ux.dev,
souravpanda@...gle.com, pasha.tatashin@...een.com, corbet@....net,
linux-doc@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v5 4/6] mm: make vma cache SLAB_TYPESAFE_BY_RCU
Hi,
On 2024-12-06 23:52, Suren Baghdasaryan wrote:
> To enable SLAB_TYPESAFE_BY_RCU for the vma cache we need to ensure
> that a vma object reused before the RCU grace period is over is
> detected inside lock_vma_under_rcu().
> lock_vma_under_rcu() enters the RCU read section, finds the vma at the
> given address, locks the vma and checks whether it got detached or
> remapped to cover a different address range. These last checks ensure
> that the vma was not modified after we found it but before we locked
> it.
> vma reuse introduces several new possibilities:
> 1. vma can be reused after it was found but before it is locked;
> 2. vma can be reused and reinitialized (including changing its vm_mm)
> while being locked in vma_start_read();
> 3. vma can be reused and reinitialized after it was found but before
> it is locked, then attached at a new address or to a new mm while
> read-locked.
> For case #1, the current checks help detect when:
> - the vma was reused but not yet added into the tree (detached check);
> - the vma was reused at a different address range (address check).
> We are missing a check of vm_mm to ensure the reused vma was not
> attached to a different mm. This patch adds the missing check.
> For case #2, we pass mm to vma_start_read() to prevent access to the
> unstable vma->vm_mm. This might lead to vma_start_read() returning a
> false locked result, but that's not critical if it's rare, because it
> only leads to a retry under mmap_lock.
> For case #3, we enforce the order in which the vma->detached flag and
> vm_start/vm_end/vm_mm are set and checked: a vma gets attached only
> after vm_start/vm_end/vm_mm are set, and lock_vma_under_rcu() checks
> vma->detached before checking vm_start/vm_end/vm_mm. This is required
> because attaching a vma happens without the vma write-lock, as opposed
> to detaching, which requires the vma write-lock. This patch adds the
> memory barriers inside is_vma_detached() and vma_mark_attached() needed
> to order reads and writes of vma->detached vs vm_start/vm_end/vm_mm.
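> Schematically, the intended pairing is (simplified sketch of the
> changes below, not the exact code):
>
>	/* attach side (runs without the vma write-lock) */
>	vma->vm_mm = mm;
>	vma->vm_start = start;
>	vma->vm_end = end;
>	smp_wmb();			/* order field writes before flag */
>	WRITE_ONCE(vma->detached, false);
>
>	/* reader side, lock_vma_under_rcu() */
>	if (READ_ONCE(vma->detached))
>		goto bail;
>	smp_rmb();			/* order flag read before fields */
>	/* vm_mm/vm_start/vm_end can now be checked against stale reuse */
>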
> After these provisions, SLAB_TYPESAFE_BY_RCU is added to vm_area_cachep.
> This will facilitate vm_area_struct reuse and will minimize the number
> of call_rcu() calls.
This patch (85ad413389aec04cfaaba043caa8128b76c6e491 in next-20241209)
seems to cause an oops on a MIPS board of mine (Cavium Octeon III).
Abbreviated oops below, full version attached:
CPU 2 Unable to handle kernel paging request at virtual address
0000000000000000, epc == ffffffff813a85a0, ra == ffffffff81390438
Oops[#1]:
CPU: 2 UID: 0 PID: 1 Comm: init Not tainted
6.13.0-rc1-00162-g85ad413389ae #156
Call Trace:
unlink_anon_vmas (mm/rmap.c:408)
free_pgtables (mm/memory.c:393)
vms_clear_ptes (mm/vma.c:1143)
vms_complete_munmap_vmas (include/linux/mm.h:2737 mm/vma.c:1187)
do_vmi_align_munmap (mm/vma.c:1452)
__vm_munmap (mm/vma.c:2892)
sys_munmap (mm/mmap.c:1053)
syscall_common (arch/mips/kernel/scall64-n64.S:62)
I saw that there's already a report
(https://lore.kernel.org/all/202412082208.db1fb2c9-lkp@intel.com), but
maybe another arch can be useful for tracking this down.
Please let me know if there's anything else you need.
Regards,
Klara Modin
>
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> ---
>  include/linux/mm.h               |  36 +++++--
>  include/linux/mm_types.h         |  10 +-
>  include/linux/slab.h             |   6 --
>  kernel/fork.c                    | 157 +++++++++++++++++++++++++------
>  mm/memory.c                      |  15 ++-
>  mm/vma.c                         |   2 +-
>  tools/testing/vma/vma_internal.h |   7 +-
>  7 files changed, 179 insertions(+), 54 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2bf38c1e9cca..3568bcbc7c81 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -257,7 +257,7 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *);
> struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
> void vm_area_free(struct vm_area_struct *);
> /* Use only if VMA has no other users */
> -void __vm_area_free(struct vm_area_struct *vma);
> +void vm_area_free_unreachable(struct vm_area_struct *vma);
>
> #ifndef CONFIG_MMU
> extern struct rb_root nommu_region_tree;
> @@ -706,8 +706,10 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
> * Try to read-lock a vma. The function is allowed to occasionally yield false
> * locked result to avoid performance overhead, in which case we fall back to
> * using mmap_lock. The function should never yield false unlocked result.
> + * False locked result is possible if mm_lock_seq overflows or if vma gets
> + * reused and attached to a different mm before we lock it.
> */
> -static inline bool vma_start_read(struct vm_area_struct *vma)
> +static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
> {
> /*
> * Check before locking. A race might cause false locked result.
> @@ -716,7 +718,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> * we don't rely on for anything - the mm_lock_seq read against which we
> * need ordering is below.
> */
> - if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
> + if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
> return false;
>
> if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
> @@ -733,7 +735,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
> * after it has been unlocked.
> * This pairs with RELEASE semantics in vma_end_write_all().
> */
> - if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
> + if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
> up_read(&vma->vm_lock.lock);
> return false;
> }
> @@ -822,7 +824,15 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
>
> static inline void vma_mark_attached(struct vm_area_struct *vma)
> {
> - vma->detached = false;
> + /*
> + * This pairs with smp_rmb() inside is_vma_detached().
> + * vma is marked attached after all vma modifications are done and it
> + * got added into the vma tree. All prior vma modifications should be
> + * made visible before marking the vma attached.
> + */
> + smp_wmb();
> + /* This pairs with READ_ONCE() in is_vma_detached(). */
> + WRITE_ONCE(vma->detached, false);
> }
>
> static inline void vma_mark_detached(struct vm_area_struct *vma)
> @@ -834,7 +844,18 @@ static inline void vma_mark_detached(struct vm_area_struct *vma)
>
> static inline bool is_vma_detached(struct vm_area_struct *vma)
> {
> - return vma->detached;
> + bool detached;
> +
> + /* This pairs with WRITE_ONCE() in vma_mark_attached(). */
> + detached = READ_ONCE(vma->detached);
> + /*
> + * This pairs with smp_wmb() inside vma_mark_attached() to ensure
> + * vma->detached is read before vma attributes read later inside
> + * lock_vma_under_rcu().
> + */
> + smp_rmb();
> +
> + return detached;
> }
>
> static inline void release_fault_lock(struct vm_fault *vmf)
> @@ -859,7 +880,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> #else /* CONFIG_PER_VMA_LOCK */
>
> static inline void vma_lock_init(struct vm_area_struct *vma) {}
> -static inline bool vma_start_read(struct vm_area_struct *vma)
> +static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
> { return false; }
> static inline void vma_end_read(struct vm_area_struct *vma) {}
> static inline void vma_start_write(struct vm_area_struct *vma) {}
> @@ -893,6 +914,7 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
>
> extern const struct vm_operations_struct vma_dummy_vm_ops;
>
> +/* Use on VMAs not created using vm_area_alloc() */
> static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
> {
> memset(vma, 0, sizeof(*vma));
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index be3551654325..5d8779997266 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -543,6 +543,12 @@ static inline void *folio_get_private(struct folio *folio)
>
> typedef unsigned long vm_flags_t;
>
> +/*
> + * freeptr_t represents a SLUB freelist pointer, which might be encoded
> + * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> + */
> +typedef struct { unsigned long v; } freeptr_t;
> +
> /*
> * A region containing a mapping of a non-memory backed file under NOMMU
> * conditions. These are held in a global tree and are pinned by the VMAs that
> @@ -657,9 +663,7 @@ struct vm_area_struct {
> unsigned long vm_start;
> unsigned long vm_end;
> };
> -#ifdef CONFIG_PER_VMA_LOCK
> - struct rcu_head vm_rcu; /* Used for deferred freeing. */
> -#endif
> + freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */
> };
>
> /*
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 10a971c2bde3..681b685b6c4e 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -234,12 +234,6 @@ enum _slab_flag_bits {
> #define SLAB_NO_OBJ_EXT __SLAB_FLAG_UNUSED
> #endif
>
> -/*
> - * freeptr_t represents a SLUB freelist pointer, which might be encoded
> - * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> - */
> -typedef struct { unsigned long v; } freeptr_t;
> -
> /*
> * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
> *
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 71990f46aa4e..e7e76a660e4c 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -436,6 +436,98 @@ static struct kmem_cache *vm_area_cachep;
> /* SLAB cache for mm_struct structures (tsk->mm) */
> static struct kmem_cache *mm_cachep;
>
> +static void vm_area_ctor(void *data)
> +{
> + struct vm_area_struct *vma = (struct vm_area_struct *)data;
> +
> +#ifdef CONFIG_PER_VMA_LOCK
> + /* vma is not locked, can't use vma_mark_detached() */
> + vma->detached = true;
> +#endif
> + INIT_LIST_HEAD(&vma->anon_vma_chain);
> + vma_lock_init(vma);
> +}
> +
> +#ifdef CONFIG_PER_VMA_LOCK
> +
> +static void vma_clear(struct vm_area_struct *vma, struct mm_struct *mm)
> +{
> + vma->vm_mm = mm;
> + vma->vm_ops = &vma_dummy_vm_ops;
> + vma->vm_start = 0;
> + vma->vm_end = 0;
> + vma->anon_vma = NULL;
> + vma->vm_pgoff = 0;
> + vma->vm_file = NULL;
> + vma->vm_private_data = NULL;
> + vm_flags_init(vma, 0);
> + memset(&vma->vm_page_prot, 0, sizeof(vma->vm_page_prot));
> + memset(&vma->shared, 0, sizeof(vma->shared));
> + memset(&vma->vm_userfaultfd_ctx, 0, sizeof(vma->vm_userfaultfd_ctx));
> + vma_numab_state_init(vma);
> +#ifdef CONFIG_ANON_VMA_NAME
> + vma->anon_name = NULL;
> +#endif
> +#ifdef CONFIG_SWAP
> + memset(&vma->swap_readahead_info, 0, sizeof(vma->swap_readahead_info));
> +#endif
> +#ifndef CONFIG_MMU
> + vma->vm_region = NULL;
> +#endif
> +#ifdef CONFIG_NUMA
> + vma->vm_policy = NULL;
> +#endif
> +}
> +
> +static void vma_copy(const struct vm_area_struct *src, struct vm_area_struct *dest)
> +{
> + dest->vm_mm = src->vm_mm;
> + dest->vm_ops = src->vm_ops;
> + dest->vm_start = src->vm_start;
> + dest->vm_end = src->vm_end;
> + dest->anon_vma = src->anon_vma;
> + dest->vm_pgoff = src->vm_pgoff;
> + dest->vm_file = src->vm_file;
> + dest->vm_private_data = src->vm_private_data;
> + vm_flags_init(dest, src->vm_flags);
> + memcpy(&dest->vm_page_prot, &src->vm_page_prot,
> + sizeof(dest->vm_page_prot));
> + memcpy(&dest->shared, &src->shared, sizeof(dest->shared));
> + memcpy(&dest->vm_userfaultfd_ctx, &src->vm_userfaultfd_ctx,
> + sizeof(dest->vm_userfaultfd_ctx));
> +#ifdef CONFIG_ANON_VMA_NAME
> + dest->anon_name = src->anon_name;
> +#endif
> +#ifdef CONFIG_SWAP
> + memcpy(&dest->swap_readahead_info, &src->swap_readahead_info,
> + sizeof(dest->swap_readahead_info));
> +#endif
> +#ifndef CONFIG_MMU
> + dest->vm_region = src->vm_region;
> +#endif
> +#ifdef CONFIG_NUMA
> + dest->vm_policy = src->vm_policy;
> +#endif
> +}
> +
> +#else /* CONFIG_PER_VMA_LOCK */
> +
> +static void vma_clear(struct vm_area_struct *vma, struct mm_struct *mm)
> +{
> + vma_init(vma, mm);
> +}
> +
> +static void vma_copy(const struct vm_area_struct *src, struct vm_area_struct *dest)
> +{
> + /*
> + * orig->shared.rb may be modified concurrently, but the clone
> + * will be reinitialized.
> + */
> + data_race(memcpy(dest, src, sizeof(*dest)));
> +}
> +
> +#endif /* CONFIG_PER_VMA_LOCK */
> +
> struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> {
> struct vm_area_struct *vma;
> @@ -444,7 +536,7 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> if (!vma)
> return NULL;
>
> - vma_init(vma, mm);
> + vma_clear(vma, mm);
>
> return vma;
> }
> @@ -458,49 +550,46 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
>
> ASSERT_EXCLUSIVE_WRITER(orig->vm_flags);
> ASSERT_EXCLUSIVE_WRITER(orig->vm_file);
> - /*
> - * orig->shared.rb may be modified concurrently, but the clone
> - * will be reinitialized.
> - */
> - data_race(memcpy(new, orig, sizeof(*new)));
> - vma_lock_init(new);
> - INIT_LIST_HEAD(&new->anon_vma_chain);
> -#ifdef CONFIG_PER_VMA_LOCK
> - /* vma is not locked, can't use vma_mark_detached() */
> - new->detached = true;
> -#endif
> + vma_copy(orig, new);
> vma_numab_state_init(new);
> dup_anon_vma_name(orig, new);
>
> return new;
> }
>
> -void __vm_area_free(struct vm_area_struct *vma)
> +static void __vm_area_free(struct vm_area_struct *vma, bool unreachable)
> {
> +#ifdef CONFIG_PER_VMA_LOCK
> + /*
> + * With SLAB_TYPESAFE_BY_RCU, vma can be reused and we need
> + * vma->detached to be set before vma is returned into the cache.
> + * This way reused object won't be used by readers until it's
> + * initialized and reattached.
> + * If vma is unreachable, there can be no other users and we
> + * can set vma->detached directly with no risk of a race.
> + * If vma is reachable, then it should have been already detached
> + * under vma write-lock or it was never attached.
> + */
> + if (unreachable)
> + vma->detached = true;
> + else
> + VM_BUG_ON_VMA(!is_vma_detached(vma), vma);
> + vma->vm_lock_seq = UINT_MAX;
> +#endif
> + VM_BUG_ON_VMA(!list_empty(&vma->anon_vma_chain), vma);
> vma_numab_state_free(vma);
> free_anon_vma_name(vma);
> kmem_cache_free(vm_area_cachep, vma);
> }
>
> -#ifdef CONFIG_PER_VMA_LOCK
> -static void vm_area_free_rcu_cb(struct rcu_head *head)
> +void vm_area_free(struct vm_area_struct *vma)
> {
> - struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
> - vm_rcu);
> -
> - /* The vma should not be locked while being destroyed. */
> - VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
> - __vm_area_free(vma);
> + __vm_area_free(vma, false);
> }
> -#endif
>
> -void vm_area_free(struct vm_area_struct *vma)
> +void vm_area_free_unreachable(struct vm_area_struct *vma)
> {
> -#ifdef CONFIG_PER_VMA_LOCK
> - call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb);
> -#else
> - __vm_area_free(vma);
> -#endif
> + __vm_area_free(vma, true);
> }
>
> static void account_kernel_stack(struct task_struct *tsk, int account)
> @@ -3141,6 +3230,12 @@ void __init mm_cache_init(void)
>
> void __init proc_caches_init(void)
> {
> + struct kmem_cache_args args = {
> + .use_freeptr_offset = true,
> + .freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
> + .ctor = vm_area_ctor,
> + };
> +
> sighand_cachep = kmem_cache_create("sighand_cache",
> sizeof(struct sighand_struct), 0,
> SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
> @@ -3157,9 +3252,11 @@ void __init proc_caches_init(void)
> sizeof(struct fs_struct), 0,
> SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
> NULL);
> - vm_area_cachep = KMEM_CACHE(vm_area_struct,
> - SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
> + vm_area_cachep = kmem_cache_create("vm_area_struct",
> + sizeof(struct vm_area_struct), &args,
> + SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
> SLAB_ACCOUNT);
> +
> mmap_init();
> nsproxy_cache_init();
> }
> diff --git a/mm/memory.c b/mm/memory.c
> index b252f19b28c9..6f4d4d423835 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -6368,10 +6368,16 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> if (!vma)
> goto inval;
>
> - if (!vma_start_read(vma))
> + if (!vma_start_read(mm, vma))
> goto inval;
>
> - /* Check if the VMA got isolated after we found it */
> + /*
> + * Check if the VMA got isolated after we found it.
> + * Note: vma we found could have been recycled and is being reattached.
> + * It's possible to attach a vma while it is read-locked, however a
> + * read-locked vma can't be detached (detaching requires write-locking).
> + * Therefore if this check passes, we have an attached and stable vma.
> + */
> if (is_vma_detached(vma)) {
> vma_end_read(vma);
> count_vm_vma_lock_event(VMA_LOCK_MISS);
> @@ -6385,8 +6391,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> * fields are accessible for RCU readers.
> */
>
> - /* Check since vm_start/vm_end might change before we lock the VMA */
> - if (unlikely(address < vma->vm_start || address >= vma->vm_end))
> + /* Check if the vma we locked is the right one. */
> + if (unlikely(vma->vm_mm != mm ||
> + address < vma->vm_start || address >= vma->vm_end))
> goto inval_end_read;
>
> rcu_read_unlock();
> diff --git a/mm/vma.c b/mm/vma.c
> index cdc63728f47f..648784416833 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -414,7 +414,7 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
> fput(vma->vm_file);
> mpol_put(vma_policy(vma));
> if (unreachable)
> - __vm_area_free(vma);
> + vm_area_free_unreachable(vma);
> else
> vm_area_free(vma);
> }
> diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
> index 0cdc5f8c3d60..3eeb1317cc69 100644
> --- a/tools/testing/vma/vma_internal.h
> +++ b/tools/testing/vma/vma_internal.h
> @@ -685,14 +685,15 @@ static inline void mpol_put(struct mempolicy *)
> {
> }
>
> -static inline void __vm_area_free(struct vm_area_struct *vma)
> +static inline void vm_area_free(struct vm_area_struct *vma)
> {
> free(vma);
> }
>
> -static inline void vm_area_free(struct vm_area_struct *vma)
> +static inline void vm_area_free_unreachable(struct vm_area_struct *vma)
> {
> - __vm_area_free(vma);
> + vma->detached = true;
> + vm_area_free(vma);
> }
>
> static inline void lru_add_drain(void)
Download attachment "config.gz" of type "application/gzip" (24042 bytes)
View attachment "oops-bisect" of type "text/plain" (3245 bytes)
Download attachment "oops-decoded.gz" of type "application/gzip" (7128 bytes)