Message-ID: <CAJuCfpHnSEbs9y23jz30okuHZXLXBExWc_ZLvOgtwL3EMab=Ng@mail.gmail.com>
Date:   Mon, 26 Jun 2023 20:51:04 +0000
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     akpm@...ux-foundation.org
Cc:     willy@...radead.org, torvalds@...uxfoundation.org,
        vegard.nossum@...cle.com, mpe@...erman.id.au,
        Liam.Howlett@...cle.com, lrh2000@....edu.cn, mgorman@...e.de,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        kernel-team@...roid.com
Subject: Re: [PATCH 1/3] mm: change vma_start_read to fail if VMA got detached
 from under it

On Tue, Jun 20, 2023 at 11:57 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> The current implementation of vma_start_read() checks whether the VMA
> is write-locked before taking vma->vm_lock, and then repeats the same
> check afterwards. This mechanism fails to detect the case where the
> VMA gets write-locked, modified and unlocked after the first check but
> before vma->vm_lock is obtained. While this is not strictly a
> correctness problem (vma_start_read() would not produce a false
> unlocked result), it allows vma_start_read() to successfully lock a
> VMA that was detached from the VMA tree while vma_start_read() was
> locking it. The new condition checks for any change in
> vma->vm_lock_seq after we obtain vma->vm_lock, and causes
> vma_start_read() to fail if the above race occurs.
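
To make the race easier to reason about, here is a minimal userspace
model of the fixed check (an illustrative sketch only: the struct
names, the pthread rwlock and __atomic_load_n() are stand-ins for the
kernel's vm_area_struct, vm_lock and READ_ONCE(), not the real API):

#include <pthread.h>
#include <stdbool.h>

struct mm_model  { int mm_lock_seq; };

struct vma_model {
	struct mm_model *mm;
	int vm_lock_seq;
	pthread_rwlock_t lock;
};

/* Writer side: a VMA is write-locked while the sequence numbers match. */
static void model_start_write(struct vma_model *vma)
{
	pthread_rwlock_wrlock(&vma->lock);
	vma->vm_lock_seq = vma->mm->mm_lock_seq;
	pthread_rwlock_unlock(&vma->lock);
}

/* Reader side: snapshot vm_lock_seq first, recheck it after locking. */
static bool model_start_read(struct vma_model *vma)
{
	int seq = __atomic_load_n(&vma->vm_lock_seq, __ATOMIC_RELAXED);

	/* VMA looks write-locked: bail out before touching the lock. */
	if (seq == __atomic_load_n(&vma->mm->mm_lock_seq, __ATOMIC_RELAXED))
		return false;

	if (pthread_rwlock_tryrdlock(&vma->lock) != 0)
		return false;

	/* Any writer that ran in between changed vm_lock_seq: fail. */
	if (seq != __atomic_load_n(&vma->vm_lock_seq, __ATOMIC_RELAXED)) {
		pthread_rwlock_unlock(&vma->lock);
		return false;
	}
	return true;
}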

Just a friendly ping for feedback.
I know most people were busy fixing the stack expansion problem and
these patches are not urgent, so no rush. If nobody reviews them, I'll
ping again next week.

>
> Fixes: 5e31275cc997 ("mm: add per-VMA lock and helper functions to control it")
> Suggested-by: Matthew Wilcox <willy@...radead.org>
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> ---
>  include/linux/mm.h | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 27ce77080c79..8410da79c570 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -639,23 +639,24 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
>   */
>  static inline bool vma_start_read(struct vm_area_struct *vma)
>  {
> -       /* Check before locking. A race might cause false locked result. */
> -       if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
> +       int vm_lock_seq = READ_ONCE(vma->vm_lock_seq);
> +
> +       /*
> +        * Check if VMA is locked before taking vma->vm_lock. A race or
> +        * mm_lock_seq overflow might cause false locked result.
> +        */
> +       if (vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
>                 return false;
>
>         if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
>                 return false;
>
> -       /*
> -        * Overflow might produce false locked result.
> -        * False unlocked result is impossible because we modify and check
> -        * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
> -        * modification invalidates all existing locks.
> -        */
> -       if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
> +       /* Fail if VMA was write-locked after we checked it earlier */
> +       if (unlikely(vm_lock_seq != READ_ONCE(vma->vm_lock_seq))) {
>                 up_read(&vma->vm_lock->lock);
>                 return false;
>         }
> +
>         return true;
>  }
>
> --
> 2.41.0.162.gfafddb0af9-goog
>
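
To spell out why the new check catches the race while the old one did
not, here is a sequential walk-through using the model above (the
interleaving is an assumption for illustration; in the kernel the
writer would be a concurrent munmap-style path, and the mm_lock_seq
increment models the effect of mmap_write_unlock()):

#include <assert.h>

int main(void)
{
	struct mm_model mm = { .mm_lock_seq = 1 };
	struct vma_model vma = {
		.mm = &mm,
		.vm_lock_seq = 0,
		.lock = PTHREAD_RWLOCK_INITIALIZER,
	};

	/* Reader's first check: 0 != 1, so the VMA looks unlocked. */
	int seq = vma.vm_lock_seq;

	/* Writer sneaks in: write-lock, detach the VMA, write-unlock. */
	model_start_write(&vma);	/* vm_lock_seq becomes 1 */
	mm.mm_lock_seq++;		/* models mmap_write_unlock() */

	/*
	 * Old recheck (vm_lock_seq == mm_lock_seq): 1 != 2, so the old
	 * code would lock the now-detached VMA (the bug).
	 */
	assert(vma.vm_lock_seq != mm.mm_lock_seq);

	/* New recheck (snapshot changed): 0 != 1, correctly fails. */
	assert(seq != vma.vm_lock_seq);
	return 0;
}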
