Date:   Tue, 23 Nov 2021 10:33:41 +0100
From:   Marco Elver <elver@...gle.com>
To:     Huang Ying <ying.huang@...el.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        syzbot+aa5bebed695edaccf0df@...kaller.appspotmail.com,
        Nadav Amit <namit@...are.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>
Subject: Re: [PATCH] mm/rmap: fix potential batched TLB flush race

On Tue, 23 Nov 2021 at 08:44, Huang Ying <ying.huang@...el.com> wrote:
>
> In theory, the following race is possible for batched TLB flushing.
>
> CPU0                               CPU1
> ----                               ----
> shrink_page_list()
>                                    unmap
>                                      zap_pte_range()
>                                        flush_tlb_batched_pending()
>                                          flush_tlb_mm()
>   try_to_unmap()
>     set_tlb_ubc_flush_pending()
>       mm->tlb_flush_batched = true
>                                          mm->tlb_flush_batched = false
>
> After the TLB is flushed on CPU1 via flush_tlb_mm() and before
> mm->tlb_flush_batched is set to false, some PTE is unmapped on CPU0
> and its TLB flush is batched, i.e. left pending.  That pending TLB
> flush is then lost.  Although both set_tlb_ubc_flush_pending() and
> flush_tlb_batched_pending() are called with the PTL held, different
> PTL instances may be used, so the lock does not prevent this race.
>
> Because the race window is very small, and the lost TLB flush causes
> a problem only if a TLB entry is inserted before the unmapping within
> that window, the race is only theoretical.  But the fix is simple and
> cheap too.

Thanks for fixing this!

> Syzbot has reported this too as follows,
>
> ==================================================================
> BUG: KCSAN: data-race in flush_tlb_batched_pending / try_to_unmap_one
[...]
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index c3a6e6209600..789778067db9 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -632,7 +632,7 @@ struct mm_struct {
>                 atomic_t tlb_flush_pending;
>  #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
>                 /* See flush_tlb_batched_pending() */
> -               bool tlb_flush_batched;
> +               atomic_t tlb_flush_batched;
>  #endif
>                 struct uprobes_state uprobes_state;
>  #ifdef CONFIG_PREEMPT_RT
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 163ac4e6bcee..60902c3cfb4a 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -633,7 +633,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>          * before the PTE is cleared.
>          */
>         barrier();
> -       mm->tlb_flush_batched = true;
> +       atomic_inc(&mm->tlb_flush_batched);

The use of barrier() together with the atomic needs some clarification.
Is there a requirement that the CPU also doesn't reorder anything after
this atomic_inc() (which is unordered)? I.e., should this be
atomic_inc_return_release(), with the barrier() removed?
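
For reference, a minimal sketch of the alternative shape being asked
about here (purely illustrative, not part of the posted patch): a
release increment orders everything before it in program order ahead of
the counter update at both the compiler and the CPU level, so the
separate barrier() would no longer be needed:

	/*
	 * Illustrative sketch only: release semantics keep the
	 * preceding PTE manipulation from being reordered past the
	 * point where the batched flush becomes visible to readers
	 * of mm->tlb_flush_batched.
	 */
	atomic_inc_return_release(&mm->tlb_flush_batched);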

>         /*
>          * If the PTE was dirty then it's best to assume it's writable. The
> @@ -680,15 +680,16 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>   */
>  void flush_tlb_batched_pending(struct mm_struct *mm)
>  {
> -       if (data_race(mm->tlb_flush_batched)) {
> -               flush_tlb_mm(mm);
> +       int batched = atomic_read(&mm->tlb_flush_batched);
>
> +       if (batched) {
> +               flush_tlb_mm(mm);
>                 /*
> -                * Do not allow the compiler to re-order the clearing of
> -                * tlb_flush_batched before the tlb is flushed.
> +                * If a new TLB flush was batched while we were
> +                * flushing, leave mm->tlb_flush_batched as is to
> +                * avoid losing that flush.
>                  */
> -               barrier();
> -               mm->tlb_flush_batched = false;
> +               atomic_cmpxchg(&mm->tlb_flush_batched, batched, 0);
>         }
>  }
>  #else
> --
> 2.30.2
>
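
To make the intent of the counter plus cmpxchg concrete, below is a
rough userspace model (purely illustrative, using C11 atomics instead
of the kernel API, with hypothetical names): the flushing side takes a
snapshot of the counter, flushes, and only resets the counter if no new
flush was batched in the meantime.  With the old plain bool, the
unconditional write of false could wipe out a flag set concurrently by
another CPU and so lose a pending flush.

#include <stdatomic.h>

/* Illustrative stand-in for mm->tlb_flush_batched. */
static atomic_int tlb_flush_batched;

/* Models set_tlb_ubc_flush_pending(): another flush is batched. */
static void batch_flush(void)
{
	atomic_fetch_add(&tlb_flush_batched, 1);
}

/*
 * Models flush_tlb_batched_pending(): flush, then clear the counter
 * only if it still holds the value seen before the flush.  If a flush
 * was batched concurrently, the compare fails and the pending work is
 * kept for the next caller.
 */
static void flush_pending(void)
{
	int batched = atomic_load(&tlb_flush_batched);

	if (batched) {
		/* flush_tlb_mm(mm) would run here. */
		atomic_compare_exchange_strong(&tlb_flush_batched,
					       &batched, 0);
	}
}

int main(void)
{
	batch_flush();
	flush_pending();	/* counter goes back to 0 */
	return 0;
}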
