Message-Id: <xr93lfehl8al.fsf@gthelen.svl.corp.google.com>
Date:   Tue, 01 Dec 2020 09:56:18 -0800
From:   Greg Thelen <gthelen@...gle.com>
To:     Axel Rasmussen <axelrasmussen@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Chinwen Chang <chinwen.chang@...iatek.com>,
        Daniel Jordan <daniel.m.jordan@...cle.com>,
        David Rientjes <rientjes@...gle.com>,
        Davidlohr Bueso <dbueso@...e.de>,
        Ingo Molnar <mingo@...hat.com>, Jann Horn <jannh@...gle.com>,
        Laurent Dufour <ldufour@...ux.ibm.com>,
        Michel Lespinasse <walken@...gle.com>,
        Stephen Rothwell <sfr@...b.auug.org.au>,
        Steven Rostedt <rostedt@...dmis.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Yafang Shao <laoar.shao@...il.com>,
        "David S . Miller" <davem@...emloft.net>, dsahern@...nel.org,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Jakub Kicinski <kuba@...nel.org>, liuhangbin@...il.com,
        Tejun Heo <tj@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: mmap_lock: fix use-after-free race and css ref leak
 in tracepoints

Axel Rasmussen <axelrasmussen@...gle.com> wrote:

> On Mon, Nov 30, 2020 at 5:34 PM Shakeel Butt <shakeelb@...gle.com> wrote:
>>
>> On Mon, Nov 30, 2020 at 3:43 PM Axel Rasmussen <axelrasmussen@...gle.com> wrote:
>> >
>> > syzbot reported[1] a use-after-free introduced in 0f818c4bc1f3. The bug
>> > is that an ongoing trace event might race with the tracepoint being
>> > disabled (and therefore the _unreg() callback being called). Consider
>> > this ordering:
>> >
>> > T1: trace event fires, get_mm_memcg_path() is called
>> > T1: get_memcg_path_buf() returns a buffer pointer
>> > T2: trace_mmap_lock_unreg() is called, buffers are freed
>> > T1: cgroup_path() is called with the now-freed buffer
>>
>> Any reason to use cgroup_path instead of cgroup_ino? There are
>> other examples of tracepoints using cgroup_ino, with no need to
>> allocate buffers. Also, cgroup namespaces might complicate the path
>> usage.
>
> Hmm, so in general I would love to use a numeric identifier instead of a string.
>
> I did some reading, and it looks like cgroup_ino() mainly has to
> do with writeback, rather than being a general identifier?
> https://www.kernel.org/doc/Documentation/cgroup-v2.txt
>
> There is cgroup_id() which I think is almost what I'd want, but there
> are a couple problems with it:
>
> - I don't know of a way for userspace to translate IDs -> paths, to
> make them human readable?

The id => name map can be built from user space with a tree walk.
Example:

$ find /sys/fs/cgroup/memory -type d -printf '%i %P\n'
20387 init.scope
31 system.slice
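
If a tool wants the same id => name map programmatically rather than via
find, a minimal sketch of the walk in userspace C (just an illustration
using nftw(3), not anything the kernel itself provides) could be:

#define _XOPEN_SOURCE 700
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

/*
 * Print "<inode> <path>" for each directory in the hierarchy, i.e. the
 * same id => name map as the find command above (full paths rather than
 * find's relative %P).
 */
static int visit(const char *fpath, const struct stat *sb,
		 int typeflag, struct FTW *ftwbuf)
{
	if (typeflag == FTW_D)
		printf("%llu %s\n", (unsigned long long)sb->st_ino, fpath);
	return 0;
}

int main(void)
{
	/* Walk the memory controller hierarchy; adjust the mount point as needed. */
	return nftw("/sys/fs/cgroup/memory", visit, 16, FTW_PHYS);
}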

> - Also I think the ID implementation we use for this is "dense",
> meaning if a cgroup is removed, its ID is likely to be quickly reused.
>
>>
>> >
>> > The solution in this commit is to modify trace_mmap_lock_unreg() to
>> > first stop new buffers from being handed out, and then to wait (spin)
>> > until any existing buffer references are dropped (i.e., those trace
>> > events complete).
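
For readers skimming the diff below, the shape of that scheme in isolation
is roughly the following (a standalone userspace sketch using C11 atomics,
an illustration only, not the kernel code):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool unreg_started;
static atomic_int inflight = 1;	/* the +1 owned by registration */

/* Trace-event path: take a reference, but only while the count is nonzero. */
static bool get_buf_ref(void)
{
	int old;

	if (atomic_load(&unreg_started))
		return false;
	old = atomic_load(&inflight);
	do {
		if (old == 0)
			return false;	/* already drained; refuse new users */
	} while (!atomic_compare_exchange_weak(&inflight, &old, old + 1));
	return true;
}

static void put_buf_ref(void)
{
	atomic_fetch_sub(&inflight, 1);
}

/*
 * Unregistration: refuse new users, drop the registration +1, then spin
 * until all in-flight users are gone before freeing the buffers.
 */
static void unreg(void)
{
	atomic_store(&unreg_started, true);
	atomic_fetch_sub(&inflight, 1);
	while (atomic_load(&inflight))
		;
	/* safe to free the buffers here */
}

int main(void)
{
	if (get_buf_ref())
		put_buf_ref();
	unreg();
	return 0;
}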
>> >
>> > I have a simple reproducer program which spins up two pools of threads,
>> > doing the following in a tight loop:
>> >
>> >   Pool 1:
>> >   mmap(NULL, 4096, PROT_READ | PROT_WRITE,
>> >        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
>> >   munmap()
>> >
>> >   Pool 2:
>> >   echo 1 > /sys/kernel/debug/tracing/events/mmap_lock/enable
>> >   echo 0 > /sys/kernel/debug/tracing/events/mmap_lock/enable
>> >
>> > This triggers the use-after-free very quickly. With this patch, I let it
>> > run for an hour without any BUGs.
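
For reference, a rough reconstruction of such a reproducer (not the actual
program, just a sketch assembled from the description above) might be:

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define ENABLE "/sys/kernel/debug/tracing/events/mmap_lock/enable"

/* Pool 1: map and unmap a page in a tight loop, firing the trace events. */
static void *map_loop(void *arg)
{
	for (;;) {
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p != MAP_FAILED)
			munmap(p, 4096);
	}
	return NULL;
}

/* Pool 2: toggle the mmap_lock trace events on and off, like the echos. */
static void *toggle_loop(void *arg)
{
	for (;;) {
		int fd = open(ENABLE, O_WRONLY);

		if (fd < 0)
			exit(1);
		write(fd, "1", 1);
		write(fd, "0", 1);
		close(fd);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	int i;

	/* Thread counts are arbitrary; a handful of each is enough. */
	for (i = 0; i < 4; i++)
		pthread_create(&t, NULL, map_loop, NULL);
	for (i = 0; i < 4; i++)
		pthread_create(&t, NULL, toggle_loop, NULL);
	pause();
	return 0;
}

Build with something like "gcc -O2 -pthread repro.c" and run as root so the
tracefs writes succeed.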
>> >
>> > While fixing this, I also noticed and fixed a css ref leak. Previously
>> > we called get_mem_cgroup_from_mm(), but we never called css_put() to
>> > release that reference. get_mm_memcg_path() now does this properly.
>> >
>> > [1]: https://syzkaller.appspot.com/bug?extid=19e6dd9943972fa1c58a
>> >
>> > Fixes: 0f818c4bc1f3 ("mm: mmap_lock: add tracepoints around lock acquisition")
>>
>> The original patch is in the mm tree, so its SHA1 is not yet stable.
>> Usually Andrew squashes the fixes into the original patches.
>
> Ah, I added this because it also shows up in linux-next, under the
> next-20201130 tag. I'll remove it in v2; squashing is fine. :)
>
>>
>> > Signed-off-by: Axel Rasmussen <axelrasmussen@...gle.com>
>> > ---
>> >  mm/mmap_lock.c | 100 +++++++++++++++++++++++++++++++++++++++++--------
>> >  1 file changed, 85 insertions(+), 15 deletions(-)
>> >
>> > diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
>> > index 12af8f1b8a14..be38dc58278b 100644
>> > --- a/mm/mmap_lock.c
>> > +++ b/mm/mmap_lock.c
>> > @@ -3,6 +3,7 @@
>> >  #include <trace/events/mmap_lock.h>
>> >
>> >  #include <linux/mm.h>
>> > +#include <linux/atomic.h>
>> >  #include <linux/cgroup.h>
>> >  #include <linux/memcontrol.h>
>> >  #include <linux/mmap_lock.h>
>> > @@ -18,13 +19,28 @@ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_released);
>> >  #ifdef CONFIG_MEMCG
>> >
>> >  /*
>> > - * Our various events all share the same buffer (because we don't want or need
>> > - * to allocate a set of buffers *per event type*), so we need to protect against
>> > - * concurrent _reg() and _unreg() calls, and count how many _reg() calls have
>> > - * been made.
>> > + * This is unfortunately complicated... _reg() and _unreg() may be called
>> > + * in parallel, separately for each of our three event types. To save memory,
>> > + * all of the event types share the same buffers. Furthermore, trace events
>> > + * might happen in parallel with _unreg(); we need to ensure we don't free the
>> > + * buffers before all inflights have finished. Because these events happen
>> > + * "frequently", we also want to prevent new inflights from starting once the
>> > + * _unreg() process begins. And, for performance reasons, we want to avoid any
>> > + * locking in the trace event path.
>> > + *
>> > + * So:
>> > + *
>> > + * - Use a spinlock to serialize _reg() and _unreg() calls.
>> > + * - Keep track of nested _reg() calls with a lock-protected counter.
>> > + * - Define a flag indicating whether or not unregistration has begun (and
>> > + *   therefore that there should be no new buffer uses going forward).
>> > + * - Keep track of inflight buffer users with a reference count.
>> >   */
>> >  static DEFINE_SPINLOCK(reg_lock);
>> > -static int reg_refcount;
>> > +static int reg_types_rc; /* Protected by reg_lock. */
>> > +static bool unreg_started; /* Doesn't need synchronization. */
>> > +/* atomic_t instead of refcount_t, as we want ordered inc without locks. */
>> > +static atomic_t inflight_rc = ATOMIC_INIT(0);
>> >
>> >  /*
>> >   * Size of the buffer for memcg path names. Ignoring stack trace support,
>> > @@ -46,9 +62,14 @@ int trace_mmap_lock_reg(void)
>> >         unsigned long flags;
>> >         int cpu;
>> >
>> > +       /*
>> > +        * Serialize _reg() and _unreg(). Without this, e.g. _unreg() might
>> > +        * start cleaning up while _reg() is only partially completed.
>> > +        */
>> >         spin_lock_irqsave(&reg_lock, flags);
>> >
>> > -       if (reg_refcount++)
>> > +       /* If the refcount is going 0->1, proceed with allocating buffers. */
>> > +       if (reg_types_rc++)
>> >                 goto out;
>> >
>> >         for_each_possible_cpu(cpu) {
>> > @@ -62,6 +83,11 @@ int trace_mmap_lock_reg(void)
>> >                 per_cpu(memcg_path_buf_idx, cpu) = 0;
>> >         }
>> >
>> > +       /* Reset unreg_started flag, allowing new trace events. */
>> > +       WRITE_ONCE(unreg_started, false);
>> > +       /* Add the registration +1 to the inflight refcount. */
>> > +       atomic_inc(&inflight_rc);
>> > +
>> >  out:
>> >         spin_unlock_irqrestore(&reg_lock, flags);
>> >         return 0;
>> > @@ -74,7 +100,8 @@ int trace_mmap_lock_reg(void)
>> >                         break;
>> >         }
>> >
>> > -       --reg_refcount;
>> > +       /* Since we failed, undo the earlier increment. */
>> > +       --reg_types_rc;
>> >
>> >         spin_unlock_irqrestore(&reg_lock, flags);
>> >         return -ENOMEM;
>> > @@ -87,9 +114,23 @@ void trace_mmap_lock_unreg(void)
>> >
>> >         spin_lock_irqsave(&reg_lock, flags);
>> >
>> > -       if (--reg_refcount)
>> > +       /* If the refcount is going 1->0, proceed with freeing buffers. */
>> > +       if (--reg_types_rc)
>> >                 goto out;
>> >
>> > +       /* This was the last registration; start preventing new events... */
>> > +       WRITE_ONCE(unreg_started, true);
>> > +       /* Remove the registration +1 from the inflight refcount. */
>> > +       atomic_dec(&inflight_rc);
>> > +       /*
>> > +        * Wait for inflight refcount to be zero (all inflights stopped). Since
>> > +        * we have a spinlock we can't sleep, so just spin. Because trace events
>> > +        * are "fast", and because we stop new inflights from starting at this
>> > +        * point with unreg_started, this should be a short spin.
>> > +        */
>> > +       while (atomic_read(&inflight_rc))
>> > +               barrier();
>> > +
>> >         for_each_possible_cpu(cpu) {
>> >                 kfree(per_cpu(memcg_path_buf, cpu));
>> >         }
>> > @@ -102,6 +143,20 @@ static inline char *get_memcg_path_buf(void)
>> >  {
>> >         int idx;
>> >
>> > +       /*
>> > +        * If unregistration is happening, stop. Yes, this check is racy;
>> > +        * that's fine. It just means _unreg() might spin waiting for an extra
>> > +        * event or two. Use-after-free is actually prevented by the refcount.
>> > +        */
>> > +       if (READ_ONCE(unreg_started))
>> > +               return NULL;
>> > +       /*
>> > +        * Take a reference, unless the registration +1 has been released
>> > +        * and there aren't already existing inflights (refcount is zero).
>> > +        */
>> > +       if (!atomic_inc_not_zero(&inflight_rc))
>> > +               return NULL;
>> > +
>> >         idx = this_cpu_add_return(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE) -
>> >               MEMCG_PATH_BUF_SIZE;
>> >         return &this_cpu_read(memcg_path_buf)[idx];
>> > @@ -110,27 +165,42 @@ static inline char *get_memcg_path_buf(void)
>> >  static inline void put_memcg_path_buf(void)
>> >  {
>> >         this_cpu_sub(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE);
>> > +       /* We're done with this buffer; drop the reference. */
>> > +       atomic_dec(&inflight_rc);
>> >  }
>> >
>> >  /*
>> >   * Write the given mm_struct's memcg path to a percpu buffer, and return a
>> > - * pointer to it. If the path cannot be determined, NULL is returned.
>> > + * pointer to it. If the path cannot be determined, or no buffer was available
>> > + * (because the trace event is being unregistered), NULL is returned.
>> >   *
>> >   * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
>> >   * disabled by the caller before calling us, and re-enabled only after the
>> >   * caller is done with the pointer.
>> > + *
>> > + * The caller must call put_memcg_path_buf() once the buffer is no longer
>> > + * needed. This must be done while preemption is still disabled.
>> >   */
>> >  static const char *get_mm_memcg_path(struct mm_struct *mm)
>> >  {
>> > +       char *buf = NULL;
>> >         struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
>> >
>> > -       if (memcg != NULL && likely(memcg->css.cgroup != NULL)) {
>> > -               char *buf = get_memcg_path_buf();
>> > +       if (memcg == NULL)
>> > +               goto out;
>> > +       if (unlikely(memcg->css.cgroup == NULL))
>> > +               goto out_put;
>> >
>> > -               cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
>> > -               return buf;
>> > -       }
>> > -       return NULL;
>> > +       buf = get_memcg_path_buf();
>> > +       if (buf == NULL)
>> > +               goto out_put;
>> > +
>> > +       cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
>> > +
>> > +out_put:
>> > +       css_put(&memcg->css);
>> > +out:
>> > +       return buf;
>> >  }
>> >
>> >  #define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                                   \
>> > --
>> > 2.29.2.454.gaff20da3a2-goog
>> >
