Open Source and information security mailing list archives
Date:   Tue, 15 Sep 2020 17:35:20 +0200 (CEST)
From:   Michael Petlan <mpetlan@...hat.com>
To:     Jiri Olsa <jolsa@...hat.com>
cc:     Namhyung Kim <namhyung@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        lkml <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Wade Mealing <wmealing@...hat.com>
Subject: Re: [PATCH] perf: Fix race in perf_mmap_close function

On Mon, 14 Sep 2020, Jiri Olsa wrote:
> On Mon, Sep 14, 2020 at 09:48:43PM +0900, Namhyung Kim wrote:
> > On Fri, Sep 11, 2020 at 4:49 PM Jiri Olsa <jolsa@...hat.com> wrote:
> > > ugh, that's right.. how about change below
> > 
> > Acked-by: Namhyung Kim <namhyung@...nel.org>
> 
> thanks, I'll send full patch after we're done testing this
> 
> jirka

I've tested the change and it seems OK to me.

Tested-by: Michael Petlan <mpetlan@...hat.com>

> 
> > 
> > Thanks
> > Namhyung
> > 
> > 
> > >
> > > jirka
> > >
> > >
> > > ---
> > > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > > index 7ed5248f0445..8ab2400aef55 100644
> > > --- a/kernel/events/core.c
> > > +++ b/kernel/events/core.c
> > > @@ -5868,11 +5868,11 @@ static void perf_pmu_output_stop(struct perf_event *event);
> > >  static void perf_mmap_close(struct vm_area_struct *vma)
> > >  {
> > >         struct perf_event *event = vma->vm_file->private_data;
> > > -
> > >         struct perf_buffer *rb = ring_buffer_get(event);
> > >         struct user_struct *mmap_user = rb->mmap_user;
> > >         int mmap_locked = rb->mmap_locked;
> > >         unsigned long size = perf_data_size(rb);
> > > +       bool detach_rest = false;
> > >
> > >         if (event->pmu->event_unmapped)
> > >                 event->pmu->event_unmapped(event, vma->vm_mm);
> > > @@ -5903,7 +5903,8 @@ static void perf_mmap_close(struct vm_area_struct *vma)
> > >                 mutex_unlock(&event->mmap_mutex);
> > >         }
> > >
> > > -       atomic_dec(&rb->mmap_count);
> > > +       if (atomic_dec_and_test(&rb->mmap_count))
> > > +               detach_rest = true;
> > >
> > >         if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
> > >                 goto out_put;
> > > @@ -5912,7 +5913,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
> > >         mutex_unlock(&event->mmap_mutex);
> > >
> > >         /* If there's still other mmap()s of this buffer, we're done. */
> > > -       if (atomic_read(&rb->mmap_count))
> > > +       if (!detach_rest)
> > >                 goto out_put;
> > >
> > >         /*
> > >
> > 
> 
