Message-ID: <20181024191529.GF15106@kernel.org>
Date: Wed, 24 Oct 2018 16:15:29 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: "Liang, Kan" <kan.liang@...ux.intel.com>
Cc: Andi Kleen <ak@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
bp@...en8.de, Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH 1/2] perf: Add munmap callback
On Wed, Oct 24, 2018 at 02:12:54PM -0400, Liang, Kan wrote:
>
>
> On 10/24/2018 12:32 PM, Arnaldo Carvalho de Melo wrote:
> > On Wed, Oct 24, 2018 at 09:23:34AM -0700, Andi Kleen wrote:
> > > > +void perf_event_munmap(void)
> > > > +{
> > > > +	struct perf_cpu_context *cpuctx;
> > > > +	unsigned long flags;
> > > > +	struct pmu *pmu;
> > > > +
> > > > +	local_irq_save(flags);
> > > > +	list_for_each_entry(cpuctx, this_cpu_ptr(&sched_cb_list), sched_cb_entry) {
> > >
> > > It would be good to have a fast path here that checks for the list being
> > > empty without disabling interrupts. munmap can be somewhat hot. I think
> > > it's ok to make it slower with perf running, but we shouldn't impact
> > > it without perf.
> >
> > Right, look at how its counterpart, perf_event_mmap() works:
> >
> > void perf_event_mmap(struct vm_area_struct *vma)
> > {
> > 	struct perf_mmap_event mmap_event;
> >
> > 	if (!atomic_read(&nr_mmap_events))
> > 		return;
> > 	<SNIP>
> > }
> >
>
> Thanks. I'll add the nr_mmap_events check in V2.

That would be a nr_munmap_events counter, if this is tied to a
PERF_RECORD_MUNMAP record (right?), which it isn't right now; check
Andi's response. My point was more of a "hey, perf_event_mmap does an
atomic check before grabbing any locks".
- Arnaldo