Message-ID: <0083c5c2-8a53-3aca-543b-16bd764e31ab@linux.intel.com>
Date: Thu, 25 Oct 2018 10:00:07 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: tglx@...utronix.de, mingo@...hat.com, acme@...nel.org,
linux-kernel@...r.kernel.org, bp@...en8.de, ak@...ux.intel.com,
eranian@...gle.com
Subject: Re: [PATCH 1/2] perf: Add munmap callback
On 10/24/2018 8:29 PM, Peter Zijlstra wrote:
> On Wed, Oct 24, 2018 at 08:11:15AM -0700, kan.liang@...ux.intel.com wrote:
>> +void perf_event_munmap(void)
>> +{
>> + struct perf_cpu_context *cpuctx;
>> + unsigned long flags;
>> + struct pmu *pmu;
>> +
>> + local_irq_save(flags);
>
> It is impossible to get here with IRQs already disabled.
I don't think so. Based on my tests, IRQs are still enabled here. I once
observed a deadlock with my stress test, so I have to explicitly disable
IRQs here.
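For example, here is a debug-only sketch (not part of the patch) of one way
to confirm the IRQ state at the entry of perf_event_munmap() under the
stress test:

	/*
	 * Debug sketch only: warn if we ever get here with IRQs already
	 * disabled. If this never fires, every caller reaches this point
	 * with IRQs enabled, which is why the explicit local_irq_save()
	 * is needed.
	 */
	WARN_ON_ONCE(irqs_disabled());
	local_irq_save(flags);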
>
>> + list_for_each_entry(cpuctx, this_cpu_ptr(&sched_cb_list), sched_cb_entry) {
>> + pmu = cpuctx->ctx.pmu;
>> +
>> + if (!pmu->munmap)
>> + continue;
>> +
>> + perf_ctx_lock(cpuctx, cpuctx->task_ctx);
>> + perf_pmu_disable(pmu);
>> +
>> + pmu->munmap();
>> +
>> + perf_pmu_enable(pmu);
>> +
>> + perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
>> + }
>> + local_irq_restore(flags);
>> +}
>> +
>> static void perf_event_switch(struct task_struct *task,
>> struct task_struct *next_prev, bool sched_in);
>>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 5f2b2b184c60..61978ad8c480 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -2777,6 +2777,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>> /*
>> * Remove the vma's, and unmap the actual pages
>> */
>> + perf_event_munmap();
>
> I think that if you add the munmap hook, you should do it right and at
> least do it such that we can solve the other munmap problem.
>
Is this the patch you mentioned?
https://lkml.org/lkml/2017/1/27/452
I will take a look and find the right place to handle both problems.
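
For reference, a minimal sketch (purely illustrative; the driver and helper
names are made up) of how a PMU driver could wire up the proposed callback:

	#include <linux/perf_event.h>

	/* Placeholder for whatever buffer/state flush the PMU needs. */
	static void example_pmu_flush_buffers(void)
	{
	}

	/* Hypothetical handler for the proposed pmu->munmap callback. */
	static void example_pmu_munmap(void)
	{
		example_pmu_flush_buffers();
	}

	static struct pmu example_pmu = {
		/* ... the usual add/del/start/stop callbacks ... */
		.munmap	= example_pmu_munmap,
	};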
Thanks,
Kan
>> detach_vmas_to_be_unmapped(mm, vma, prev, end);
>> unmap_region(mm, vma, prev, start, end);
>>
>> --
>> 2.17.1
>>