Message-ID: <20110426135309.GA24213@Krystal>
Date: Tue, 26 Apr 2011 09:53:10 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Arnaldo Carvalho de Melo <acme@...radead.org>,
Paul Mackerras <paulus@...ba.org>,
Pekka Enberg <penberg@...nel.org>,
Vegard Nossum <vegardno@....uio.no>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [BUG] perf and kmemcheck : fatal combination
* Eric Dumazet (eric.dumazet@...il.com) wrote:
> On Tuesday, 26 April 2011 at 10:57 +0200, Eric Dumazet wrote:
> > On Tuesday, 26 April 2011 at 10:04 +0200, Ingo Molnar wrote:
> >
> > > Eric, does it manage to limp along if you remove the BUG_ON()?
> > >
> > > That risks NMI recursion but maybe it allows you to see why things are slow,
> > > before it crashes ;-)
> > >
> >
> > If I remove the BUG_ON from nmi_enter, it seems to crash very fast
> >
> >
>
> Before you ask, some more complete netconsole traces:
[...]
> [ 306.657279] [<ffffffff8147a48f>] page_fault+0x1f/0x30
> [ 306.657282] [<ffffffff8100ef42>] ? x86_perf_event_update+0x12/0x70
> [ 306.657284] [<ffffffff810104b1>] ? intel_pmu_save_and_restart+0x11/0x20
> [ 306.657287] [<ffffffff81012e84>] intel_pmu_handle_irq+0x1d4/0x420
> [ 306.657290] [<ffffffff8147b570>] perf_event_nmi_handler+0x50/0xc0
> [ 306.657292] [<ffffffff8147cfa3>] notifier_call_chain+0x53/0x80
> [ 306.657294] [<ffffffff8147d018>] __atomic_notifier_call_chain+0x48/0x70
> [ 306.657296] [<ffffffff8147d051>] atomic_notifier_call_chain+0x11/0x20
> [ 306.657298] [<ffffffff8147d08e>] notify_die+0x2e/0x30
> [ 306.657300] [<ffffffff8147a8af>] do_nmi+0x4f/0x200
> [ 306.657302] [<ffffffff8147a6ea>] nmi+0x1a/0x20
> [ 306.657304] [<ffffffff8100fd4d>] ? intel_pmu_enable_all+0x9d/0x110
just a thought: I've seen this kind of issue with LTTng before, and my
approach is to ensure it cannot happen by issuing a vmalloc_sync_all()
call between all vmalloc/vmap calls and the accesses to those memory
regions from the tracer code. So it boils down to the following (a rough
sketch follows the list):
1 - perform all memory allocation at trace session creation (from thread
    context). I keep the page table in software (and allocate my buffer
    pages with alloc_pages()), so no page fault is generated by those
    accesses. However, I use kmalloc() to allocate my own
    software-page-table, which falls back to vmalloc if the allocation is
    larger than a certain threshold. Therefore, I need to issue
    vmalloc_sync_all() before NMI context starts using the buffers.
2 - issue vmalloc_sync_all() from the tracer code, after buffer
allocation, but before the trace session is added to the RCU list of
active traces.
3 - issue vmalloc_sync_all() when each LTTng module is loaded, before
    it is registered with LTTng, so the memory that holds its code and
    data is faulted in.
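
To make the ordering concrete, here is a rough sketch of what I mean.
This is not actual LTTng (nor perf) code: the trace_session structure
and the trace_buffer_alloc()/trace_probe_init() names are made up for
illustration; only the placement of the vmalloc_sync_all() calls
matters.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Hypothetical session layout: buffer pages plus a software page table. */
struct trace_session {
	struct page **page_index;	/* may be vmalloc'd when large */
	unsigned long nr_pages;
	struct list_head list;
};

static LIST_HEAD(active_traces);	/* walked from NMI context under RCU */

static int trace_buffer_alloc(struct trace_session *sess,
			      unsigned long nr_pages)
{
	size_t index_size = nr_pages * sizeof(struct page *);
	unsigned long i;

	/* Software page table; large sessions end up in vmalloc space. */
	if (index_size > PAGE_SIZE)
		sess->page_index = vmalloc(index_size);
	else
		sess->page_index = kmalloc(index_size, GFP_KERNEL);
	if (!sess->page_index)
		return -ENOMEM;

	/*
	 * Buffer pages come straight from the page allocator, so accessing
	 * them never faults (error handling omitted for brevity).
	 */
	for (i = 0; i < nr_pages; i++)
		sess->page_index[i] = alloc_pages(GFP_KERNEL, 0);
	sess->nr_pages = nr_pages;

	/*
	 * Steps 1 and 2: make sure the vmalloc'd mapping created above is
	 * present in every task's page tables *before* the session becomes
	 * visible to NMI context.
	 */
	vmalloc_sync_all();

	/* Only now publish the session on the RCU list of active traces. */
	list_add_rcu(&sess->list, &active_traces);
	return 0;
}

/*
 * Step 3: sync when a probe module is loaded, before registering it,
 * so the module's code and data are faulted in everywhere.
 */
static int __init trace_probe_init(void)
{
	vmalloc_sync_all();
	return 0;	/* ... then register with the tracer core */
}
module_init(trace_probe_init);

The point is simply that vmalloc_sync_all() runs in thread context,
after every vmalloc()/vmap() the tracer depends on, and before anything
NMI-visible is published.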
Until we find the time and resources to finally implement the virtualized
NMI handling (which handles page faults within NMIs) as discussed with
Linus last summer, I am staying with this workaround. It might be good
enough for perf too.
Thanks,
Mathieu
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/