Message-ID: <878tufrrqb.fsf@ashishki-desk.ger.corp.intel.com>
Date: Mon, 26 Sep 2016 11:27:08 +0300
From: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>,
Andi Kleen <ak@...ux.intel.com>
Cc: Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
vince@...ter.net, eranian@...gle.com,
Arnaldo Carvalho de Melo <acme@...radead.org>,
tglx@...utronix.de
Subject: Re: [RFC PATCH 1/6] perf: Move mlock accounting to ring buffer allocation

Peter Zijlstra <peterz@...radead.org> writes:
> Well, we could 'force' inject a VMA into the process's address space, we
> do that for a few other things as well. It also makes for less
> exceptions with the actual core dumping.

Threads would then all end up with the same buffer (since they share
the mm), but they can't really share trace buffers.

Also, the system core dump case is still a problem.
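
For reference, "force-injecting" a VMA would presumably build on
install_special_mapping(), the way the vDSO is mapped. A minimal
sketch, with inject_aux_vma() being a made-up helper and the vm_flags
a guess:

#include <linux/mm.h>

/* Hypothetical: map the AUX pages into the task's mm so the core
 * dumper picks them up like any other mapping. Assumes it runs in
 * the task's own context, since get_unmapped_area() works on
 * current->mm. */
static int inject_aux_vma(struct mm_struct *mm, struct page **pages,
			  unsigned long nr_pages)
{
	unsigned long addr, len = nr_pages << PAGE_SHIFT;
	int ret;

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;

	addr = get_unmapped_area(NULL, 0, len, 0, 0);
	if (IS_ERR_VALUE(addr)) {
		ret = addr;
		goto out;
	}

	/* deliberately no VM_DONTDUMP: the point is to get it dumped */
	ret = install_special_mapping(mm, addr, len,
				      VM_READ | VM_MAYREAD | VM_DONTEXPAND,
				      pages);
out:
	up_write(&mm->mmap_sem);
	return ret;
}
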
> But the worry I have is the total amount of pinned memory. If you want
> to inherit this on fork(), as is a reasonable expectation, then it's
> possible to quickly exceed the total amount of pinnable memory.
>
> At which point we _should_ start failing fork(), which is a somewhat
> unexpected and undesirable side-effect.

I'm not sure I see why we should fail fork() when we run out of pinned
memory.
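
For the record, the existing mmap-time accounting already charges the
pinned pages to the user's mlock allowance and the task's
RLIMIT_MEMLOCK, and fails the mmap() rather than fork(). Roughly
(heavily simplified from perf_mmap(), paranoia checks omitted):

#include <linux/mm.h>
#include <linux/perf_event.h>
#include <linux/sched.h>

static int account_aux_pages(struct user_struct *user,
			     struct mm_struct *mm, unsigned long nr_pages)
{
	unsigned long lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
	unsigned long user_limit, user_locked, extra = 0;

	/* sysctl_perf_event_mlock is in KB, convert to pages */
	user_limit = sysctl_perf_event_mlock >> (PAGE_SHIFT - 10);
	user_locked = atomic_long_read(&user->locked_vm) + nr_pages;
	if (user_locked > user_limit)
		extra = user_locked - user_limit;

	if (mm->pinned_vm + extra > lock_limit && !capable(CAP_IPC_LOCK))
		return -EPERM;

	atomic_long_add(nr_pages, &user->locked_vm);
	mm->pinned_vm += extra;
	return 0;
}

If we did hit the limit when inheriting on fork(), we could presumably
just skip inheriting the buffer for the child rather than fail the
fork() itself.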

> Ideally we'd unpin the old buffers and repin the new buffers on context
> switch, but that's impossible since faulting needs scheduling,
> recursion, we lose.

Or we could have per-cpu buffers shared by all of a user's tasks, record
where each task's trace starts and ends in each buffer, and cut out only
the bits relevant to the task(s) that dump core.
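
As a very rough sketch of the bookkeeping (all names made up here; the
marks would be taken on the pmu's context-switch path):

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* One span per (task, sched-in..sched-out) interval in the per-cpu
 * AUX buffer, so the dumper can slice out the dumping task's bits. */
struct aux_task_span {
	struct list_head	link;
	pid_t			tid;
	unsigned long		from;	/* buffer offset of first byte */
	unsigned long		to;	/* one past the last byte */
};

struct aux_cpu_buf {
	unsigned long		head;	/* current write offset */
	struct list_head	spans;
	raw_spinlock_t		lock;
};

static void aux_span_open(struct aux_cpu_buf *buf, pid_t tid)
{
	struct aux_task_span *span = kmalloc(sizeof(*span), GFP_ATOMIC);

	if (!span)
		return;		/* we lose attribution, not trace data */

	span->tid = tid;
	span->from = span->to = buf->head;

	raw_spin_lock(&buf->lock);
	list_add_tail(&span->link, &buf->spans);
	raw_spin_unlock(&buf->lock);
}

static void aux_span_close(struct aux_cpu_buf *buf,
			   struct aux_task_span *span)
{
	raw_spin_lock(&buf->lock);
	span->to = buf->head;
	raw_spin_unlock(&buf->lock);
}
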
Regards,
--
Alex