Date:	Thu, 23 Oct 2014 14:38:55 +0200
From:	Peter Zijlstra <>
To:	Alexander Shishkin <>
Cc:	Ingo Molnar <>,
	Robert Richter <>,
	Frederic Weisbecker <>,
	Mike Galbraith <>,
	Paul Mackerras <>,
	Stephane Eranian <>,
	Andi Kleen <>
Subject: Re: [PATCH v5 18/20] perf: Allocate ring buffers for inherited
 per-task kernel events

On Mon, Oct 13, 2014 at 04:45:46PM +0300, Alexander Shishkin wrote:
> Normally, per-task events can't inherit their parents' ring buffers, to
> avoid multiple events contending for the same buffer. And since buffer
> allocation is typically done by the userspace consumer, there is no
> practical interface to allocate new buffers for inherited counters.
> However, for kernel users we can allocate new buffers for inherited
> events as soon as they are created (and also reap them on event
> destruction). This pattern has a number of use cases, such as event
> sample annotation and process core dump annotation.
> When a new event is inherited from a per-task kernel event that has a
> ring buffer, allocate a new buffer for this event so that data from the
> child task is collected and can later be retrieved for sample annotation
> or core dump inclusion. This ring buffer is released when the event is
> freed, for example, when the child task exits.
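
For concreteness, here is a minimal user-space sketch of the
allocate-on-inherit / release-on-free lifecycle the changelog describes.
It is not the patch itself: the names rb_alloc, inherit_event and
free_event only loosely echo the kernel's, refcounting is omitted, and
the 512 KiB buffer size is made up.

/*
 * Toy model only: names and sizes here are assumptions and do not come
 * from the patch; real kernel buffers are pinned pages with refcounting.
 */
#include <stdlib.h>

struct ring_buffer {
	size_t size;
	unsigned char *data;		/* stands in for the pinned pages */
};

struct event {
	struct event *parent;
	struct ring_buffer *rb;
};

static struct ring_buffer *rb_alloc(size_t size)
{
	struct ring_buffer *rb = malloc(sizeof(*rb));

	if (!rb)
		return NULL;
	rb->size = size;
	rb->data = calloc(1, size);	/* kernel equivalent is unswappable */
	if (!rb->data) {
		free(rb);
		return NULL;
	}
	return rb;
}

static void rb_free(struct ring_buffer *rb)
{
	if (!rb)
		return;
	free(rb->data);
	free(rb);
}

/*
 * Inheriting from a per-task kernel event that owns a buffer: the child
 * gets its own freshly allocated buffer instead of sharing the parent's.
 */
static struct event *inherit_event(struct event *parent)
{
	struct event *child = calloc(1, sizeof(*child));

	if (!child)
		return NULL;
	child->parent = parent;
	if (parent->rb) {
		child->rb = rb_alloc(parent->rb->size);
		if (!child->rb) {
			free(child);
			return NULL;
		}
	}
	return child;
}

/* The buffer is reaped when the event is destroyed. */
static void free_event(struct event *event)
{
	rb_free(event->rb);
	free(event);
}

int main(void)
{
	struct event parent = { .rb = rb_alloc(512 * 1024) };
	struct event *child = inherit_event(&parent);	/* e.g. on fork() */

	/* ... child task runs and fills child->rb ... */

	if (child)
		free_event(child);	/* child exits: its buffer is released */
	rb_free(parent.rb);
	return 0;
}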

This causes a pinned memory explosion: every inherited event, so every
fork() of the traced task, pins another full ring buffer. Not at all nice,
that.

I think I see why and all, but it would be ever so good to not have to
allocate so much memory.
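
Back-of-the-envelope (the per-event buffer size is whatever the kernel
user asked for; 512 KiB is only an assumed figure):

	pinned ~= nr_inherited_events * rb_size
	        = 10000 children * 512 KiB ~= 5 GiB

all of it unswappable for as long as the children (and their events) live.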
