Message-ID: <48DC406D.1050508@redhat.com>
Date: Thu, 25 Sep 2008 21:52:45 -0400
From: Masami Hiramatsu <mhiramat@...hat.com>
To: Steven Rostedt <rostedt@...dmis.org>
CC: LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
prasad@...ux.vnet.ibm.com,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mathieu Desnoyers <compudj@...stal.dyndns.org>,
"Frank Ch. Eigler" <fche@...hat.com>,
David Wilder <dwilder@...ibm.com>, hch@....de,
Martin Bligh <mbligh@...gle.com>,
Christoph Hellwig <hch@...radead.org>,
Steven Rostedt <srostedt@...hat.com>
Subject: Re: [RFC PATCH v4] Unified trace buffer
Hi Steven,
Steven Rostedt wrote:
> This version has been cleaned up a bit. I've been running it as
> a back end to ftrace, and it has been holding up pretty well.
Thank you for your great work.
It looks good to me (especially the event encapsulation :)).
I have one enhancement request.
> +static struct ring_buffer_per_cpu *
> +ring_buffer_allocate_cpu_buffer(struct ring_buffer *buffer, int cpu)
> +{
[...]
> +	cpu_buffer->pages = kzalloc_node(ALIGN(sizeof(void *) * pages,
> +					       cache_line_size()), GFP_KERNEL,
> +					       cpu_to_node(cpu));
Here you are using a slab object for the page-management array.
The largest slab object size is 128KB (on x86-64), so the array can
hold at most 16K page pointers = 64MB of buffer.
As I found when improving relayfs, in some rare cases (on 64-bit
arches) we'd like to use a buffer larger than 64MB:
http://sourceware.org/ml/systemtap/2008-q2/msg00103.html
So I think a similar hack (falling back to vmalloc() when the array
outgrows the slab limit) could be applied here; see the sketch below.
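For example, something like this rough sketch (the helper names are
mine, not from your patch; it assumes KMALLOC_MAX_SIZE from
<linux/slab.h> and is_vmalloc_addr() from <linux/mm.h>, plus
<linux/vmalloc.h>):

/*
 * Allocate the per-cpu page-pointer array, falling back to
 * vmalloc() when the request no longer fits in a slab object.
 */
static void *rb_alloc_pages_array(size_t size, int cpu)
{
	void *p;

	if (size <= KMALLOC_MAX_SIZE)
		return kzalloc_node(size, GFP_KERNEL, cpu_to_node(cpu));

	/* Too big for kmalloc: use vmalloc and zero it by hand. */
	p = vmalloc_node(size, cpu_to_node(cpu));
	if (p)
		memset(p, 0, size);
	return p;
}

static void rb_free_pages_array(void *p)
{
	/* Free with whichever allocator actually provided the memory. */
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}

ring_buffer_allocate_cpu_buffer() would then call
rb_alloc_pages_array(ALIGN(sizeof(void *) * pages, cache_line_size()),
cpu), and the free path would use rb_free_pages_array() so either
allocation method is released correctly.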
Would it be acceptable for the next version?
Thank you,
--
Masami Hiramatsu
Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division
e-mail: mhiramat@...hat.com