Message-ID: <525D4713.3060706@gmail.com>
Date: Tue, 15 Oct 2013 07:45:55 -0600
From: David Ahern <dsahern@...il.com>
To: Ingo Molnar <mingo@...nel.org>, Namhyung Kim <namhyung@...nel.org>
CC: acme@...stprotocols.net, linux-kernel@...r.kernel.org,
Frederic Weisbecker <fweisbec@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Jiri Olsa <jolsa@...hat.com>, Mike Galbraith <efault@....de>,
Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH] perf record: mmap output file - v2
On 10/15/13 1:44 AM, Ingo Molnar wrote:
>
> * Namhyung Kim <namhyung@...nel.org> wrote:
>
>> [SNIP]
>>> +/* mmap file big chunks at a time */
>>> +#define MMAP_OUTPUT_SIZE (64*1024*1024)
>>
>> Why did you choose 64MB for the size? Did you also test other sizes?
>
> Btw., should this value go up if the ring buffer (mmap_pages) is larger
> than 64MB?
>
I made mmap_size a variable:
+ size_t mmap_size; /* size of mmap segments */
with the above initial value. I was planning to make it an option and
just forgot to complete it.
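Roughly what I had in mind for the knob (option name and OPT_U64 wiring are
just illustrative here, assuming a u64 mmap_size field -- not the actual
patch):

	/* hypothetical perf record option; rec->mmap_size would default
	 * to MMAP_OUTPUT_SIZE (64M) as above */
	OPT_U64(0, "mmap-output-size", &rec->mmap_size,
		"output file mmap chunk size in bytes (0 = fall back to write())"),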
Why 64M? mmap / munmap are also system calls, and I was looking to trade
off huge jumps in file size against the frequency of adjusting the maps.
64M was just a nice round number in the 1-100MB range: 8M and 16M are too
small, and 128M seems too big for a default. That left 32M and 64M, and
64M seems the better trade-off of the two.
Making it a user knob would help with smaller deployments. We could also
have mmap_size = 0 mean turn it off (use write() instead of mmap).
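For concreteness, a minimal standalone sketch of the scheme (names are made
up, not the actual patch): grow the file one chunk at a time, mmap() the new
chunk, memcpy event data into it, and munmap()/advance when it fills;
mmap_size == 0 drops back to plain write():

	#include <string.h>
	#include <sys/mman.h>
	#include <sys/types.h>
	#include <unistd.h>

	struct out_state {
		int	fd;		/* output file descriptor */
		size_t	mmap_size;	/* chunk size; 0 = use write() */
		void	*map;		/* currently mapped chunk, or NULL */
		size_t	map_off;	/* bytes used in the current chunk */
		off_t	file_off;	/* file offset of the current chunk */
	};

	/* extend the file by one chunk and map it */
	static int out_new_chunk(struct out_state *o)
	{
		if (ftruncate(o->fd, o->file_off + o->mmap_size) < 0)
			return -1;
		o->map = mmap(NULL, o->mmap_size, PROT_WRITE, MAP_SHARED,
			      o->fd, o->file_off);
		if (o->map == MAP_FAILED) {
			o->map = NULL;
			return -1;
		}
		o->map_off = 0;
		return 0;
	}

	static ssize_t out_write(struct out_state *o, const void *buf, size_t len)
	{
		size_t done = 0;

		if (o->mmap_size == 0)		/* knob disabled: plain write() */
			return write(o->fd, buf, len);

		while (done < len) {
			size_t room, n;

			if (o->map == NULL && out_new_chunk(o) < 0)
				return -1;

			room = o->mmap_size - o->map_off;
			n = len - done < room ? len - done : room;
			memcpy((char *)o->map + o->map_off,
			       (const char *)buf + done, n);
			o->map_off += n;
			done += n;

			if (o->map_off == o->mmap_size) {	/* chunk full */
				munmap(o->map, o->mmap_size);
				o->map = NULL;
				o->file_off += o->mmap_size;
			}
		}
		return done;
	}

(On exit the file would of course need to be ftruncate()d back down to the
bytes actually written, so the last partial chunk isn't left as padding.)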
Perhaps something that adjusts automatically would be useful too. E.g.,
for the case that motivates the change I have 16 CPUs, each with a 4M
buffer (1024 mmap pages). Should we generically set the size:

	mmap_size = ncpus_online * mmap_pages * page_size?

Or do that only for system-wide profiling?
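I.e., something like this (sketch only, names don't match perf internals):

	#include <unistd.h>

	/* auto-sizing idea: make one file chunk big enough to absorb a
	 * full flush of every per-cpu ring buffer in one go */
	static size_t auto_mmap_size(long ncpus_online, unsigned long mmap_pages)
	{
		long page_size = sysconf(_SC_PAGESIZE);

		return (size_t)ncpus_online * mmap_pages * page_size;
	}

	/* the case above: 16 * 1024 * 4096 = 64M, matching the default */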
David