Message-Id: <1509506888-4053-1-git-send-email-changbin.du@intel.com>
Date: Wed, 1 Nov 2017 11:28:08 +0800
From: changbin.du@...el.com
To: rostedt@...dmis.org, mingo@...hat.com
Cc: linux-kernel@...r.kernel.org, Changbin Du <changbin.du@...el.com>
Subject: [PATCH v3] tracing: Allocate mask_str buffer dynamically
From: Changbin Du <changbin.du@...el.com>
The default NR_CPUS can be very large, while the actual number of possible
CPUs (nr_cpu_ids) is usually very small. On my x86 distribution, NR_CPUS is
8192 but nr_cpu_ids is 4, so about 2 pages are wasted on the static buffer.
Most machines don't have that many CPUs, so sizing the array by NR_CPUS
just wastes memory. Let's allocate the buffer dynamically when it is needed.
The exact buffer size should be:
DIV_ROUND_UP(nr_cpu_ids, 4) + nr_cpu_ids/32 + 2;
Example output:
ff,ffffffff
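For illustration only, here is a standalone sketch of that size computation,
assuming a hypothetical nr_cpu_ids of 40 (which would match the
"ff,ffffffff" example above):

	#include <stdio.h>

	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	int main(void)
	{
		unsigned int nr_cpu_ids = 40;	/* hypothetical value */

		/* hex digits + one ',' per 32 bits + '\n' + '\0' */
		int len = DIV_ROUND_UP(nr_cpu_ids, 4) + nr_cpu_ids / 32 + 2;

		printf("len = %d\n", len);	/* 10 + 1 + 2 = 13 */
		return 0;
	}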
With this change, the mutex tracing_cpumask_update_lock, which was only used
to protect mask_str, can also be removed.
Signed-off-by: Changbin Du <changbin.du@...el.com>
Cc: Steven Rostedt <rostedt@...dmis.org>
---
v3:
- remove tracing_cpumask_update_lock which was used to protect mask_str. (Rostedt)
v2:
- remove 'static' declaration.
- fix buffer size.
---
kernel/trace/trace.c | 29 +++++++++--------------------
1 file changed, 9 insertions(+), 20 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 752e5da..5d2ec80 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4178,37 +4178,30 @@ static const struct file_operations show_traces_fops = {
.llseek = seq_lseek,
};
-/*
- * The tracer itself will not take this lock, but still we want
- * to provide a consistent cpumask to user-space:
- */
-static DEFINE_MUTEX(tracing_cpumask_update_lock);
-
-/*
- * Temporary storage for the character representation of the
- * CPU bitmask (and one more byte for the newline):
- */
-static char mask_str[NR_CPUS + 1];
-
static ssize_t
tracing_cpumask_read(struct file *filp, char __user *ubuf,
size_t count, loff_t *ppos)
{
struct trace_array *tr = file_inode(filp)->i_private;
+ char *mask_str;
int len;
- mutex_lock(&tracing_cpumask_update_lock);
+ /* Bitmap, ',' and two more bytes for the newline and '\0'. */
+ len = DIV_ROUND_UP(nr_cpu_ids, 4) + nr_cpu_ids/32 + 2;
+ mask_str = kmalloc(len, GFP_KERNEL);
+ if (!mask_str)
+ return -ENOMEM;
- len = snprintf(mask_str, count, "%*pb\n",
+ len = snprintf(mask_str, len, "%*pb\n",
cpumask_pr_args(tr->tracing_cpumask));
if (len >= count) {
count = -EINVAL;
goto out_err;
}
- count = simple_read_from_buffer(ubuf, count, ppos, mask_str, NR_CPUS+1);
+ count = simple_read_from_buffer(ubuf, count, ppos, mask_str, len);
out_err:
- mutex_unlock(&tracing_cpumask_update_lock);
+ kfree(mask_str);
return count;
}
@@ -4228,8 +4221,6 @@ tracing_cpumask_write(struct file *filp, const char __user *ubuf,
if (err)
goto err_unlock;
- mutex_lock(&tracing_cpumask_update_lock);
-
local_irq_disable();
arch_spin_lock(&tr->max_lock);
for_each_tracing_cpu(cpu) {
@@ -4252,8 +4243,6 @@ tracing_cpumask_write(struct file *filp, const char __user *ubuf,
local_irq_enable();
cpumask_copy(tr->tracing_cpumask, tracing_cpumask_new);
-
- mutex_unlock(&tracing_cpumask_update_lock);
free_cpumask_var(tracing_cpumask_new);
return count;
--
2.7.4
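For reference (not part of the patch), a minimal user-space sketch that
exercises this read path through the tracefs tracing_cpumask file. The mount
path /sys/kernel/tracing is an assumption; on some systems it is
/sys/kernel/debug/tracing instead:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Assumed tracefs path; adjust if tracefs is mounted elsewhere. */
		int fd = open("/sys/kernel/tracing/tracing_cpumask", O_RDONLY);
		char buf[64];
		ssize_t n;

		if (fd < 0) {
			perror("open");
			return 1;
		}
		n = read(fd, buf, sizeof(buf) - 1);	/* e.g. "ff,ffffffff\n" */
		if (n > 0) {
			buf[n] = '\0';
			printf("tracing_cpumask: %s", buf);
		}
		close(fd);
		return 0;
	}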