Message-ID: <4F25CEB5.8070704@ixiacom.com>
Date: Sun, 29 Jan 2012 14:56:53 -0800
From: Earl Chew <echew@...acom.com>
To: "Eric W. Biederman" <ebiederm@...ssion.com>,
Andrew Morton <akpm@...ux-foundation.org>
CC: Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Eric Paris <eparis@...hat.com>,
"Serge E. Hallyn" <serge.hallyn@...onical.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
<adobriyan@...il.com>
Subject: Re: [PATCH] Support single byte reads from integers published in
procfs by kernel/sysctl.c
> If you are interested in fixing this properly with a tiny buffer
> reachable from struct file I think this can be worth fixing. I think
> this is doable by using seq_file in proc_sys_read.
I've looked into making proc_sys_read() use seq_file, but there are a few
issues to work through.
I'm assuming that the existing kernel modules must continue to use the
ctl_table/proc_dointvec/etc interface without requiring patching.
The two main consequences of this are:
a. The existing struct ctl_table interface should be preserved.
Kernel modules use this to publish into procfs.
b. The existing proc_dointvec(), etc, functions must be preserved
because they are exported via EXPORT_SYMBOL. Kernel modules
may rely on these symbols in arbitrary ways.
I tried using seq_file in proc_sys_read(), and it comes close to solving
the problem. The issue is that proc_dointvec() requires a __user pointer.
A seq_file has an internal buffer that could be filled by proc_dointvec() -- but that
seq_file buffer is in kernel space.
This problem is not isolated to the seq_file implementation. I think that
any approach involving a small local buffer means that the buffer is in
kernel space, and it cannot be passed to proc_dointvec() as it stands now
because proc_dointvec() requires a __user buffer.
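For reference, the signature in question is currently along these lines:

    int proc_dointvec(struct ctl_table *table, int write,
                      void __user *buffer, size_t *lenp, loff_t *ppos);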
One approach that might work is to add a new field to struct ctl_table:

    struct ctl_table {
            ...
            proc_seq_handler *proc_seq_handler;
    };
Existing modules can continue to use proc_dointvec() etc, while fixed code
switches to the new proc_seq_handler interface via new proc_seq_dointvec()
etc functions.
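To make that concrete, here is a rough sketch of what the new hook might
look like (the names proc_seq_handler/proc_seq_dointvec are as above, but
the exact signature and body are only illustrative, nothing is settled):

    #include <linux/sysctl.h>
    #include <linux/seq_file.h>

    typedef int proc_seq_handler(struct ctl_table *ctl, struct seq_file *m,
                                 loff_t *ppos);

    /* Read-side counterpart of proc_dointvec(): emit the integer vector
     * into the seq_file buffer instead of into a __user buffer. */
    int proc_seq_dointvec(struct ctl_table *ctl, struct seq_file *m,
                          loff_t *ppos)
    {
            int *val = ctl->data;
            size_t i, n = ctl->maxlen / sizeof(int);

            for (i = 0; i < n; i++)
                    seq_printf(m, "%d%c", val[i], i + 1 < n ? '\t' : '\n');
            return 0;
    }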
Although this preserves the old interface, it seems to me that this
approach is flawed in that kernel modules using struct ctl_table and proc_dointvec()
continue to be broken -- they are not fixed.
So, perhaps breaking assumption (b) might be a reasonable thing to do?
If we accept that proc_dointvec() etc are only or mainly used in the
context of filling out struct ctl_table, and that other kernel modules
don't use proc_dointvec() in other contexts, then we can change the
call signature of proc_dointvec() to stop using a __user pointer:
1. Change signature of proc_dointvec() etc to stop using __user pointer for reads
2. Change definition of typedef proc_call_handler to stop using __user pointer for reads
3. In kernel/sysctl.c don't use copy_to_user() for reads
Then proc_sys_read() can use proc_dointvec() etc to fill seq_file.
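Something along these lines (very rough, only to show the shape of it --
how the table pointer reaches the seq_file and the sizing of the temporary
buffer are glossed over):

    /* Sketch of a show routine in fs/proc/proc_sysctl.c, assuming
     * proc_dointvec() now takes a kernel pointer for reads. */
    static int proc_sys_seq_show(struct seq_file *m, void *v)
    {
            struct ctl_table *table = m->private;  /* assumed stashed at open */
            char buf[64];                          /* small kernel buffer */
            size_t len = sizeof(buf);
            loff_t pos = 0;
            int error;

            error = table->proc_handler(table, 0 /* read */, buf, &len, &pos);
            if (error)
                    return error;
            seq_write(m, buf, len);
            return 0;
    }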
For writes, the existing behaviour needs to be preserved. One approach
that would solve this would be to add:
    union proc_buffer {
            void __user *uptr;
            void *kptr;
    };

    typedef int proc_handler(struct ctl_table *ctl, int write,
                             union proc_buffer buffer, size_t *lenp,
                             loff_t *ppos);
I think this would be preferable to either casting away __user for reads, or
adding two pointers to the proc_handler signature (one with __user, one without).
When write is indicated, handlers would use buffer.uptr; otherwise they would
use buffer.kptr.
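A converted handler might then look roughly like this (purely illustrative;
parsing and bounds handling are simplified):

    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/sysctl.h>
    #include <linux/uaccess.h>

    static int example_dointvec(struct ctl_table *table, int write,
                                union proc_buffer buffer, size_t *lenp,
                                loff_t *ppos)
    {
            int *val = table->data;
            char tmp[16] = "";

            if (write) {
                    /* Writes keep the old behaviour: the data still comes
                     * from userspace, so copy through buffer.uptr. */
                    if (*lenp >= sizeof(tmp))
                            return -EINVAL;
                    if (copy_from_user(tmp, buffer.uptr, *lenp))
                            return -EFAULT;
                    *val = simple_strtol(tmp, NULL, 10);
            } else {
                    /* Reads now fill a kernel buffer (e.g. the seq_file
                     * buffer) via buffer.kptr. */
                    *lenp = scnprintf(buffer.kptr, *lenp, "%d\n", *val);
            }
            *ppos += *lenp;
            return 0;
    }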
But this is not perfect. Kernel modules containing their _own_ proc_handler
definitions would now be severely broken.
And now I circle back to the proc_seq_handler approach. Each ctl_table client
must be fixed individually (i.e. switch from proc_handler to proc_seq_handler).
What do you think?
Earl
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/