Message-ID: <4B445957.3040301@gmail.com>
Date: Wed, 06 Jan 2010 10:35:19 +0100
From: Jiri Slaby <jirislaby@...il.com>
To: Arnd Bergmann <arnd@...db.de>
CC: mingo@...e.hu, nhorman@...driver.com, sfr@...b.auug.org.au,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
marcin.slusarz@...il.com, tglx@...utronix.de, mingo@...hat.com,
hpa@...or.com, torvalds@...ux-foundation.org,
James Morris <jmorris@...ei.org>,
Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: [PATCH 15/16] COMPAT: add get/put_compat_rlimit

On 12/31/2009 12:55 AM, Arnd Bergmann wrote:
> On Wednesday 18 November 2009, Jiri Slaby wrote:
>>  	set_fs(KERNEL_DS);
>> -	ret = sys_setrlimit(resource, (struct rlimit __user *) &r);
>> +	ret = sys_setrlimit(resource, (struct rlimit __force __user *)&r);
>>  	set_fs(old_fs);
>>  	return ret;
>
> Since you are already rewriting the whole function here, it would be
> nice if you could just call do_setrlimit() with the kernel pointer
> instead of the set_fs() and __force tricks. For getrlimit, it may
> be easier to just open-code the whole function, and for your new
> functions, you could pass the pid into do_setrlimit instead of the
> task in order to reduce code duplication between compat_sys_setprlimit
> and sys_setprlimit.
Hmm, using pid_t wouldn't work well with pid namespaces: a pid is only
meaningful relative to the caller's namespace, whereas e.g. the /proc
code that calls this already holds a task_struct. But certainly some
cleanups can be done, at least in the {compat_,}sys_setrlimit case:
pushing the (resource >= RLIM_NLIMITS) test down into do_setrlimit and
calling do_setrlimit from compat_sys_setrlimit is straightforward. I'll
look at the rest too.
> Yes, I realize my reply is late in this thread, but I assume your patch
> is still current since it hasn't made it into 2.6.33.
Yup, as you noted, it's still not upstream, so it can still be tuned up
easily.

Thanks,
--
js