Message-ID: <20130911162955.GA1103@infradead.org>
Date: Wed, 11 Sep 2013 09:29:55 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Peng Tao <bergwolf@...il.com>
Cc: Guenter Roeck <linux@...ck-us.net>,
Christoph Hellwig <hch@...radead.org>,
"Dilger, Andreas" <andreas.dilger@...el.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"devel@...verdev.osuosl.org" <devel@...verdev.osuosl.org>
Subject: Re: [PATCH] staging: Disable lustre file system for MIPS, SH, and
XTENSA

On Wed, Sep 11, 2013 at 10:51:50AM +0800, Peng Tao wrote:
> I'm not fighting against removing the piece of code. But if there is a
> strong reason to keep the functionality, we need to find a way to
> implement it. The convenience of using environment variables is that
> the job scheduler can set the environment and other existing
> applications don't have to change. Are there other means to do the
> same? An ioctl and an upcall both need application changes AFAIK.

There is no use case for it; the kernel has no business looking at these
variables. Given that you think it's not even used, I don't even know
why we're having this discussion.

Speaking of nasty code, the whole of linux-curproc.c is highly
questionable:
- cfs_curproc_groups_nr:
  unused and should be removed
- cfs_cap_raise/cfs_cap_lower/cfs_cap_raised:
  need to go away; modules must not change access permissions
  on behalf of processes (see the check-only sketch below)
- the whole cfs_cap_t handling also needs to go away; passing around
  capabilities is not a concept the kernel supports, for a reason
- current_is_32bit:
  code should just use is_compat_task() directly (also sketched below)
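
For reference, a rough sketch of the in-tree pattern (not lustre code;
the two helpers below are made up for illustration): privileged paths
check the caller's capabilities at the point of use instead of raising
or lowering bits on its behalf, and 32-bit callers are detected with
is_compat_task() directly:

	#include <linux/capability.h>
	#include <linux/compat.h>

	/* check, don't modify: refuse the privileged operation unless
	 * the calling task already holds the capability */
	static int example_privileged_op(void)
	{
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		/* ... privileged work ... */
		return 0;
	}

	/* no wrapper needed: is_compat_task() already answers whether
	 * the current syscall comes from a 32-bit task (and evaluates
	 * to 0 on !CONFIG_COMPAT kernels) */
	static int example_caller_is_32bit(void)
	{
		return is_compat_task();
	}
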
I've just taken the time to walk through this one file, but it seems
like most of libcfs is just as bad.