Message-ID: <1288045749.2655.49.camel@localhost.localdomain>
Date: Mon, 25 Oct 2010 18:29:09 -0400
From: Eric Paris <eparis@...hat.com>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: John Stoffel <john@...ffel.org>, linux-kernel@...r.kernel.org,
linux-security-module@...r.kernel.org,
linux-fsdevel@...r.kernel.org, hch@...radead.org, zohar@...ibm.com,
warthog9@...nel.org, david@...morbit.com, jmorris@...ei.org,
kyle@...artin.ca, akpm@...ux-foundation.org,
torvalds@...ux-foundation.org, mingo@...e.hu,
viro@...iv.linux.org.uk
Subject: Re: [PATCH 06/11] IMA: use i_writecount rather than a private
counter

On Mon, 2010-10-25 at 15:25 -0700, H. Peter Anvin wrote:
> On 10/25/2010 02:52 PM, Eric Paris wrote:
> > On Mon, 2010-10-25 at 15:27 -0400, John Stoffel wrote:
> >
> >> The problems with kernel.org are a perfect example of how an innocuous
> >> feature like this can kill a system's performance.
> >
> > You admit that you don't know what you are talking about and then state
> > that this kills a system's performance. Interesting conclusion.
> >
> > I'm not going to try to refute you point by point but will instead paint
> > a broad picture. I see 3 possible states:
> > 1) Configured out - 0 overhead. period.
> > 2) Configured in but default disabled
> > 3) Configured in and enabled by admin intervention
> >
> > I have (I think) pretty clearly discussed the overhead and the changes
> > made in case #2. We expand struct inode by 4 bytes, we increment and
> > decrement those 4 bytes on open()/close(), and we use a new flag in
> > inode->i_flags.
> >
>
> Case #2 is the bad one, as long as distros are likely to compile it in.
Agreed. And that's the case this whole patch series is addressing. It
makes it (literally, not figuratively) hundreds of times better than it
is today :)
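
For anyone trying to picture that cost model, below is a minimal,
self-contained sketch (plain C11, compiles in userspace; it is NOT the
actual IMA or VFS code) of the case #2 bookkeeping described above: a
4-byte per-inode counter bumped on open() and dropped on close(), plus
one new flags bit. All names here (toy_inode, TOY_S_IMA, toy_open,
toy_close) are made up for illustration:

	#include <stdatomic.h>
	#include <stdio.h>

	#define TOY_S_IMA 0x0100 /* stand-in for a new inode->i_flags bit */

	struct toy_inode {
		unsigned int i_flags;     /* existing flags word */
		atomic_int   i_opencount; /* the 4 bytes added to struct inode */
	};

	static void toy_open(struct toy_inode *inode)
	{
		/* default-disabled case: one atomic increment, nothing else */
		atomic_fetch_add(&inode->i_opencount, 1);
	}

	static void toy_close(struct toy_inode *inode)
	{
		/* matching decrement on close */
		atomic_fetch_sub(&inode->i_opencount, 1);
	}

	int main(void)
	{
		struct toy_inode inode = { .i_flags = 0 };

		atomic_init(&inode.i_opencount, 0);

		toy_open(&inode);
		printf("open count after open:  %d\n",
		       atomic_load(&inode.i_opencount));
		toy_close(&inode);
		printf("open count after close: %d\n",
		       atomic_load(&inode.i_opencount));
		return 0;
	}

The real series, per the subject line, drops IMA's private counter in
favor of the i_writecount the VFS already maintains; the sketch only
illustrates the per-open()/close() cost being argued about.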
-Eric