Message-ID: <1287505483.2530.174.camel@localhost.localdomain>
Date:	Tue, 19 Oct 2010 12:24:43 -0400
From:	Eric Paris <eparis@...hat.com>
To:	Dave Chinner <david@...morbit.com>
Cc:	Christoph Hellwig <hch@...radead.org>,
	linux-kernel@...r.kernel.org,
	linux-security-module@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, zohar@...ibm.com,
	warthog9@...nel.org, jmorris@...ei.org, kyle@...artin.ca,
	hpa@...or.com, akpm@...ux-foundation.org,
	torvalds@...ux-foundation.org, mingo@...e.hu,
	viro@...iv.linux.org.uk
Subject: Re: [PATCH 1/3] IMA: move read/write counters into struct inode

On Tue, 2010-10-19 at 18:39 +1100, Dave Chinner wrote:
> On Mon, Oct 18, 2010 at 10:14:03PM -0400, Eric Paris wrote:

> Eric, just to put that in context - changing the size of an inode
> needs to be considered carefully because we cache so many of them. We
> often jump through hoops just to reduce it by 4 or 8 bytes. You are
> proposing to increase it by 24 bytes (roughly 5%) and as such that
> _should_ be considered a big deal, especially for something that is
> currently rarely used.

In my mind it's framed a little differently: my patch series is reducing
it from ~900 bytes to 24 bytes.  Even though that memory may not have
lived inside struct inode, with a 1-1 mapping between the two it might
as well have.  I'm going from seriously broken to a hell of a lot
better.  I believe that when I resend this series I'll drop 8 more of
those bytes (the open count, which I think we can do without these
days).
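
Concretely, what I'm embedding looks something like this (a sketch
only -- the i_ima_* names are illustrative, not the exact fields in
the patch):

/* inside struct inode, sketch: three 8-byte counters on 64-bit ==
 * the 24 bytes under discussion; names illustrative */
#ifdef CONFIG_IMA
	atomic_long_t	i_ima_readcount;	/* readers */
	atomic_long_t	i_ima_writecount;	/* writers */
	atomic_long_t	i_ima_opencount;	/* the 8 bytes I may drop */
#endif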

> Personally I think that adding a pointer to struct inode is as much
> as I'd want to compromise on.  Those who want to use IMA, or want
> the possibility of turning it on dynamically, can accept the
> additional overhead of another memory allocation during inode
> allocation as the cost of using this functionality.  That's the way
> the security subsystem works, so I don't see any problems with doing
> this for IMA, and it turns the overhead problem into one that only
> affects those who have it both configured and enabled.  That seems
> like a reasonable compromise to me....

The problem is that this would actually waste another 8 bytes (the size
of the pointer in struct inode), since IMA is still going to need to
allocate a structure for every inode to hold the 16-24 bytes of
counters.  Those 16-24 bytes might not live in struct inode, but like I
said, if there is a 1-1 mapping between the two there is no difference.
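
To make the comparison concrete, here is roughly what the pointer
route would look like (a sketch modeled loosely on how i_security is
handled; ima_counters, i_ima and ima_enabled() are illustrative names,
not real interfaces):

#include <linux/fs.h>
#include <linux/slab.h>

/* sketch only -- illustrative names, not the exact code */
struct ima_counters {
	atomic_long_t	readcount;
	atomic_long_t	writecount;
	atomic_long_t	opencount;	/* 24 bytes total on 64-bit */
};

/* inside struct inode: */
	void *i_ima;			/* +8 bytes, paid by everyone */

static int ima_inode_alloc(struct inode *inode)
{
	inode->i_ima = NULL;
	if (!ima_enabled())		/* illustrative toggle */
		return 0;
	/* with a 1-1 mapping this allocation happens for every inode
	 * anyway, so the pointer itself is pure overhead */
	inode->i_ima = kzalloc(sizeof(struct ima_counters), GFP_KERNEL);
	return inode->i_ima ? 0 : -ENOMEM;
}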

I said that if there was consensus that this overhead was still too
large (and it seems that may be the case), I would put it on my todo
list to look at using a userspace freezer to collect the information
dynamically.  I'll gladly do that, but we have a space/time tradeoff
I'd rather have consensus on before I start.

If I go the pointer-in-struct-inode route, I don't need to serialize
the setup and teardown of every inode when IMA is enabled (while adding
it to and removing it from an IMA lookup tree).  If I don't add any
fields to struct inode, I'll need to serialize while I add inodes to
the IMA lookup tree, but at the savings of a void * in struct inode.
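
To spell out the serialization I'm worried about: with no new fields,
the lookup has to go through something like the sketch below, keyed by
the inode pointer and guarded by a global lock (IMA's current iint
lookup works along these lines, but the names here are illustrative):

#include <linux/fs.h>
#include <linux/rbtree.h>
#include <linux/spinlock.h>

/* sketch: every inode setup/teardown funnels through ima_tree_lock
 * whenever IMA is enabled */
static DEFINE_SPINLOCK(ima_tree_lock);
static struct rb_root ima_tree = RB_ROOT;

struct ima_iint {
	struct rb_node	node;
	struct inode	*inode;		/* lookup key */
	atomic_long_t	readcount, writecount, opencount;
};

static void ima_iint_insert(struct ima_iint *iint)
{
	struct rb_node **p = &ima_tree.rb_node, *parent = NULL;

	spin_lock(&ima_tree_lock);	/* the global serialization point */
	while (*p) {
		parent = *p;
		if (iint->inode < rb_entry(*p, struct ima_iint, node)->inode)
			p = &(*p)->rb_left;
		else
			p = &(*p)->rb_right;
	}
	rb_link_node(&iint->node, parent, p);
	rb_insert_color(&iint->node, &ima_tree);
	spin_unlock(&ima_tree_lock);
}

/* lookup and removal take the same lock, so every inode in the system
 * contends on it during allocation and freeing */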

My guess is that most people will say that forcing users to serialize
and saving 8 bytes per inode is the better choice, but I know there is
scalability work going on, and I want to make sure everyone agrees that
is the right choice before we spend a lot of time on anything like
this...

-Eric
