Message-ID: <1287506215.2530.187.camel@localhost.localdomain>
Date:	Tue, 19 Oct 2010 12:36:55 -0400
From:	Eric Paris <eparis@...hat.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	linux-kernel@...r.kernel.org,
	linux-security-module@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, hch@...radead.org, zohar@...ibm.com,
	warthog9@...nel.org, david@...morbit.com, jmorris@...ei.org,
	kyle@...artin.ca, hpa@...or.com, akpm@...ux-foundation.org,
	mingo@...e.hu, viro@...iv.linux.org.uk
Subject: Re: [PATCH 1/3] IMA: move read/write counters into struct inode

On Tue, 2010-10-19 at 08:52 -0700, Linus Torvalds wrote:
> On Mon, Oct 18, 2010 at 6:16 PM, Eric Paris <eparis@...hat.com> wrote:
> >
> > IMA currently allocates an inode integrity structure for every inode in
> > core.  This structure is about 120 bytes long.  Most files, however
> > (especially on a system which doesn't make use of IMA) will never need any
> > of this space.  The problem is that if IMA is enabled we need to know
> > information about the number of readers and the number of writers for every
> > inode on the box.  At the moment we collect that information in the per
> > inode iint structure and waste the rest of the space.  This patch moves those
> > counters into the struct inode so we can eventually stop allocating an IMA
> > integrity structure except when absolutely needed.
> 
> Hmm. I don't think this is really acceptable as-is.
> 
> First off (and most trivially) - the fields are misnamed. Just calling
> them "{open,read,write}count" was fine when it was part of an ima
> structure, but for all the historical reasons, inode fields are called
> 'i_xyzzy'.

Will fix.

> Secondly, we already maintain a write count (called "i_writecount").
> Why is the IMA writecount different, and should it be?

I asked Al about reusing this field long ago and he indicated it had a
very different meaning.  I can't remember off the top of my head what he
said it meant, but I'll take a look at it again.  Lines like this
leave me leery:

drivers/md/md.c::deny_bitmap_write_access()
	atomic_set(&inode->i_writecount, -1);
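
If I'm reading the VFS code right, i_writecount is really a deny-write
gate rather than a plain writer count: positive means active writers,
negative means writers are locked out (which is what that atomic_set to
-1 is doing).  A userspace toy model of those semantics, as I understand
them (the function names mirror the kernel's, but this is just a sketch):

```c
#include <assert.h>

/* Toy model of i_writecount's tri-state semantics (my reading of
 * the VFS, not kernel code):
 *   > 0  -> that many holders have write access
 *   < 0  -> that many holders have *denied* write access
 *           (e.g. a file currently being executed)
 *   == 0 -> neither
 */
static int i_writecount;

/* 0 on success; -1 (think -ETXTBSY) if writes are denied. */
static int get_write_access(void)
{
	if (i_writecount < 0)
		return -1;
	i_writecount++;
	return 0;
}

static void put_write_access(void)
{
	i_writecount--;
}

/* 0 on success; -1 if there are active writers. */
static int deny_write_access(void)
{
	if (i_writecount > 0)
		return -1;
	i_writecount--;
	return 0;
}
```

So a writer blocks deny, a denier blocks writers, and the md.c line
above just slams the field to the "denied" state directly.  That's a
different animal from "how many struct files are open for write", which
is what IMA needs.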

> Thirdly, why is it an "unsigned long"? Are the IMA numbers cumulative
> or something? How could you ever overflow a 32-bit counter if not?

Not cumulative.  32 bits would probably be fine.

> Finally, why does IMA even care about the read-counts vs open-counts?
> Why not just open-counts, and consider any non-write to be an open?

What IMA needs to function is the current readers and current writers.
The open count was originally useful when a number of places inside the
kernel were allocating struct files themselves rather than letting the
VFS do the lifting, so we could end up with more struct files for a
given inode than IMA realized.  Back then IMA tried adding one-off hooks
to each filesystem doing this to fix the counters and measure
appropriately, but we eventually decided it was best to move all struct
file creation into the VFS so the counts couldn't get out of whack.  I
believe at this point we could drop the opencount...

> In short, I think this patch would be _much_ more acceptable if it
> added just a _single_ 32-bit "i_opencount". And even then I'd ask
> "what's the difference between i_opencount and our already existing
> i_count?

i_count, I believe, is quite different.  i_count counts the number of
dentries in core referencing the inode, even if none of them are being
used in any struct file, or if one dentry is referenced by 1000 struct
files.  The IMA counters are at a higher level: they count the number of
struct files referencing this inode.
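
A toy userspace model of that distinction (illustrative names and
struct, not the real kernel definitions):

```c
#include <assert.h>

/* Toy contrast between i_count and the IMA counters. */
struct toy_inode {
	int i_count;         /* in-core references, e.g. dentries */
	int ima_readcount;   /* struct files currently open for read */
};

/* A dentry pinning the inode bumps i_count only. */
static void grab_dentry_ref(struct toy_inode *ino)
{
	ino->i_count++;
}

/* An open for read bumps the IMA counter only; one dentry can
 * back any number of open struct files. */
static void file_open_read(struct toy_inode *ino)
{
	ino->ima_readcount++;
}
```

So the two can diverge in both directions: many dentries with nothing
open, or one dentry backing a thousand open files.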

I'll resend, shrinking from unsigned long to unsigned int and dropping
opencount from struct inode.  That should take us from ~900 bytes per
inode down to about 8 bytes per inode.
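
Roughly, the surviving fields would look like this (names illustrative,
not the final patch):

```c
#include <assert.h>

/* Sketch of the slimmed-down IMA state: two unsigned ints folded
 * into struct inode instead of a ~120-byte iint allocation per
 * inode.  Field names are placeholders, not the real patch. */
struct ima_inode_counts {
	unsigned int i_readcount;   /* current readers */
	unsigned int i_writecount;  /* current writers */
};
```

Two 32-bit counters is where the "about 8 bytes per inode" above comes
from.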

And like I said, if that still seems like too much overhead to most
people (and it seems that's the case) I'll look at how to get down to 0,
but it isn't going to be a fast obvious change...

-Eric


