Date:	Tue, 26 Oct 2010 10:22:30 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Eric Paris <eparis@...hat.com>
Cc:	linux-kernel@...r.kernel.org,
	linux-security-module@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, hch@...radead.org, zohar@...ibm.com,
	warthog9@...nel.org, jmorris@...ei.org, kyle@...artin.ca,
	hpa@...or.com, akpm@...ux-foundation.org,
	torvalds@...ux-foundation.org, mingo@...e.hu,
	viro@...iv.linux.org.uk
Subject: Re: [PATCH 01/11] IMA: use rbtree instead of radix tree for inode
 information cache

On Mon, Oct 25, 2010 at 02:41:18PM -0400, Eric Paris wrote:
> The IMA code needs to store the number of tasks which have an open fd
> granting permission to write a file, even when IMA is not in use.  It needs
> this information in order to be enabled at a later point in time without
> losing its integrity guarantees.  At the moment that means we store a
> little bit of data about every inode in a cache.  We use a radix tree keyed
> on the inode's memory address.  Dave Chinner pointed out that a radix tree
> is a terrible data structure for such a sparse key space.  This patch
> switches to using an rbtree, which should be more efficient.

I'm not sure this is the right fix, though.

Realistically, there is a 1:1 relationship between the inode and the
IMA information. I fail to see why an external index is needed here
at all - just use a separate structure to store the IMA information
that the inode points to. That makes the need for a new global index
and global lock go away completely.
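
Roughly like this (the structure and helper names here are invented
for illustration, not taken from your patch):

	/* separately allocated, per-inode IMA state */
	struct ima_inode_info;

	struct inode {
		/* ... existing fields ... */
		struct ima_inode_info	*i_ima;	/* NULL until IMA needs it */
	};

	/* the global-index lookup collapses to a pointer dereference */
	static inline struct ima_inode_info *ima_info(struct inode *inode)
	{
		return inode->i_ima;
	}

No global lock to take, no tree to walk - the inode itself carries
the reference.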

You're already adding 8 bytes to the inode, so why not make it a
pointer? We've got 4 conditions (rough sketch in code below):

1. not configured - no overhead
2. configured, boot time disabled - 8 bytes per inode
3. configured, boot time enabled, runtime disabled - 8 bytes per
   inode + small IMA structure
4. configured, boot time enabled, runtime enabled - 8 bytes per
   inode + large IMA structure
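
In code, the boot-time/runtime split could look something like this
(again, the structure and flag names are made up for illustration):

	/* case 3: just enough state to track writers for later enablement */
	struct ima_inode_info {
		atomic_t		writecount;
	};

	/* case 4: full measurement state; embeds the small structure
	 * first so a pointer to it can stand in for the small one */
	struct ima_inode_full {
		struct ima_inode_info	info;		/* must stay first */
		u8			digest[20];	/* SHA-1 measurement */
		unsigned long		flags;
	};

	extern bool ima_runtime_enabled;	/* hypothetical toggle */

	static int ima_alloc_info(struct inode *inode)
	{
		size_t size = ima_runtime_enabled ?
				sizeof(struct ima_inode_full) :
				sizeof(struct ima_inode_info);

		inode->i_ima = kzalloc(size, GFP_NOFS);
		return inode->i_ima ? 0 : -ENOMEM;
	}

If IMA is compiled in but boot-time disabled (case 2), i_ima simply
stays NULL and the only cost is the pointer.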

Anyone who wants the option of runtime enablement can take the extra
allocation overhead, but otherwise nobody is affected apart from the
8 bytes of additional memory per inode. I doubt that will change
anything unless it increases the size of the inode enough to push it
over a slab boundary. And if LSM stacking is introduced, that 8 bytes
of per-inode overhead will go away anyway.

This approach doesn't introduce a new global lock or lookup overhead
into the main VFS paths, it allows you to remove a bunch of code, and
it has a path forward for removing the 8 bytes of per-inode overhead
as well. Seems like the best compromise to me...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com