Message-ID: <28FEFAE6-ABEE-454C-AF59-8491FAB08E77@fb.com>
Date: Thu, 21 Nov 2024 08:28:05 +0000
From: Song Liu <songliubraving@...a.com>
To: "Dr. Greg" <greg@...ellic.com>
CC: Casey Schaufler <casey@...aufler-ca.com>,
        Song Liu
	<songliubraving@...a.com>,
        James Bottomley
	<James.Bottomley@...senPartnership.com>,
        "jack@...e.cz" <jack@...e.cz>,
        "brauner@...nel.org" <brauner@...nel.org>, Song Liu <song@...nel.org>,
        "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
        "linux-fsdevel@...r.kernel.org"
	<linux-fsdevel@...r.kernel.org>,
        "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>,
        "linux-security-module@...r.kernel.org"
	<linux-security-module@...r.kernel.org>,
        Kernel Team <kernel-team@...a.com>,
        "andrii@...nel.org" <andrii@...nel.org>,
        "eddyz87@...il.com"
	<eddyz87@...il.com>,
        "ast@...nel.org" <ast@...nel.org>,
        "daniel@...earbox.net" <daniel@...earbox.net>,
        "martin.lau@...ux.dev"
	<martin.lau@...ux.dev>,
        "viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
        "kpsingh@...nel.org" <kpsingh@...nel.org>,
        "mattbobrowski@...gle.com"
	<mattbobrowski@...gle.com>,
        "amir73il@...il.com" <amir73il@...il.com>,
        "repnop@...gle.com" <repnop@...gle.com>,
        "jlayton@...nel.org"
	<jlayton@...nel.org>,
        Josef Bacik <josef@...icpanda.com>,
        "mic@...ikod.net"
	<mic@...ikod.net>,
        "gnoack@...gle.com" <gnoack@...gle.com>
Subject: Re: [PATCH bpf-next 0/4] Make inode storage available to tracing prog

Hi Dr. Greg,

Thanks for your input!

> On Nov 20, 2024, at 8:54 AM, Dr. Greg <greg@...ellic.com> wrote:
> 
> On Tue, Nov 19, 2024 at 10:14:29AM -0800, Casey Schaufler wrote:

[...]

> 
>>> 2.) Implement key/value mapping for inode specific storage.
>>> 
>>> The key would be a sub-system specific numeric value that returns a
>>> pointer the sub-system uses to manage its inode specific memory for a
>>> particular inode.
>>> 
>>> A participating sub-system in turn uses its identifier to register an
>>> inode specific pointer for its sub-system.
>>> 
>>> This strategy loses O(1) lookup complexity but reduces total memory
>>> consumption and only imposes memory costs for inodes when a sub-system
>>> desires to use inode specific storage.
> 
>> SELinux and Smack use an inode blob for every inode. The performance
>> regression boggles the mind. Not to mention the additional
>> complexity of managing the memory.
> 
> I guess we would have to measure the performance impacts to understand
> their level of mind boggliness.
> 
> My first thought is that we hear a huge amount of fanfare about BPF
> being a game changer for tracing and network monitoring.  Given
> current networking speeds, if its ability to manage the storage
> needed for its purposes were truly abysmal, the industry wouldn't be
> finding the technology useful.
> 
> Beyond that.
> 
> As I noted above, the LSM could be an independent subscriber.  The
> pointer to register would come from the kmem_cache allocator as it
> does now, so that cost is idempotent with the current implementation.
> The pointer registration would also be a single instance cost.
> 
> So the primary cost differential over the common arena model will be
> the complexity costs associated with lookups in a red/black tree, if
> we used the old IMA integrity cache as an example implementation.
> 
> As I noted above, these per-inode local storage structures are complex
> in and of themselves, including lists and locks.  If touching an inode
> involves locking and walking lists and the like it would seem that
> those performance impacts would quickly swamp an r/b lookup cost.

bpf local storage is designed to be an arena-like solution that works
for multiple bpf maps (and we don't know how many maps we need
ahead of time). Therefore, we may end up doing what you suggested
earlier: every LSM should use bpf inode storage. ;) I am only 90%
kidding.
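
To make the shape of that concrete: bpf local storage hangs a single
pointer off the object, and each map's value lives in a small per-object
list behind that pointer, so N maps do not need N fields in struct inode.
Below is a deliberately simplified userspace model of that idea, not
kernel code; names like fake_inode and inode_storage_get are invented
for illustration (the real helper is bpf_inode_storage_get, and the real
implementation caches the most recently used map for near-O(1) access):

```c
/* Userspace model of BPF local storage: one pointer in the "inode",
 * with each map's value hanging off a per-inode list behind it. */
#include <assert.h>
#include <stdlib.h>

struct local_storage_elem {
	int map_id;                        /* which map owns this value */
	void *value;
	struct local_storage_elem *next;
};

struct fake_inode {
	struct local_storage_elem *storage;  /* the single inode field */
};

/* Look up this map's value for the inode, optionally creating it.
 * The real kernel code uses RCU and a cached lookup; a plain list
 * walk is enough to show the data layout. */
static void *inode_storage_get(struct fake_inode *inode, int map_id,
			       size_t value_size, int create)
{
	struct local_storage_elem *e;

	for (e = inode->storage; e; e = e->next)
		if (e->map_id == map_id)
			return e->value;
	if (!create)
		return NULL;
	e = malloc(sizeof(*e));
	if (!e)
		return NULL;
	e->map_id = map_id;
	e->value = calloc(1, value_size);
	e->next = inode->storage;
	inode->storage = e;
	return e->value;
}
```

The point of the model is the memory trade-off being discussed: only
inodes that some map actually touches pay for storage, and adding a new
map (or, hypothetically, a new LSM) costs a list element rather than a
new field in struct inode.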

> 
>>> Approach 2 requires the introduction of generic infrastructure that
>>> allows an inode's key/value mappings to be located, presumably based
>>> on the inode's pointer value.  We could probably just resurrect the
>>> old IMA iint code for this purpose.
>>> 
>>> In the end it comes down to a rather standard trade-off in this
>>> business, memory vs. execution cost.
>>> 
>>> We would posit that option 2 is the only viable scheme if the design
>>> metric is overall good for the Linux kernel eco-system.
> 
>> No. Really, no. You need look no further than secmarks to understand
>> how a key based blob allocation scheme leads to tears. Keys are fine
>> in the case where use of data is sparse. They have no place when data
>> use is the norm.
> 
> Then it would seem that we need to get everyone to agree that we can
> get by with using two pointers in struct inode.  One for uses best
> served by common arena allocation and one for a key/pointer mapping,
> and then convert the sub-systems accordingly.
> 
> Or alternatively, getting everyone to agree that allocating a minimum
> of eight additional bytes for every subscriber to private inode data
> isn't the end of the world, even if use of the resource is sparse.

Christian suggested we could use an inode_addon structure, which is
similar to this idea. It won't work well in all contexts, though,
so it is not as good as the other bpf local storage types (task,
sock, cgroup).
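
For comparison, the key/value registration scheme from approach 2 can
be modeled in a few lines as well. This is a userspace sketch under
loose assumptions: a fixed per-inode slot array stands in for whatever
the kernel would really use (the old IMA iint code kept an external
rb-tree keyed by inode pointer instead), and subsys_register,
inode_set_data, and inode_get_data are invented names:

```c
/* Userspace model of approach 2: each subsystem registers once for a
 * numeric key, then stores its per-inode pointer under that key
 * instead of owning a dedicated field in struct inode. */
#include <assert.h>
#include <stddef.h>

#define MAX_SUBSYS 8

struct fake_inode {
	void *subsys_data[MAX_SUBSYS];   /* sparse per-subsystem slots */
};

static int next_key;                     /* next unassigned key */

/* One-time registration cost per subsystem; returns its key. */
static int subsys_register(void)
{
	return next_key < MAX_SUBSYS ? next_key++ : -1;
}

static void inode_set_data(struct fake_inode *inode, int key, void *data)
{
	inode->subsys_data[key] = data;
}

static void *inode_get_data(struct fake_inode *inode, int key)
{
	return inode->subsys_data[key];
}
```

The sketch also shows where Casey's objection bites: a subsystem like
SELinux or Smack that attaches data to every inode pays the mapping
lookup on every access, whereas a dedicated pointer field is a single
dereference.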

Thanks,
Song

> 
> Of course, experience would suggest, that getting everyone in this
> community to agree on something is roughly akin to throwing a hand
> grenade into a chicken coop with an expectation that all of the
> chickens will fly out in a uniform flock formation.
> 
> As always,
> Dr. Greg
> 
> The Quixote Project - Flailing at the Travails of Cybersecurity
>              https://github.com/Quixote-Project
