Message-ID: <93e2744a-6220-4c44-b7a8-a709c84bd788@schaufler-ca.com>
Date: Mon, 25 Nov 2024 12:49:31 -0800
From: Casey Schaufler <casey@...aufler-ca.com>
To: "Dr. Greg" <greg@...ellic.com>
Cc: Song Liu <songliubraving@...a.com>,
James Bottomley <James.Bottomley@...senPartnership.com>,
"jack@...e.cz" <jack@...e.cz>, "brauner@...nel.org" <brauner@...nel.org>,
Song Liu <song@...nel.org>, "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-security-module@...r.kernel.org"
<linux-security-module@...r.kernel.org>, Kernel Team <kernel-team@...a.com>,
"andrii@...nel.org" <andrii@...nel.org>,
"eddyz87@...il.com" <eddyz87@...il.com>, "ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"martin.lau@...ux.dev" <martin.lau@...ux.dev>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"kpsingh@...nel.org" <kpsingh@...nel.org>,
"mattbobrowski@...gle.com" <mattbobrowski@...gle.com>,
"amir73il@...il.com" <amir73il@...il.com>,
"repnop@...gle.com" <repnop@...gle.com>,
"jlayton@...nel.org" <jlayton@...nel.org>, Josef Bacik
<josef@...icpanda.com>, "mic@...ikod.net" <mic@...ikod.net>,
"gnoack@...gle.com" <gnoack@...gle.com>,
Casey Schaufler <casey@...aufler-ca.com>
Subject: Re: [PATCH bpf-next 0/4] Make inode storage available to tracing prog
On 11/23/2024 9:01 AM, Dr. Greg wrote:
>>> Here is another thought in all of this.
>>>
>>> I've mentioned the old IMA integrity inode cache a couple of times in
>>> this thread. The most peaceable path forward may be to look at
>>> generalizing that architecture so that a sub-system that wanted inode
>>> local storage could request that an inode local storage cache manager
>>> be implemented for it.
>>>
>>> That infrastructure was based on a red/black tree that used the inode
>>> pointer as a key to locate a pointer to a structure that contained
>>> local information for the inode. That takes away the need to embed
>>> something in the inode structure proper.
>>>
>>> Since insertion and lookup are O(log n) in the number of tracked
>>> inodes, it would seem to be a good fit for sparse utilization
>>> scenarios.
>>>
>>> An extra optimization that may be possible would be to maintain an
>>> indicator flag tied to the filesystem superblock that would provide a
>>> simple binary answer as to whether any local inode cache managers have
>>> been registered for inodes on a filesystem. That would allow the
>>> lookup to be completely skipped with a simple conditional test.
>>>
>>> If the infrastructure was generalized to request and release cache
>>> managers it would be suitable for systems, implemented as modules,
>>> that have a need for local inode storage.
>> Do you think that over the past 20 years no one has thought of this?
>> We're working to make the LSM infrastructure cleaner and more
>> robust. Adding the burden of memory management to each LSM is a
>> horrible idea.
> No, I cannot subscribe to the notion that I, personally, know what
> everyone has thought about in the last 20 years.
>
> I do know, personally, that very talented individuals who are involved
> with large security sensitive operations question the trajectory of
> the LSM. That, however, is a debate for another venue.
I invite anyone who would "question the trajectory" of the LSM to
step up and do so publicly. I don't claim to be the most talented
individual working in the security community, but I am busting my
butt to get the work done. Occasionally I've had corporate backing,
but generally I've been doing it as a hobbyist on my own time. You
can threaten the LSM developers with the wrath of "large security
sensitive operations", but in the absence of participation in the
process you can't expect much to change.
> For the lore record and everyone reading along at home, you
> misinterpreted or did not read closely my e-mail.
>
> We were not proposing adding memory management to each LSM; we were
> suggesting to Song Liu that generalizing what was the old IMA inode
> integrity infrastructure may be a path forward for sub-systems that
> need inode local storage, particularly systems that have sparse
> occupancy requirements.
>
> Everyone has their britches in a knicker about performance.
Darn tootin'! You can't work in security for very long before you
run up against those who hate security because of the real or
perceived performance impact. To be successful in security development
it is essential to have a good grasp of the impact on other aspects
of the system. It is vital to understand how the implementation
affects others and why it is the best way to accomplish the goals.
> Note that we called out a possible optimization for this architecture
> so that there would be no need to even hit the r/b tree if a
> filesystem had no sub-systems that had requested sparse inode local
> storage for that filesystem.
>
>>> It also offers the ability for implementation independence, which is
>>> always a good thing in the Linux community.
>> Generality for the sake of generality is seriously overrated.
>> File systems have to be written to fit into the VFS infrastructure,
>> network protocols have to work with sockets without impacting the
>> performance of others, and so forth.
> We were not advocating generality for the sake of generality; we were
> suggesting a generalized architecture that does not require expansion
> of struct inode, because Christian has publicly indicated there is
> no appetite by the VFS maintainers for consuming additional space in
> struct inode for infrastructure requiring local inode storage.
>
> You talk about cooperation, yet you object to any consideration that
> the LSM should participate in a shared arena environment where
> sub-systems wanting local inode storage could just request a block in
> a common arena. The LSM, in this case, is just like a filesystem
> since it is a consumer of infrastructure supplied by the VFS and
> should thus cooperate with other consumers of VFS infrastructure.
I am perfectly open to a change that improves LSM performance.
This would not be the case with this proposal.