Date: Thu, 20 Jun 2024 18:54:30 +0200
From: Roberto Sassu <roberto.sassu@...weicloud.com>
To: "Dr. Greg" <greg@...ellic.com>
Cc: Paul Moore <paul@...l-moore.com>, corbet@....net, jmorris@...ei.org, 
 serge@...lyn.com, akpm@...ux-foundation.org, shuah@...nel.org, 
 mcoquelin.stm32@...il.com, alexandre.torgue@...s.st.com, mic@...ikod.net, 
 linux-security-module@...r.kernel.org, linux-doc@...r.kernel.org, 
 linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org, 
 bpf@...r.kernel.org, zohar@...ux.ibm.com, dmitry.kasatkin@...il.com, 
 linux-integrity@...r.kernel.org, wufan@...ux.microsoft.com,
 pbrobinson@...il.com,  zbyszek@...waw.pl, hch@....de, mjg59@...f.ucam.org,
 pmatilai@...hat.com,  jannh@...gle.com, dhowells@...hat.com,
 jikos@...nel.org, mkoutny@...e.com,  ppavlu@...e.com, petr.vorel@...il.com,
 mzerqung@...inter.de, kgold@...ux.ibm.com,  Roberto Sassu
 <roberto.sassu@...wei.com>
Subject: Re: [PATCH v4 00/14] security: digest_cache LSM

On Thu, 2024-06-20 at 11:32 -0500, Dr. Greg wrote:
> On Wed, Jun 19, 2024 at 06:37:49PM +0200, Roberto Sassu wrote:
> 
> Good morning Roberto, I hope your week is going well, greetings to
> everyone else copied as well.
> 
> > On Wed, 2024-06-19 at 12:34 -0400, Paul Moore wrote:
> > > On Wed, Jun 19, 2024 at 11:55 AM Roberto Sassu
> > > <roberto.sassu@...weicloud.com> wrote:
> > > > On Wed, 2024-06-19 at 11:49 -0400, Paul Moore wrote:
> > > > > On Wed, Jun 19, 2024 at 3:59 AM Roberto Sassu
> > > > > <roberto.sassu@...weicloud.com> wrote:
> > > > > > On Tue, 2024-06-18 at 19:20 -0400, Paul Moore wrote:
> > > > > > > On Mon, Apr 15, 2024 at 10:25 AM Roberto Sassu
> > > > > > > <roberto.sassu@...weicloud.com> wrote:
> > > > > > > > 
> > > > > > > > From: Roberto Sassu <roberto.sassu@...wei.com>
> > > > > > > > 
> > > > > > > > Integrity detection and protection have long been desirable
> > > > > > > > features, to reach a large user base and mitigate the risk of
> > > > > > > > software flaws and attacks.
> > > > > > > > 
> > > > > > > > However, while solutions exist, they struggle to reach that
> > > > > > > > large user base, because they impose constraints on performance,
> > > > > > > > flexibility and configurability that only security-conscious
> > > > > > > > people are willing to accept.
> > > > > > > > 
> > > > > > > > This is where the new digest_cache LSM comes into play: it
> > > > > > > > offers additional support for new and existing integrity
> > > > > > > > solutions, to make them faster and easier to deploy.
> > > > > > > > 
> > > > > > > > The full documentation with the motivation and the solution details can be
> > > > > > > > found in patch 14.
> > > > > > > > 
> > > > > > > > The IMA integration patch set will be introduced separately. Also a PoC
> > > > > > > > based on the current version of IPE can be provided.
> > > > > > > 
> > > > > > > I'm not sure we want to implement a cache as an LSM.  I'm sure it would
> > > > > > > work, but historically LSMs have provided some form of access control,
> > > > > > > measurement, or other traditional security service.  A digest cache,
> > > > > > > while potentially useful for a variety of security related
> > > > > > > applications, is not a security service by itself, it is simply a file
> > > > > > > digest storage mechanism.
> > > > > > 
> > > > > > Uhm, currently the digest_cache LSM is heavily based on the LSM
> > > > > > infrastructure:
> > > > > 
> > > > > I understand that, but as I said previously, I don't believe that we
> > > > > want to support an LSM which exists solely to provide a file digest
> > > > > cache.  LSMs should be based around the idea of some type of access
> > > > > control, security monitoring, etc.
> > > > > 
> > > > > Including a file digest cache in IMA, or implementing it as a
> > > > > standalone piece of kernel functionality, are still options.  If you
> > > > > want to pursue this, I would suggest that including the digest cache
> > > > > as part of IMA would be the easier of the two options; if it proves to
> > > > > be generally useful outside of IMA, it can always be abstracted out to
> > > > > a general kernel module/subsystem.
> > > > 
> > > > Ok. I thought about IPE and eBPF as potential users. But if you think
> > > > that adding as part of IMA would be easier, I could try to pursue that.
> > > 
> > > It isn't clear to me how this would interact with IPE and/or eBPF, but
> > > if you believe there is value there I would encourage you to work with
> > > those subsystem maintainers.  If the consensus is that a general file
> > > digest cache is useful then you should pursue the digest cache as a
> > > kernel subsystem, just not an LSM.
> 
> > Making it a kernel subsystem would likely mean replicating what the LSM
> > infrastructure already provides: an inode (security) blob, and
> > notification about file/directory changes.
> > 
> > I guess I will go for the IMA route...
> 
> This thread brings up an issue that we have been thinking about but
> has been on the back burner.
> 
> Roberto, I'm assuming you have seen our TSEM submissions go by.  Our
> V4 release will be immediately after the Fourth of July holiday week
> here in the states.
> 
> Since TSEM implements a generic security modeling framework for the
> kernel, it ends up implementing a superset of IMA functionality.  That
> required us to implement our own file digest generation and caching
> infrastructure.
> 
> Given the trajectory that things are on with respect to security,
> there is only going to be more demand for file digests and their
> associated caching.  Doesn't seem like it makes a lot of sense to
> have multiple teams replicating what is largely the same
> functionality.
> 
> If your group would have interest, we would certainly be willing to
> entertain conversations on how we could collaborate to brew up
> something that would be of mutual benefit to everyone who has a need
> for this type of infrastructure.

Hi Greg

Sure, I would be happy to give you more details on how the digest_cache
LSM works and how you could make use of it.

> As you noted, consumers of the BPF LSM would also be a clear candidate
> for generic infrastructure.  One of the issues blocking a BPF based
> integrity implementation is that BPF itself is not going to be able
> generate digests on its own.  So it would seem to make sense to have
> whatever gets built have a kfunc accessible API.  Plenty of other
> additional warts on that front as well but getting access to digests
> is the necessary starting point.

Yes, adding a few kfuncs is what I had in mind.

> Given what we have seen with IMA's challenge with respect to overlayfs
> issues and file versioning issues in general, it would seem to be of
> profit to have all these issues addressed uniformly and in one
> place.
> 
> Since virtually everything that is accessing this infrastructure is
> going to be an LSM, we would envision APIs out of a common
> infrastructure, invoked by the event handlers of the various LSMs
> interested in integrity information, driving the cache generation and
> maintenance.  That would seem to have all of the benefits of being
> implemented by the LSM infrastructure without necessarily being an
> 'LSM' in and of itself.

Yes, this is exactly how it works. There is a generic API for users to
get a digest cache and query it. The LSM infrastructure is needed for
attaching data to an inode and for being notified of file backend
changes.

The digest_cache LSM makes the process of retrieving the reference
digests for a given file transparent to its users: a user that needs to
check the integrity of a file can simply query the calculated file
digest against the cache.

> We assume that everyone would want to maintain the O(1) lookup
> characteristics of what the LSM inode blob mechanism provides.  We
> would presume that a common caching architecture would return a
> pointer to the structure that the digest cache maintains describing
> the various digests associated with the contents of a file, as there
> will be a need for multiple digest support, when an LSM hands the
> cache an inode referencing a file.  An LSM could then place that
> pointer in its own inode blob for future reference.

That is what I initially thought of and implemented. But I realized
that pinning the digest cache to other inode security blobs makes it
more difficult to free the digest cache (when its reference count drops
to zero).

I opted instead for releasing a digest cache when not in use, and for
introducing a notification mechanism, similar to what LSMs use to
notify about policy changes, which reports when the file backend
changes, so that LSMs can invalidate any decisions based on the
affected digest cache. This mechanism is already working in IMA:

https://lore.kernel.org/linux-integrity/20240415161044.2572438-10-roberto.sassu@huaweicloud.com/

> Either that or, probably better, stick a pointer into the inode structure
> itself that references its digest cache object and it would get
> populated by the first event that opens the associated file.

Yes, the digest cache pointer is stored both in the inode that should
be verified with the digest cache, and in the inode the digest cache
was created from. The first pointer avoids resolving the file-to-package
link every time a digest cache is requested for that inode.

> > Roberto
> 
> So an open invitation to anyone that would want to discuss
> requirements around a common implementation.
> 
> Have a good weekend.

Thanks, the same to you.

Roberto

> As always,
> Dr. Greg
> 
> The Quixote Project - Flailing at the Travails of Cybersecurity
>               https://github.com/Quixote-Project
