Message-ID: <20250227121207.GA15116@wind.enjellic.com>
Date: Thu, 27 Feb 2025 06:12:07 -0600
From: "Dr. Greg" <greg@...ellic.com>
To: Casey Schaufler <casey@...aufler-ca.com>
Cc: Paul Moore <paul@...l-moore.com>, linux-security-module@...r.kernel.org,
        linux-kernel@...r.kernel.org, jmorris@...ei.org
Subject: Re: [PATCH v4 2/14] Add TSEM specific documentation.

On Tue, Feb 25, 2025 at 07:48:31AM -0800, Casey Schaufler wrote:

Good morning, I hope this note finds the week going well for everyone.

> On 2/25/2025 4:01 AM, Dr. Greg wrote:
> > On Tue, Jan 28, 2025 at 05:23:52PM -0500, Paul Moore wrote:
> >
> > For the record, further documentation of our replies to TSEM technical
> > issues.
> >
> > ...
> >
> > Further, TSEM is formulated on the premise that software teams,
> > as a by product of CI/CD automation and testing, can develop precise
> > descriptions of the security behavior of their workloads.

> I've said it before, and I'll say it again. This premise is
> hopelessly naive. If it was workable you'd be able to use SELinux
> and audit2allow to create perfect security, and it would have been
> done 15 years ago.  The whole idea that you can glean what a
> software system is *supposed* to do from what it *does* flies
> completely in the face of basic security principles.

You view our work as hopelessly naive because you, and perhaps others,
view it through a 45+ year old lens of classic subject/object
mandatory controls that possess only limited dimensionality.

We view it through a lens of 10+ years of developing new multi-scale
methods for modeling alpha2-adrenergic receptor antagonists... :-)

We don't offer this observation just in jest.  If people don't
understand what we mean by this, they should consider the impact that
Singular Value Decomposition methods had when they were brought over
from engineering and applied to machine learning and classification.

A quote from John von Neumann, circa 1949, would seem appropriate:

"It would appear that we have reached the limits of what is
 possible to achieve with computer technology, although one should be
 careful with such statements, as they tend to sound pretty silly in 5
 years."

If anyone spends time understanding the generative functions that we
are using, particularly the task identity model, they will find that
the coefficients that define the permitted behaviors have far more
specificity, with respect to classifying what a system is *supposed*
to do, than the two, possibly three dimensions of classic
subject/object controls.
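To illustrate the dimensionality argument above (this is a toy sketch; the field names and hashing scheme are invented for illustration and are not TSEM's actual task identity encoding), compare a classic subject/object tuple with an identity computed over many event coefficients:

```python
import hashlib

# A classic mandatory access control decision considers only two,
# possibly three, dimensions.
classic = ("subject_label", "object_label", "read")

# A hypothetical high-dimensional event description: every field is a
# coefficient contributing to the identity of a security behavior.
event = {
    "hook": "file_open",
    "task_digest": "sha256:deadbeef",  # illustrative digest of the task
    "uid": 1000,
    "gid": 1000,
    "path": "/var/lib/app/data",
    "mode": 0o600,
    "flags": "O_RDONLY",
}

def behavior_identity(event: dict) -> str:
    """Collapse all coefficients into one identity for the behavior."""
    canonical = ",".join(f"{k}={event[k]}" for k in sorted(event))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

A change in any single coefficient yields a different identity, which is what gives the classification its specificity.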

More specifically to the issues you raise.

Your SELinux/audit2allow analogy is flawed and is not a relevant
comparison to what we are implementing.  audit2allow is incapable of
defining a closed set of allowed security behaviors that are
*supposed* to be exhibited by a workload.

audit2allow only generates what amount to permitted exceptions to a
security model, after the model has failed and, hopefully, before
people have simply turned off the infrastructure in frustration
because they needed a working system.

Unit testing of a workload under TSEM produces a closed set of
high-resolution permitted behaviors generated by the normal
functioning of that workload; in other words, all of the security
behaviors that are exhibited when the workload is doing what it is
*supposed* to do.  TSEM operates under default-deny criteria, so if
workload testing is insufficient in coverage, any unexpressed behavior
will be denied, thus blocking, or alerting on, any undesired security
behavior.
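The train-then-enforce cycle described above can be sketched as follows.  This is a deliberately minimal toy model, not the TSEM implementation, and the event fields are hypothetical:

```python
import hashlib

def event_id(event: dict) -> str:
    """Map a security event to a single identity digest."""
    canonical = ",".join(f"{k}={v}" for k, v in sorted(event.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

class BehaviorModel:
    """Closed set of permitted behaviors with default-deny enforcement."""
    def __init__(self):
        self.permitted = set()
        self.training = True

    def observe(self, event: dict):
        # Unit-test phase: record every behavior the workload exhibits.
        if self.training:
            self.permitted.add(event_id(event))

    def seal(self):
        # End of training: the set of permitted behaviors is now closed.
        self.training = False

    def decide(self, event: dict) -> bool:
        # Default deny: anything outside the training set is rejected.
        return event_id(event) in self.permitted

model = BehaviorModel()
model.observe({"hook": "file_open", "path": "/var/log/app.log"})
model.seal()
print(model.decide({"hook": "file_open", "path": "/var/log/app.log"}))  # True
print(model.decide({"hook": "file_open", "path": "/etc/shadow"}))       # False
```

The point of the sketch is the asymmetry: coverage gaps in training produce denials, not silent permissions.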

I believe our team is unique in these conversations in being the only
group that has ever compiled a kernel with TSEM enabled and actually
spent time running and testing it with the trust orchestrators and
modeling tools we provide.  That includes unit testing workloads and
then running the models developed from those tests against kernels and
application stacks with documented vulnerabilities, to determine
whether the models can detect the deviations an exploit of those
vulnerabilities generates from what the workload is *supposed* to be
doing.

If anyone is interested in building and testing TSEM and can
demonstrate that undesired security behaviors, absent from its
training set, can escape detection, we would certainly welcome such an
example so we can review why it occurs and integrate it into our
testing and development framework.

FWIW, a final thought for those reading along at home.

TSEM is not as much an LSM as it is a generic framework for driving
mathematical models over the basis set of information provided by the
LSM hooks.
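In that framing, any model that consumes the stream of hook events can be plugged in.  As a purely illustrative sketch (the interface and the frequency model here are invented for this email, not TSEM code), a non-deterministic model over the same basis set might look like:

```python
from collections import Counter

class EventModel:
    """Generic interface: a model driven by LSM hook events."""
    def consume(self, hook: str) -> bool:
        raise NotImplementedError

class FrequencyModel(EventModel):
    """A deliberately trivial probabilistic stand-in: permit a hook
    only if it appeared at least `threshold` times in training."""
    def __init__(self, training_hooks, threshold=1):
        self.counts = Counter(training_hooks)
        self.threshold = threshold

    def consume(self, hook: str) -> bool:
        return self.counts[hook] >= self.threshold

model = FrequencyModel(["file_open", "task_alloc", "file_open"])
print(model.consume("file_open"))            # True
print(model.consume("ptrace_access_check"))  # False
```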

All of the above starts the conversation with deterministic models; we
can begin arguing about the relevance of probabilistic and inferential
models at everyone's convenience.  The latter two will probably drive
how the industry does security for the next 45 years.

Have a good day.

As always,
Dr. Greg

The Quixote Project - Flailing at the Travails of Cybersecurity
              https://github.com/Quixote-Project
