lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <290017a8-d943-570f-1f90-acecf1c075a1@schaufler-ca.com>
Date:   Wed, 3 Jun 2020 10:20:31 -0700
From:   Casey Schaufler <casey@...aufler-ca.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Paul Moore <paul@...l-moore.com>
Cc:     selinux@...r.kernel.org,
        LSM List <linux-security-module@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Casey Schaufler <casey@...aufler-ca.com>
Subject: Re: [GIT PULL] SELinux patches for v5.8

On 6/2/2020 5:31 PM, Linus Torvalds wrote:
> On Mon, Jun 1, 2020 at 6:07 PM Paul Moore <paul@...l-moore.com> wrote:
>> - A number of improvements to various SELinux internal data structures
>> to help improve performance.  We move the role transitions into a hash
>> table.  In the context structure we shift from hashing the context
>> string (aka SELinux label) to the structure itself, when it is valid.
>> This last change not only offers a speedup, but it helps us simplify
>> the code some as well.
> Side note since you mention performance work: in the past when I've
> looked at SELinux performance (generally as part of pathname lookup
> etc VFS loads), the biggest cost by far was that all the SELinux data
> structures take a ton of cache misses.
>
> Yes, some of the hashing shows up in the profiles, but _most_ of it
> was loading the data from inode->i_security etc.

The whole security blob scheme is based on the 20th century notion
that security beyond user identification lives in the realm of the
lunatic fringe. The use of security modules was expected to be rare.
Clearly we've moved on from that. A system without a security module
is the exception, not the rule.

> And the reason seemed to be that every single inode ends up having a
> separately allocated "struct inode_security_struct" (aka "isec"). Even
> if the contents are often all exactly the same for a large set of
> inodes that thus _could_ conceptually share the data.

There's state information as well as the access control attributes
in the SELinux and Smack blobs.

> Now, it used to be - before being able to stack security layers -
> SElinux would control that pointer, and it could have done some kind
> of sharing scheme with copy-on-write behavior (the way we do 'struct
> cred' for processes), and it would have caused a much smaller cache
> footprint (and thus likely much fewer cache misses).
>
> These days, that sharing of the i_security pointer across different
> security layers makes that sound really really painful.

Dealing with shared creds is reasonably painful, too.
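For readers following along, the 'struct cred'-style sharing Linus describes could look roughly like the toy sketch below: a refcounted label blob shared by many inodes, unshared only on modification. All names here (label_blob, label_get, label_make_writable) are illustrative, not real kernel APIs, and the refcount would of course be atomic in a real kernel.

```c
#include <stdlib.h>

/* Toy sketch of a refcounted, copy-on-write label blob shared across
 * inodes, analogous to how the kernel shares 'struct cred'. */
struct label_blob {
	long refcount;     /* sharing count; would be atomic in the kernel */
	unsigned int sid;  /* e.g. an SELinux security identifier */
};

static struct label_blob *label_alloc(unsigned int sid)
{
	struct label_blob *b = malloc(sizeof(*b));
	b->refcount = 1;
	b->sid = sid;
	return b;
}

/* Many inodes point at one blob: one allocation, one set of cache lines. */
static struct label_blob *label_get(struct label_blob *b)
{
	b->refcount++;
	return b;
}

static void label_put(struct label_blob *b)
{
	if (--b->refcount == 0)
		free(b);
}

/* Copy-on-write: unshare only when a caller needs to modify the label. */
static struct label_blob *label_make_writable(struct label_blob *b)
{
	if (b->refcount == 1)
		return b;	/* sole owner, modify in place */
	struct label_blob *copy = label_alloc(b->sid);
	label_put(b);
	return copy;
}
```

The footprint win is that N identically-labeled inodes touch one shared blob instead of N separately allocated ones; the pain Casey alludes to is everything around the unshare path.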

> But I do wonder if anybody in selinux land (or general security
> subsystem land) has been thinking of maybe at least having a "this
> inode has no special labeling" marker that could possibly avoid having
> all those extra allocations.

The fine granularity of SELinux policy isn't conducive to that.
There's also the state information to deal with. 
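For context, the marker Linus suggests might be as simple as letting a NULL i_security mean "default/unlabeled", so the common case skips both the allocation and the cache miss of chasing a per-inode pointer. This is a hypothetical sketch, not how SELinux actually works; the struct and function names are made up.

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical: NULL i_security means "no special labeling". */
struct toy_inode {
	void *i_security;	/* NULL => use the default policy */
};

static bool default_policy_allows(int mask)
{
	(void)mask;
	return true;		/* cheap, no pointer chase */
}

static bool blob_policy_allows(void *blob, int mask)
{
	(void)blob; (void)mask;
	return false;		/* stand-in for a real per-label check */
}

static bool inode_permission_check(struct toy_inode *inode, int mask)
{
	if (!inode->i_security)		/* fast path: no dereference */
		return default_policy_allows(mask);
	return blob_policy_allows(inode->i_security, mask);
}
```

Casey's point stands against exactly this picture: with fine-grained SELinux policy and per-inode state, few inodes qualify for the NULL fast path in the first place.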

> Because it really does end up being visible in profiles how costly it
> is to look up any data behind inode->i_security.

We could make inode->i_security the blob itself, rather than a pointer to it.
That would have its own performance issues. I proposed doing exactly that in
2013, to the resounding sound of crickets.
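The two layouts being contrasted can be sketched as below. Struct names and the blob size are invented for illustration; the trade-off is that the inline form saves an allocation and a pointer chase per inode, but bloats every inode by the worst-case blob size even when no security module needs it.

```c
#include <stddef.h>

/* Today: a separately allocated blob and a pointer chase per inode. */
struct inode_ptr_blob {
	/* ... other inode fields ... */
	void *i_security;		/* separately kmalloc'ed blob */
};

/* Proposed: the blob lives inline in the inode, so reading the label
 * needs no extra allocation and usually no extra cache miss. */
struct inode_inline_blob {
	/* ... other inode fields ... */
	unsigned char i_security[32];	/* inline storage, sized for all LSMs */
};
```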

I am painfully aware of the performance considerations that influence
the way we've had to implement security modules. But these
considerations have always emphasized the performance when security
modules are *not* being used, not when they are. I am encouraged to see
that being questioned. I believe that we can do better, but that some
of the constraints we've had to work under need to come off before we
can do a good job of it.

>                Linus
