Date:	Sun, 26 May 2013 10:59:55 -0700
From:	Casey Schaufler <casey@...aufler-ca.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
CC:	James Morris <jmorris@...ei.org>,
	Al Viro <viro@...iv.linux.org.uk>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Eric Paris <eparis@...hat.com>,
	James Morris <james.l.morris@...cle.com>,
	Casey Schaufler <casey@...aufler-ca.com>
Subject: Re: Stupid VFS name lookup interface..

On 5/25/2013 10:19 PM, Linus Torvalds wrote:
> On Sat, May 25, 2013 at 10:04 PM, James Morris <jmorris@...ei.org> wrote:
>> On Sat, 25 May 2013, Linus Torvalds wrote:
>>
>>> But I haven't even looked at what non-selinux setups do to
>>> performance. Last time I tried Ubuntu (they still use apparmor, no?),
>>> "make modules_install ; make install" didn't work for the kernel, and
>>> if the Ubuntu people don't want to support kernel engineers, I
>>> certainly am not going to bother with them. Who uses smack?
>> Tizen, perhaps a few others.
> Btw, it really would be good if security people started realizing that
> performance matters. It's annoying to see the security lookups cause
> 50% performance degradations on pathname lookup (and no, I'm not
> exaggerating, that's literally what it was before we fixed it - and
> no, by "we" I don't mean security people).

I think that we have a pretty good idea that performance matters.
I have never, not even once, tried to introduce a "security" feature
that was not subject to an objection based on its performance impact.
We are also acutely aware of the expectation that when our security
features are not present, the impact of their potential availability
must be zero.
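
To make that last point concrete, here is a minimal sketch of the
property I mean (hypothetical names, not the kernel's actual hook
plumbing): when the security option is compiled out the check must
vanish entirely, and when it is compiled in but no module has
registered, it should cost no more than one predictable branch.

struct inode;	/* opaque here; stands in for the object being checked */

#ifdef CONFIG_SECURITY
/* Hypothetical single-hook illustration, not the real LSM interface. */
extern int (*inode_permission_hook)(struct inode *inode, int mask);

static inline int security_inode_check(struct inode *inode, int mask)
{
	if (!inode_permission_hook)	/* no module loaded: one test and out */
		return 0;
	return inode_permission_hook(inode, mask);
}
#else
static inline int security_inode_check(struct inode *inode, int mask)
{
	return 0;			/* compiled away when security is off */
}
#endif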

The whole secid philosophy comes out of the need to keep security out
of other people's way, and it has a performance impact. Sure, SELinux
hashes its secid lookups, but a blob pointer gets you right where you
want to be. When we are constrained in unnatural ways, there are going
to be consequences. Performance is one. Code complexity is another.
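
As a rough illustration of the trade-off (hypothetical types and
helpers, not SELinux's or Smack's real code): with only a u32 secid in
hand, the module has to map the number back to its own data on every
check, typically through a hash table, while a blob pointer stored in
the kernel object is a single dereference.

#include <stdint.h>

typedef uint32_t u32;

/* Hypothetical label structure -- not any module's real one. */
struct sec_label {
	u32		secid;		/* the number the rest of the kernel sees */
	const char	*context;	/* the data the module actually wants */
};

/* Stand-in for an inode, sk_buff, etc. that carries a blob pointer. */
struct labeled_object {
	struct sec_label *security;
};

/* secid route: hash walk, compares, and a miss path on every check. */
struct sec_label *secid_table_lookup(u32 secid);

static inline struct sec_label *label_from_secid(u32 secid)
{
	return secid_table_lookup(secid);
}

/* blob route: the label was attached when the object was created. */
static inline struct sec_label *label_from_object(const struct labeled_object *obj)
{
	return obj->security;
}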

One need look no further than the recent discussions regarding Paul
Moore's suggested changes to sk_buff to see just how seriously
performance considerations impact security development. Because the
security data in the sk_buff is limited to a u32 instead of a blob
pointer, labeled networking performance is seriously worse than
unlabeled. Paul has gone to heroic lengths to come up with a change
that meets all of the performance criteria, and it has still been
rejected, not because the stated issues haven't been addressed, but
because someone else might have changes that "need those cache lines"
in the future.
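
To show what that constraint looks like in practice (again with
hypothetical, trimmed-down structures, not the real sk_buff): a u32
mark forces a table lookup on every packet to recover the module's
label, while a pointer-sized blob would hand the label over directly,
at the cost of growing a structure whose cache lines everyone is
fighting over.

#include <stdint.h>

typedef uint32_t u32;

struct pkt_label;				/* the module's per-packet data */
struct pkt_label *secmark_table_lookup(u32 mark);	/* per-packet lookup cost */

/* What security gets in the packet today: a bare 32-bit mark. */
struct skb_with_mark {
	u32 secmark;
};

/* What a blob pointer would look like: direct, but pointer-sized. */
struct skb_with_blob {
	struct pkt_label *security;
};

static inline struct pkt_label *label_from_mark(const struct skb_with_mark *skb)
{
	return secmark_table_lookup(skb->secmark);	/* paid on every packet */
}

static inline struct pkt_label *label_from_blob(const struct skb_with_blob *skb)
{
	return skb->security;				/* one dereference */
}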

Two of the recent Smack changes have been performance improvements.
Smack is currently under serious scrutiny for performance as it is
heading into small machines.

I'm not saying we can't do better, or that we (at least I) don't
appreciate any help we can get. We are, however, bombarded with
concern over the performance impact of what we're up to. All too
often it's not constructive criticism. Sometimes it is downright
hostile.

>
> There's a really simple benchmark that is actually fairly relevant:
> build a reasonable kernel ("make localmodconfig" or similar - not the
> normal distro kernel that has everything enabled) without debugging or
> other crud enabled, run that kernel, and then re-build the fully built
> kernel to make sure it's all in the disk cache. Then, when you don't
> need any IO, and don't need to recompile anything, do a "make -j".
>
> Assuming you have a reasonably modern desktop machine, it should take
> something like 5-10 seconds, of which almost everything is just "make"
> doing lots of stat() calls to see that everything is fully built. If
> it takes any longer, you're doing something wrong.
>
> Once you are at that point, just do "perf record -f -e cycles:pp make
> -j" and then "perf report" on the thing.
>
> (The "-e cycles:pp" is not necessary for the rough information, but it
> helps if you then want to go and annotate the assembler to see where
> the costs come from).
>
> If you see security functions at the top, you know that the security
> routines take more time than the real work the kernel is doing, and
> should realize that that would be a problem.
>
> Right now (zooming into the kernel only - ignoring the fact that make
> really spends a fair amount of time in user space) I get
>
>   9.79%      make  [k] __d_lookup_rcu
>   5.48%      make  [k] link_path_walk
>   2.94%      make  [k] avc_has_perm_noaudit
>   2.47%      make  [k] selinux_inode_permission
>   2.25%      make  [k] path_lookupat
>   1.89%      make  [k] generic_fillattr
>   1.50%      make  [k] lookup_fast
>   1.27%      make  [k] copy_user_generic_string
>   1.17%      make  [k] generic_permission
>   1.15%      make  [k] dput
>   1.12%      make  [k] inode_has_perm.constprop.58
>   1.11%      make  [k] __inode_permission
>   1.08%      make  [k] kmem_cache_alloc
>   ...
>
> so the permission checking is certainly quite noticeable, but it's by
> no means dominant. This is with both of the patches I've posted, but
> the numbers weren't all that different before (inode_has_perm and
> selinux_inode_permission used to be higher up in the list, now
> avc_has_perm_noaudit is the top selinux cost - which actually makes
> some amount of sense).
>
> So it's easy to have a fairly real-world performance profile that
> shows path lookup costs on a real test.
>
>                   Linus
>
