Message-ID: <fbf051ea-5a6b-37d0-e7b7-6513e4da9273@canonical.com>
Date: Sun, 15 Aug 2021 02:47:28 -0700
From: John Johansen <john.johansen@...onical.com>
To: Sergey Senozhatsky <senozhatsky@...omium.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Peter Zijlstra <peterz@...radead.org>,
Tomasz Figa <tfiga@...omium.org>, linux-kernel@...r.kernel.org,
linux-security-module@...r.kernel.org
Subject: Re: apparmor: global buffers spin lock may get contended
On 7/13/21 6:19 AM, Sergey Senozhatsky wrote:
> Hi,
>
> We've noticed that apparmor has switched from using a per-CPU buffer pool
> and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.
>
> This seems to be causing some contention on our build machines (which
> have quite a few cores), because that global spin lock is now part of
> the stat() syscall path (and perhaps some others).
>
> E.g.
>
> - 9.29% 0.00% clang++ [kernel.vmlinux]
> - 9.28% entry_SYSCALL_64_after_hwframe
> - 8.98% do_syscall_64
> - 7.43% __do_sys_newlstat
> - 7.43% vfs_statx
> - 7.18% security_inode_getattr
> - 7.15% apparmor_inode_getattr
> - aa_path_perm
> - 3.53% aa_get_buffer
> - 3.47% _raw_spin_lock
> 3.44% native_queued_spin_lock_slowpath
> - 3.49% aa_put_buffer.part.0
> - 3.45% _raw_spin_lock
> 3.43% native_queued_spin_lock_slowpath
>
> Can we fix this contention?
>
Sorry, this got filtered to the wrong mailbox. Yes, this is something that can
be improved, and it was a concern when the switch was made from per-CPU buffers
to the global pool.
We can look into doing a hybrid approach where we cache a buffer per CPU
from the global pool. The trick will be deciding when the cached buffer
can be returned to the pool, so we don't run into the problems that led to
df323337e507a0009d3db1ea.
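
Roughly, the idea would look something like the sketch below. Only
aa_get_buffer()/aa_put_buffer() are real symbols (they show up in the
perf trace above); the pool, struct, and size names are made up for
illustration, and it glosses over the empty-pool/atomic handling that
df323337e507a0009d3db1ea was actually about.

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/limits.h>

/* illustrative buffer wrapper, not the real apparmor type */
struct aa_buf {
	struct list_head list;
	char buffer[];
};

static LIST_HEAD(global_buffers);		/* global pool (assumed) */
static DEFINE_SPINLOCK(global_buffers_lock);
static DEFINE_PER_CPU(struct aa_buf *, pcpu_buf);	/* per-CPU cache slot */

char *aa_get_buffer(void)
{
	struct aa_buf *b;

	/* fast path: grab this CPU's cached buffer, if any, lock free */
	b = this_cpu_xchg(pcpu_buf, NULL);
	if (b)
		return b->buffer;

	/* slow path: fall back to the global pool under the global lock */
	spin_lock(&global_buffers_lock);
	b = list_first_entry_or_null(&global_buffers, struct aa_buf, list);
	if (b)
		list_del(&b->list);
	spin_unlock(&global_buffers_lock);
	if (b)
		return b->buffer;

	/* pool empty: allocate a new buffer (size is illustrative only) */
	b = kmalloc(sizeof(*b) + PATH_MAX, GFP_KERNEL);
	return b ? b->buffer : NULL;
}

void aa_put_buffer(char *buf)
{
	struct aa_buf *b;

	if (!buf)
		return;
	b = container_of(buf, struct aa_buf, buffer[0]);

	/* fast path: park the buffer in this CPU's cache slot if it is empty */
	if (!this_cpu_cmpxchg(pcpu_buf, NULL, b))
		return;

	/* slot already occupied: return this buffer to the global pool */
	spin_lock(&global_buffers_lock);
	list_add(&b->list, &global_buffers);
	spin_unlock(&global_buffers_lock);
}

With something like that, the common get/put pair on a busy CPU never
touches global_buffers_lock; the open question remains when (and how
many) cached buffers should be handed back so memory use stays bounded.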