Message-ID: <b30e8d56703dfd84778fa73845eaa1ec@paul-moore.com>
Date: Thu, 16 Oct 2025 17:18:57 -0400
From: Paul Moore <paul@...l-moore.com>
To: Hongru Zhang <zhanghongru06@...il.com>, stephen.smalley.work@...il.com, omosnace@...hat.com
Cc: linux-kernel@...r.kernel.org, selinux@...r.kernel.org, zhanghongru@...omi.com
Subject: Re: [PATCH v3 1/2] selinux: Make avc cache slot size configurable during boot
On Sep 26, 2025 Hongru Zhang <zhanghongru06@...il.com> wrote:
>
> Under high load on mobile devices, permission checks can happen more
> than 90,000 times per second (8-core system). With the default
> 512-node cache configuration, avc cache misses happen more often and
> occasionally lead to long periods (>2ms) with IRQs off on both big
> and little cores, which degrades system real-time behavior.
>
> An actual call stack is as follows:
> => avc_compute_av
> => avc_perm_nonode
> => avc_has_perm_noaudit
> => selinux_capable
> => security_capable
> => capable
> => __sched_setscheduler
> => do_sched_setscheduler
> => __arm64_sys_sched_setscheduler
> => invoke_syscall
> => el0_svc_common
> => do_el0_svc
> => el0_svc
> => el0t_64_sync_handler
> => el0t_64_sync
>
> Although we can raise the number of avc nodes through
> /sys/fs/selinux/cache_threshold to mitigate the long IRQs-off
> periods, the fixed number of cache slots means hash collisions make
> the average bucket length grow, which increases avc_search_node
> latency.
>
> Make the avc cache slot count configurable as well; with fine
> tuning, we can mitigate the long IRQs-off periods at the cost of a
> slight avc_search_node performance regression.
>
> Theoretically, the main overhead is memory consumption.
>
> avc_search_node latency, averaged over about 100,000,000 calls, on
> Qcom SM8750, 6.6.30-android15-8:
>
> Case 1:
> +---------+---------------------+------------------------+
> | | no-patch (512/512) | with-patch (512/512) |
> +---------+---------------------+------------------------+
> | latency | 85 ns | 87 ns |
> +---------+---------------------+------------------------+
>
> Case 2:
> +---------+---------------------+------------------------+
> | | no-patch (8192/512) | with-patch (8192/8192) |
> +---------+---------------------+------------------------+
> | latency | 277 ns | 106 ns |
> +---------+---------------------+------------------------+
>
> Case 1 shows the 512-node configuration has a ~2% performance
> regression with the patch.
> Case 2 shows the 8192-node configuration has a ~61% latency benefit
> with the patch.
>
> Signed-off-by: Hongru Zhang <zhanghongru@...omi.com>
> Reviewed-by: Stephen Smalley <stephen.smalley.work@...il.com>
> ---
> .../admin-guide/kernel-parameters.txt | 4 ++
> security/selinux/avc.c | 68 +++++++++++++------
> 2 files changed, 50 insertions(+), 22 deletions(-)
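
(For anyone skimming: the boot parameter approach above would presumably
be wired up with an early __setup() hook along these lines; the parameter
name and variable below are hypothetical, just to sketch the shape of it,
not taken from the actual patch.)

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/log2.h>

/* number of AVC hash slots, overridable at boot (hypothetical name) */
static unsigned int avc_cache_slots __ro_after_init = 512;

static int __init avc_cache_slots_setup(char *str)
{
	unsigned int slots;

	/* ignore malformed or zero values and keep the default */
	if (kstrtouint(str, 0, &slots) || !slots)
		return 1;

	/* round up to a power of two so the hash mask stays cheap */
	avc_cache_slots = roundup_pow_of_two(slots);
	return 1;
}
__setup("selinux_avc_slots=", avc_cache_slots_setup);
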
I would expect the number of active AVC nodes, and AVC churn in general,
to be very policy dependent; some policies and use cases simply result in
more AVC nodes than others. With that in mind, I'm wondering if, instead
of using a kernel command line parameter to specify the number of AVC
buckets, we should include an AVC size "hint" in the policy that we can
use to size the AVC when loading a new policy.
Thoughts?
I think it would be important to treat it strictly as a "hint", as that
would make life easier; e.g. if the previous policy hinted at a larger
AVC, we may not want to bother with reducing the number of buckets.
I would suggest starting with an implementation that uses the hint as a
power of two for the number of AVC slots/buckets, with a value of '0'
indicating the default (512 slots, i.e. '2^9').
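
To make the power-of-two hint concrete, the conversion on policy load
could be as simple as the sketch below; the names and the upper bound
are made up for illustration, not a proposed interface:

/*
 * Hypothetical: map a policy-provided size hint to an AVC slot count.
 * A hint of 0 means "use the default" (512 slots, i.e. 2^9); any other
 * value is the exponent for a power-of-two slot count, clamped so a
 * bad hint cannot ask for an absurd amount of memory.
 */
#define AVC_DEF_CACHE_SLOTS_SHIFT	9
#define AVC_MAX_CACHE_SLOTS_SHIFT	16

static unsigned int avc_hint_to_slots(u8 hint)
{
	u8 shift = hint ? hint : AVC_DEF_CACHE_SLOTS_SHIFT;

	if (shift > AVC_MAX_CACHE_SLOTS_SHIFT)
		shift = AVC_MAX_CACHE_SLOTS_SHIFT;

	return 1U << shift;
}
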
--
paul-moore.com