Message-ID: <20250919135803.556437-1-zhanghongru@xiaomi.com>
Date: Fri, 19 Sep 2025 21:58:03 +0800
From: Hongru Zhang <zhanghongru06@...il.com>
To: stephen.smalley.work@...il.com
Cc: linux-kernel@...r.kernel.org,
	omosnace@...hat.com,
	paul@...l-moore.com,
	selinux@...r.kernel.org,
	zhanghongru06@...il.com,
	zhanghongru@...omi.com
Subject: Re: [PATCH] selinux: Make avc cache slot size configurable during boot

> > Baseline: 512 nodes, 512 buckets, original avc_hash
> > A: 512 nodes, 512 buckets
> > B: 8192 nodes, 8192 buckets
> > C: 8192 nodes, 8192 buckets, original avc_hash
> > D: 8192 nodes, 4096 buckets ("large number" of nodes in limited buckets)
> >
> > 1. /sys/fs/selinux/avc/hash_stats
> >         a. assess the effectiveness of the optimized algorithm based on A, B
> >                 and Baseline. Expect bucket utilization rate: A ~= B > Baseline.
> > 2. total latency of hash operation and table lookup
> >         a. A vs Baseline: expect no obvious latency increase for A
> >         b. B vs A: expect B to be close to A
> >         c. C vs B: expect B to be no worse than C
> >         d. D vs C: see if we can save some memory with no obvious latency increase
>
> Thank you, looking forward to the results. Fun fact: the current
> avc_hash() function logic hasn't been changed since SELinux was first
> merged into Linux 2.6.0-test3.

Yes, I also noticed that. I tried MurmurHash3 and another algorithm
(referred to as Muladd below), and performance seems to rank
Muladd > MurmurHash3 > current algorithm. How about using Muladd?
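
For reference, the current mixer (unchanged since the initial merge) is a
plain XOR/shift; quoting mainline security/selinux/avc.c from memory (with
the slot-size patch applied, the mask uses avc_cache_slots instead of the
AVC_CACHE_SLOTS constant):

static inline u32 avc_hash(u32 ssid, u32 tsid, u16 tclass)
{
	return (ssid ^ (tsid<<2) ^ (tclass<<4)) & (AVC_CACHE_SLOTS - 1);
}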

Implementation of Muladd:
/* Mix each input with a distinct large odd constant (the 32-bit
 * golden ratio and two xxHash-style primes), then mask down to the
 * table size; avc_cache_slots must be a power of two. */
static inline u32 avc_hash(u32 ssid, u32 tsid, u16 tclass)
{
	return (ssid * 0x9E3779B9 + tsid * 0x85EBCA77 +
		tclass * 0xC2B2AE35) & (avc_cache_slots - 1);
}
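
A MurmurHash3-based avc_hash could take roughly the following shape:
fold the inputs together, then run the standard 32-bit finalizer
(fmix32). The combining step below is only illustrative; the exact
variant I measured is in the gist linked at the end.

/* Sketch of a MurmurHash3-style avc_hash: combine the inputs
 * (illustrative), then apply the MurmurHash3 32-bit finalizer. */
static inline u32 avc_hash_murmur3(u32 ssid, u32 tsid, u16 tclass)
{
	u32 h = ssid ^ (tsid << 1) ^ ((u32)tclass << 2);

	h ^= h >> 16;
	h *= 0x85ebca6b;
	h ^= h >> 13;
	h *= 0xc2b2ae35;
	h ^= h >> 16;

	return h & (avc_cache_slots - 1);
}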

Note: all test results below are based on the patch "selinux: Make avc cache slot size configurable during boot".

Here are the results:
1. Bucket utilization rate and length of longest chain
+--------------------------+-----------------------------------------+
|                          | Bucket utilization rate / longest chain |
|                          +------------+---------------+------------+
|                          |  original  |  MurmurHash3  |   Muladd   |
+--------------------------+------------+---------------+------------+
|  512 nodes,  512 buckets | 52.5%/7.5  |   60.2%/5.7 * | 58.2%/6.2 *|
+--------------------------+------------+---------------+------------+
| 1024 nodes,  512 buckets | 68.9%/12.1 |   80.2%/9.7   | 82.4%/8.9  |
+--------------------------+------------+---------------+------------+
| 2048 nodes,  512 buckets | 83.7%/19.4 |   93.4%/16.3  | 94.8%/15.2 |
+--------------------------+------------+---------------+------------+
| 8192 nodes, 8192 buckets | 49.5%/11.4 |   60.3%/7.4   | 61.9%/6.6  |
+--------------------------+------------+---------------+------------+
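
The utilization rate and chain length are derived from the AVC stats
file (utilization = buckets used / total buckets). For example, a read
matching the 52.5% baseline cell would look like this (illustrative
values):

# cat /sys/fs/selinux/avc/hash_stats
entries: 512
buckets used: 269/512
longest chain: 8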

2. avc_search_node latency (total latency of hash operation and table lookup)
+--------------------------+-----------------------------------------+
|                          |   latency of function avc_search_node   |
|                          +------------+---------------+------------+
|                          |  original  |  MurmurHash3  |   Muladd   |
+--------------------------+------------+---------------+------------+
|  512 nodes,  512 buckets |    87ns    |      84ns *   |    79ns *  |
+--------------------------+------------+---------------+------------+
| 1024 nodes,  512 buckets |    97ns    |      96ns     |    91ns    |
+--------------------------+------------+---------------+------------+
| 2048 nodes,  512 buckets |   118ns    |     113ns     |   110ns    |
+--------------------------+------------+---------------+------------+
| 8192 nodes, 8192 buckets |   106ns    |      99ns     |    94ns    |
+--------------------------+------------+---------------+------------+

In the starred cells, Muladd shows lower latency than MurmurHash3 even
though MurmurHash3 achieves slightly better bucket utilization; a
plausible explanation is that MurmurHash3 has greater per-hash
computational overhead than Muladd.
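
To separate pure hash-computation cost from the table walk, the two
mixers can also be compared in userspace. Below is a minimal,
self-contained sketch (not the methodology used for the numbers above;
hash_murmur3 is the hypothetical variant sketched earlier, and SLOTS
stands in for avc_cache_slots):

/* Userspace micro-benchmark sketch; build with: gcc -O2 bench.c -o bench */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define SLOTS 512u		/* stand-in for avc_cache_slots */
#define ITERS 100000000u

static uint32_t hash_muladd(uint32_t ssid, uint32_t tsid, uint16_t tclass)
{
	return (ssid * 0x9E3779B9u + tsid * 0x85EBCA77u +
		tclass * 0xC2B2AE35u) & (SLOTS - 1);
}

static uint32_t hash_murmur3(uint32_t ssid, uint32_t tsid, uint16_t tclass)
{
	uint32_t h = ssid ^ (tsid << 1) ^ ((uint32_t)tclass << 2);

	h ^= h >> 16;
	h *= 0x85ebca6bu;
	h ^= h >> 13;
	h *= 0xc2b2ae35u;
	h ^= h >> 16;
	return h & (SLOTS - 1);
}

/* Time ITERS calls of fn and report the per-call average. */
static void bench(const char *name,
		  uint32_t (*fn)(uint32_t, uint32_t, uint16_t))
{
	volatile uint32_t sink = 0;	/* keep the loop from being elided */
	struct timespec a, b;
	uint32_t i;
	int64_t ns;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (i = 0; i < ITERS; i++)
		sink += fn(i, i * 3u, (uint16_t)i);
	clock_gettime(CLOCK_MONOTONIC, &b);

	ns = (int64_t)(b.tv_sec - a.tv_sec) * 1000000000 +
	     (b.tv_nsec - a.tv_nsec);
	printf("%s: %.2f ns/hash\n", name, (double)ns / ITERS);
}

int main(void)
{
	bench("muladd ", hash_muladd);
	bench("murmur3", hash_murmur3);
	return 0;
}

Note that these per-call figures are not directly comparable to the
in-kernel avc_search_node latencies above, which include the list walk.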

Details:
url: https://gist.github.com/zhr250/198717da076a808b5cc78762f27be77e
