Date: Mon, 09 Aug 2021 21:40:29 -0700
From: Davidlohr Bueso <dbueso@...e.de>
To: Rafael Aquini <aquini@...hat.com>
Cc: linux-kernel@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
	Manfred Spraul <manfred@...orfullife.com>, Waiman Long <llong@...hat.com>
Subject: Re: [PATCH] ipc: replace costly bailout check in sysvipc_find_ipc()

On 2021-08-09 13:35, Rafael Aquini wrote:
> sysvipc_find_ipc() was left with a costly way to check if the offset
> position fed to it is bigger than the total number of IPC IDs in use.
> So much so that the time it takes to iterate over /proc/sysvipc/* files
> grows quadratically for a custom benchmark that creates "N" SYSV shm
> segments and then times the read of /proc/sysvipc/shm (milliseconds):
>
>     12 msecs to read  1024 segs from /proc/sysvipc/shm
>     18 msecs to read  2048 segs from /proc/sysvipc/shm
>     65 msecs to read  4096 segs from /proc/sysvipc/shm
>    325 msecs to read  8192 segs from /proc/sysvipc/shm
>   1303 msecs to read 16384 segs from /proc/sysvipc/shm
>   5182 msecs to read 32768 segs from /proc/sysvipc/shm
>
> The root problem lies with the loop that computes the total number of
> ids in use in order to check whether the "pos" fed to sysvipc_find_ipc()
> grew bigger than "ids->in_use". That is a quite inefficient way to get
> to the maximum index in the id lookup table, especially when that value
> is already provided by struct ipc_ids.max_idx.
>
> This patch follows up on the optimization introduced via commit
> 15df03c879836 ("sysvipc: make get_maxid O(1) again") and gets rid of the
> aforementioned costly loop, replacing it with a simpler checkpoint based
> on the value returned by ipc_get_maxidx(), which allows for a smooth
> linear increase in time complexity for the same custom benchmark:
>
>      2 msecs to read  1024 segs from /proc/sysvipc/shm
>      2 msecs to read  2048 segs from /proc/sysvipc/shm
>      4 msecs to read  4096 segs from /proc/sysvipc/shm
>      9 msecs to read  8192 segs from /proc/sysvipc/shm
>     19 msecs to read 16384 segs from /proc/sysvipc/shm
>     39 msecs to read 32768 segs from /proc/sysvipc/shm
>
> Signed-off-by: Rafael Aquini <aquini@...hat.com>

Acked-by: Davidlohr Bueso <dbueso@...e.de>
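For context, the costly bailout being removed walks the IDR from index 0 on
every call, counting live entries just to decide whether "pos" already ran
past the last id; that per-call O(pos) scan is what makes a full read of
/proc/sysvipc/shm quadratic. Below is a rough sketch of the shape of the
change, not the literal diff: the function names, the scan loop and the
locking are paraphrased from ipc/util.c and may differ in detail from the
applied patch.

/*
 * Sketch of the bailout change in sysvipc_find_ipc() (ipc/util.c).
 * Illustrative only; exact body of the applied patch may differ.
 */

/* Before: count every live id below 'pos' just to detect overrun. */
static struct kern_ipc_perm *sysvipc_find_ipc_old(struct ipc_ids *ids,
						  loff_t pos, loff_t *new_pos)
{
	struct kern_ipc_perm *ipc = NULL;
	int total, id;

	total = 0;
	for (id = 0; id < pos && total < ids->in_use; id++) {
		if (idr_find(&ids->ipcs_idr, id) != NULL)
			total++;		/* O(pos) work on every call */
	}
	if (total >= ids->in_use)
		goto out;			/* 'pos' is past the last live id */

	for (; pos < ipc_mni; pos++) {
		ipc = idr_find(&ids->ipcs_idr, pos);
		if (ipc != NULL) {
			rcu_read_lock();
			ipc_lock_object(ipc);
			break;
		}
	}
out:
	*new_pos = pos + 1;
	return ipc;
}

/* After: compare 'pos' against the cached highest index, an O(1) check. */
static struct kern_ipc_perm *sysvipc_find_ipc_new(struct ipc_ids *ids,
						  loff_t pos, loff_t *new_pos)
{
	struct kern_ipc_perm *ipc = NULL;
	int max_idx = ipc_get_maxidx(ids);	/* backed by ids->max_idx */

	if (max_idx == -1 || pos > max_idx)
		goto out;			/* nothing at or beyond 'pos' */

	for (; pos <= max_idx; pos++) {
		ipc = idr_find(&ids->ipcs_idr, pos);
		if (ipc != NULL) {
			rcu_read_lock();
			ipc_lock_object(ipc);
			break;
		}
	}
out:
	*new_pos = pos + 1;
	return ipc;
}

Either way the forward scan from "pos" to the next live id remains, but
dropping the leading counting loop is what turns the whole-file read from
roughly O(N^2) into O(N), matching the benchmark numbers quoted above.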