lists.openwall.net - Open Source and information security mailing list archives
Date: Mon, 9 Aug 2021 21:48:59 -0400
From: Waiman Long <llong@...hat.com>
To: Rafael Aquini <aquini@...hat.com>, linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Manfred Spraul <manfred@...orfullife.com>,
	Davidlohr Bueso <dbueso@...e.de>,
	Waiman Long <llong@...hat.com>
Subject: Re: [PATCH] ipc: replace costly bailout check in sysvipc_find_ipc()

On 8/9/21 4:35 PM, Rafael Aquini wrote:
> sysvipc_find_ipc() was left with a costly way to check if the offset
> position fed to it is bigger than the total number of IPC IDs in use.
> So much so that the time it takes to iterate over /proc/sysvipc/* files
> grows quadratically for a custom benchmark that creates "N" SYSV shm
> segments and then times the read of /proc/sysvipc/shm (milliseconds):
>
>        12 msecs to read 1024 segs from /proc/sysvipc/shm
>        18 msecs to read 2048 segs from /proc/sysvipc/shm
>        65 msecs to read 4096 segs from /proc/sysvipc/shm
>       325 msecs to read 8192 segs from /proc/sysvipc/shm
>      1303 msecs to read 16384 segs from /proc/sysvipc/shm
>      5182 msecs to read 32768 segs from /proc/sysvipc/shm
>
> The root problem lies with the loop that computes the total number of ids
> in use in order to check whether the "pos" fed to sysvipc_find_ipc() grew
> bigger than "ids->in_use". That is a quite inefficient way to get to the
> maximum index in the id lookup table, especially when that value is
> already provided by struct ipc_ids.max_idx.
>
> This patch follows up on the optimization introduced via commit 15df03c879836
> ("sysvipc: make get_maxid O(1) again") and gets rid of the aforementioned
> costly loop, replacing it with a simpler check based on the value returned
> by ipc_get_maxidx(), which allows for a smooth linear increase in read time
> for the same custom benchmark:
>
>         2 msecs to read 1024 segs from /proc/sysvipc/shm
>         2 msecs to read 2048 segs from /proc/sysvipc/shm
>         4 msecs to read 4096 segs from /proc/sysvipc/shm
>         9 msecs to read 8192 segs from /proc/sysvipc/shm
>        19 msecs to read 16384 segs from /proc/sysvipc/shm
>        39 msecs to read 32768 segs from /proc/sysvipc/shm
>
> Signed-off-by: Rafael Aquini <aquini@...hat.com>
> ---
>  ipc/util.c | 16 ++++------------
>  1 file changed, 4 insertions(+), 12 deletions(-)
>
> diff --git a/ipc/util.c b/ipc/util.c
> index 0027e47626b7..d48d8cfa1f3f 100644
> --- a/ipc/util.c
> +++ b/ipc/util.c
> @@ -788,21 +788,13 @@ struct pid_namespace *ipc_seq_pid_ns(struct seq_file *s)
>  static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
>  					      loff_t *new_pos)
>  {
> -	struct kern_ipc_perm *ipc;
> -	int total, id;
> -
> -	total = 0;
> -	for (id = 0; id < pos && total < ids->in_use; id++) {
> -		ipc = idr_find(&ids->ipcs_idr, id);
> -		if (ipc != NULL)
> -			total++;
> -	}
> +	struct kern_ipc_perm *ipc = NULL;
> +	int max_idx = ipc_get_maxidx(ids);
>
> -	ipc = NULL;
> -	if (total >= ids->in_use)
> +	if (max_idx == -1 || pos > max_idx)
>  		goto out;
>
> -	for (; pos < ipc_mni; pos++) {
> +	for (; pos <= max_idx; pos++) {
>  		ipc = idr_find(&ids->ipcs_idr, pos);
>  		if (ipc != NULL) {
>  			rcu_read_lock();

The "pos > max_idx" check is redundant: when pos > max_idx, the loop below never executes and ipc stays NULL anyway. Other than that, the patch looks good to me.

Reviewed-by: Waiman Long <longman@...hat.com>