Message-Id: <20240107065942.3141-1-hdanton@sina.com>
Date: Sun, 7 Jan 2024 14:59:42 +0800
From: Hillf Danton <hdanton@...a.com>
To: Uladzislau Rezki <urezki@...il.com>,
Wen Gu <guwen@...ux.alibaba.com>
Cc: linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 04/11] mm: vmalloc: Remove global vmap_area_root rb-tree
On Sat, 6 Jan 2024 17:36:23 +0100 Uladzislau Rezki <urezki@...il.com>
> >
> > Thank you! I tried the patch, and it seems that the waiting on rwlock_t
> > is still there, much as with spinlock_t. (The flamegraph is attached.
> > Not sure why read_lock waits so long, given that there is no frequent
> > write_lock contention.)
> >
> >                vzalloced shmem (spinlock_t)   vzalloced shmem (rwlock_t)
> > Requests/sec   583729.93                      460007.44
> >
> > So I guess the overhead of finding the vmap area is unavoidable here and
> > the original spin_lock is fine in this series.
> >
> I have also noticed a performance difference between rwlock and spinlock.
> So, yes. This is the extra work we have to do when CONFIG_HARDENED_USERCOPY
> is set, i.e. find a VA.
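For reference, that extra work happens on every copy_{to,from}_user() that
touches a vmalloc'ed buffer. The sketch below is only a simplified version of
the hardened-usercopy check (the real code is check_heap_object() in
mm/usercopy.c, and the helper name here is made up); the find_vmap_area()
call is where the lock in question is taken:

static void check_vmalloc_usercopy(const void *ptr, unsigned long n,
				   bool to_user)
{
	struct vmap_area *va;

	if (!is_vmalloc_addr(ptr))
		return;

	/* this lookup takes the vmap lock on every usercopy */
	va = find_vmap_area((unsigned long)ptr);
	if (!va)
		usercopy_abort("vmalloc", "no area", to_user, 0, n);

	/* reject copies that run past the end of the mapped area */
	if (n > va->va_end - (unsigned long)ptr)
		usercopy_abort("vmalloc", NULL, to_user,
			       (unsigned long)ptr - va->va_start, n);
}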
See if read bias helps explain the gap between spinlock and rwlock.
--- x/kernel/locking/qrwlock.c
+++ y/kernel/locking/qrwlock.c
@@ -23,7 +23,7 @@ void __lockfunc queued_read_lock_slowpat
 	/*
 	 * Readers come here when they cannot get the lock without waiting
 	 */
-	if (unlikely(in_interrupt())) {
+	if (1) {
 		/*
 		 * Readers in interrupt context will get the lock immediately
 		 * if the writer is just waiting (not holding the lock yet),
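
With the condition forced to true, every reader takes the interrupt-context
fast path: it spins for the lock with acquire semantics instead of queueing
behind a waiting writer, so the rwlock becomes fully read-biased. If that
closes most of the gap to the spinlock numbers, the cost is in the reader
queueing/fairness rather than in the lock hold times.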