Date: Mon, 8 Jan 2024 15:45:18 +0800
From: Wen Gu <guwen@...ux.alibaba.com>
To: Hillf Danton <hdanton@...a.com>, Uladzislau Rezki <urezki@...il.com>
Cc: linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 04/11] mm: vmalloc: Remove global vmap_area_root rb-tree



On 2024/1/7 14:59, Hillf Danton wrote:
> On Sat, 6 Jan 2024 17:36:23 +0100 Uladzislau Rezki <urezki@...il.com> wrote:
>>>
>>> Thank you! I tried the patch, and it seems that the wait on the rwlock_t
>>> still exists, much as with the spinlock_t. (The flamegraph is attached.
>>> I am not sure why read_lock waits so long, given that there is no frequent
>>> write_lock contention.)
>>>
>>>                 vzalloced shmem(spinlock_t)   vzalloced shmem(rwlock_t)
>>> Requests/sec         583729.93                     460007.44
>>>
>>> So I guess the overhead of finding the vmap area is inevitable here, and
>>> the original spin_lock is fine in this series.
>>>
>> I have also noticed a performance difference between rwlock and spinlock.
>> So, yes. This is the extra work we need to do if CONFIG_HARDENED_USERCOPY
>> is set, i.e. find a VA.
> 
> See if read bias helps to explain the gap between spinlock and rwlock.
> 
> --- x/kernel/locking/qrwlock.c
> +++ y/kernel/locking/qrwlock.c
> @@ -23,7 +23,7 @@ void __lockfunc queued_read_lock_slowpat
>   	/*
>   	 * Readers come here when they cannot get the lock without waiting
>   	 */
> -	if (unlikely(in_interrupt())) {
> +	if (1) {
>   		/*
>   		 * Readers in interrupt context will get the lock immediately
>   		 * if the writer is just waiting (not holding the lock yet),
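
If I read mm/usercopy.c correctly, the "find a VA" mentioned above is the
lookup that hardened usercopy performs for every usercopy of a vmalloced
buffer, roughly like the sketch below (simplified and from memory, not the
exact kernel code; the helper name is just for illustration):

/*
 * Simplified sketch: with CONFIG_HARDENED_USERCOPY, every
 * copy_to_user()/copy_from_user() on a vmalloced buffer ends up in
 * find_vmap_area(), i.e. takes the vmap area lock and walks the tree,
 * which is where the time in the flamegraph goes.
 */
static void check_vmalloc_object(const void *ptr, unsigned long n,
				 bool to_user)
{
	if (is_vmalloc_addr(ptr)) {
		struct vmap_area *area = find_vmap_area((unsigned long)ptr);

		/* Reject copies not covered by a single vmap area */
		if (!area || n > area->va_end - (unsigned long)ptr)
			usercopy_abort("vmalloc", NULL, to_user, 0, n);
	}
}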

Thank you for the idea, Hillf!

IIUC, the change makes read operations more likely to acquire the lock and
shifts the fairness toward readers.
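
That is, every reader now takes the path that was meant for readers in
interrupt context and spins on the lock word directly instead of queueing
behind a pending writer. A simplified sketch of the slowpath with the change
applied (from memory, not verbatim kernel/locking/qrwlock.c):

void __lockfunc queued_read_lock_slowpath(struct qrwlock *lock)
{
	if (1) {	/* was: if (unlikely(in_interrupt())) */
		/*
		 * Spin on the lock word until no writer holds the lock,
		 * ignoring the wait queue, so a stream of readers can
		 * keep overtaking a queued writer.
		 */
		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
		return;
	}

	/* Original path: drop the fastpath reader bias and queue up fairly */
	atomic_sub(_QR_BIAS, &lock->cnts);
	arch_spin_lock(&lock->wait_lock);
	atomic_add(_QR_BIAS, &lock->cnts);
	atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
	arch_spin_unlock(&lock->wait_lock);
}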

The test in my scenario shows:

vzalloced shmem with    spinlock_t      rwlock_t       rwlock_t (with above change)
Requests/sec            564961.29       442534.33      439733.31

In addition to read bias, there seem to be other factors that cause the gap,
but I haven't figured them out yet.
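
One thing I may try, to separate the raw lock acquire/release cost from the
vmap lookup itself, is a userspace comparison along these lines (pthread
locks only, so just a rough analogue of the kernel qspinlock/qrwlock; thread
and iteration counts are arbitrary):

/*
 * Rough userspace analogue: NTHREADS threads hammer the same tiny
 * read-only critical section, once under a spinlock and once under an
 * rwlock in read mode, to compare acquire/release cost under contention.
 * Build: gcc -O2 -pthread bench.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 8
#define NITERS   1000000L

static pthread_spinlock_t sp;
static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static volatile unsigned long shared;

static void *spin_worker(void *arg)
{
	unsigned long sum = 0;

	(void)arg;
	for (long i = 0; i < NITERS; i++) {
		pthread_spin_lock(&sp);
		sum += shared;			/* tiny read-only section */
		pthread_spin_unlock(&sp);
	}
	return (void *)sum;
}

static void *read_worker(void *arg)
{
	unsigned long sum = 0;

	(void)arg;
	for (long i = 0; i < NITERS; i++) {
		pthread_rwlock_rdlock(&rw);
		sum += shared;			/* same section, read lock */
		pthread_rwlock_unlock(&rw);
	}
	return (void *)sum;
}

static double run(void *(*fn)(void *))
{
	pthread_t tid[NTHREADS];
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, fn, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
	pthread_spin_init(&sp, PTHREAD_PROCESS_PRIVATE);
	printf("spinlock:    %.3f s\n", run(spin_worker));
	printf("rwlock read: %.3f s\n", run(read_worker));
	return 0;
}

If the read lock already loses by a similar margin in such a microbenchmark,
the gap would be mostly the cost of the read-side acquire itself rather than
writer contention.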

Thanks,
Wen Gu
