Date:   Thu, 22 Jun 2023 09:23:30 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Chuck Lever III <chuck.lever@...cle.com>
Cc:     open list <linux-kernel@...r.kernel.org>,
        Linux NFS Mailing List <linux-nfs@...r.kernel.org>
Subject: Re: contention on pwq->pool->lock under heavy NFS workload

Hello,

On Thu, Jun 22, 2023 at 03:45:18PM +0000, Chuck Lever III wrote:
> The good news:
> 
> On stock 6.4-rc7:
> 
> fio 8k [r=108k,w=46.9k IOPS]
> 
> On the affinity-scopes-v2 branch (with no other tuning):
> 
> fio 8k [r=130k,w=55.9k IOPS]

Ah, okay, that's probably coming from the per-cpu pwqs. I didn't expect
that to make this much of a difference, but that's nice.
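
For reference, the workqueues in question here are unbound ones; a minimal
sketch of that kind of allocation (all names below are made up for
illustration):

	#include <linux/init.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *example_wq;	/* illustrative name */

	static int __init example_wq_init(void)
	{
		/*
		 * Work items queued on a WQ_UNBOUND workqueue are executed by
		 * shared unbound worker pools -- that's where the contended
		 * pool->lock lives.  WQ_MEM_RECLAIM keeps a rescuer thread
		 * around so the queue makes forward progress under memory
		 * pressure.
		 */
		example_wq = alloc_workqueue("example_io",
					     WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
		if (!example_wq)
			return -ENOMEM;
		return 0;
	}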

> The bad news:
> 
> pool->lock is still the hottest lock on the system during the test.
> 
> I'll try some of the alternate scope settings this afternoon.

Yeah, on your system there's still gonna be one pool shared across all
CPUs. The SMT or CPU scopes may behave better, but it might make sense to
add a way to further segment the scope so that e.g. one can split a cache
domain N ways.
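
For reference when trying the alternate settings, a sketch of how a scope
is selected, assuming the interface names from the posted series (worth
double-checking against the branch) -- a per-workqueue sysfs attribute for
WQ_SYSFS workqueues, plus a boot-time default:

	# per-workqueue; scope names include cpu, smt, cache, numa, system
	echo cpu > /sys/devices/virtual/workqueue/<wqname>/affinity_scope

	# system-wide default via the kernel command line
	workqueue.default_affinity_scope=cpu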

Thanks.

-- 
tejun
