Message-ID: <ZabvdYTNhj6fiHgl@slm.duckdns.org>
Date: Tue, 16 Jan 2024 11:04:53 -1000
From: Tejun Heo <tj@...nel.org>
To: Naohiro Aota <Naohiro.Aota@....com>
Cc: "jiangshanlai@...il.com" <jiangshanlai@...il.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"kernel-team@...a.com" <kernel-team@...a.com>
Subject: Re: [PATCHSET wq/for-6.8] workqueue: Implement system-wide
 max_active for unbound workqueues

Hello,

On Mon, Jan 15, 2024 at 05:46:07AM +0000, Naohiro Aota wrote:
> CPU: Intel(R) Xeon(R) Platinum 8260 CPU, 96 cores
> NUMA nodes: 2
> RAM: 1024 GB
> 
> However, for another benchmark experiment I'm doing, I booted the machine
> with "numa=off mem=16G" on the kernel command line. I admit this is an
> unusual setup...

So, does that end up using only memory from one node while making the kernel
unaware of NUMA topology?
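
(For reference, a quick way to check what the kernel actually sees in that
configuration; both commands below are generic tools, nothing specific to
this patchset, and numactl may need to be installed separately:)

    # nodes the kernel brought online
    cat /sys/devices/system/node/online
    # per-node CPU and memory layout as reported to userspace
    numactl --hardware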

> On that machine, I create a fresh btrfs with "mkfs.btrfs -d raid0 -m raid0
> <devices>" across 6 SSD devices, and run the following command on the FS.
> 
> fio --group_reporting --eta=always --eta-interval=30s --eta-newline=30s \
>     --rw=write --fallocate=none \
>     --direct=1 --ioengine=libaio --iodepth=32 \
>     --filesize=100G \
>     --blocksize=64k \
>     --time_based --runtime=300s \
>     --end_fsync=1 \
>     --directory=${MNT} \
>     --name=writer --numjobs=32
> 
> tools/workqueue/wq_dump.py output is pasted at the bottom.
> "btrfs-endio-write" is the workqueue that had many workers on the unpatched
> kernel.
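
(Side note for anyone reproducing this: IIRC wq_dump.py is a drgn script, so
the sketch below assumes drgn is installed and the running kernel can be
inspected via /proc/kcore; run it as root from the kernel source tree:)

    sudo ./tools/workqueue/wq_dump.py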

If so, I'm not sure how meaningful the result is. e.g., the perf would depend
heavily on random factors like which threads end up on which node and so on.
Sure, if we're slow because we're creating a huge number of concurrent
workers, that's still a problem, but comparing relatively small perf deltas
might not be all that meaningful. How much is the result variance in that
setup?
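
(To make that concrete, here's a minimal sketch of what I mean: repeat the
same fio invocation a few times and compare the summary bandwidth lines. The
loop count and the MNT path are placeholders, not taken from your report:)

    MNT=/path/to/btrfs   # placeholder: wherever the test btrfs is mounted
    for i in $(seq 5); do
        fio --group_reporting --rw=write --fallocate=none \
            --direct=1 --ioengine=libaio --iodepth=32 \
            --filesize=100G --blocksize=64k \
            --time_based --runtime=300s --end_fsync=1 \
            --directory=${MNT} \
            --name=writer --numjobs=32 \
            | grep 'WRITE: bw='
    done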

> FYI, without those kernel command-line options (i.e., numa=on and all RAM
> available as usual), your patch series (v1) improved the performance
> significantly, as shown below. It is even better than the reverted case.
> 
> - misc-next, numa=on
>   WRITE: bw=1121MiB/s (1175MB/s), 1121MiB/s-1121MiB/s (1175MB/s-1175MB/s), io=332GiB (356GB), run=303030-303030msec
> - misc-next+wq patches, numa=on
>   WRITE: bw=2185MiB/s (2291MB/s), 2185MiB/s-2185MiB/s (2291MB/s-2291MB/s), io=667GiB (717GB), run=312806-312806msec
> - misc-next+wq reverted, numa=on
>   WRITE: bw=1557MiB/s (1633MB/s), 1557MiB/s-1557MiB/s (1633MB/s-1633MB/s), io=659GiB (708GB), run=433426-433426msec

That looks pretty good, right?

Thanks.

-- 
tejun
