Message-ID: <Zzd6bLND-dwE-xZb@slm.duckdns.org>
Date: Fri, 15 Nov 2024 06:44:28 -1000
From: Tejun Heo <tj@...nel.org>
To: Wangyang Guo <wangyang.guo@...el.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>, linux-kernel@...r.kernel.org,
Tim Chen <tim.c.chen@...ux.intel.com>, tianyou.li@...el.com,
pan.deng@...el.com
Subject: Re: [PATCH] workqueue: Reduce expensive locks for unbound workqueue
On Fri, Nov 15, 2024 at 01:49:36PM +0800, Wangyang Guo wrote:
> For unbound workqueues, pwqs usually map to just a few pools. Most of
> the time, pwqs are linked sequentially into the wq->pwqs list by CPU
> index, and consecutive CPUs usually have the same workqueue
> attributes (e.g. they belong to the same NUMA node). As a result,
> pwqs that share a pool cluster together in the pwq list.
>
> In flush_workqueue_prep_pwqs(), only unlock and relock when the
> current pwq's pool differs from the previous pwq's pool. This reduces
> the number of expensive lock operations.
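>
> As an illustrative sketch of the pattern (not the exact patch; the
> real loop does per-pwq flush-color bookkeeping under pool->lock):
>
>	struct worker_pool *current_pool = NULL;
>	struct pool_workqueue *pwq;
>
>	for_each_pwq(pwq, wq) {
>		struct worker_pool *pool = pwq->pool;
>
>		/* consecutive pwqs often share a pool; relock only on change */
>		if (current_pool != pool) {
>			if (current_pool)
>				raw_spin_unlock_irq(&current_pool->lock);
>			current_pool = pool;
>			raw_spin_lock_irq(&pool->lock);
>		}
>
>		/* ... per-pwq flush-color bookkeeping under pool->lock ... */
>	}
>
>	if (current_pool)
>		raw_spin_unlock_irq(&current_pool->lock);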
>
> The performance data shows this change boosts FIO performance by 65x
> in some cases when multiple concurrent threads write to XFS mount
> points with fsync.
>
> FIO Benchmark Details
> - FIO version: v3.35
> - FIO Options: ioengine=libaio,iodepth=64,norandommap=1,rw=write,
> size=128M,bs=4k,fsync=1
> - FIO Job Configs: 64 jobs in total writing to 4 mount points (ramdisks
> formatted as XFS; see the example invocation after this list).
> - Kernel Codebase: v6.12-rc5
> - Test Platform: Xeon 8380 (2 sockets)
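>
> A representative invocation of the above (paths such as /mnt/xfs0 are
> placeholders, and the 16-jobs-per-mount-point split is assumed from
> "64 jobs writing to 4 mount points"; the original job files are not
> shown here):
>
>	fio --name=fsync-write --ioengine=libaio --iodepth=64 \
>	    --norandommap=1 --rw=write --size=128M --bs=4k --fsync=1 \
>	    --numjobs=16 --directory=/mnt/xfs0
>
> with the same command repeated for each of the 4 mount points
> (16 jobs each, 64 in total).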
>
> Reviewed-by: Tim Chen <tim.c.chen@...ux.intel.com>
> Signed-off-by: Wangyang Guo <wangyang.guo@...el.com>
Applied to wq/for-6.13.
Thanks.
--
tejun