Message-ID: <ZoGSJWMD9v1BxUDb@slm.duckdns.org>
Date: Sun, 30 Jun 2024 07:13:09 -1000
From: Tejun Heo <tj@...nel.org>
To: Mikulas Patocka <mpatocka@...hat.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>,
Waiman Long <longman@...hat.com>, Mike Snitzer <snitzer@...nel.org>,
Laurence Oberman <loberman@...hat.com>,
Jonathan Brassow <jbrassow@...hat.com>,
Ming Lei <minlei@...hat.com>, Ondrej Kozina <okozina@...hat.com>,
Milan Broz <gmazyland@...il.com>, linux-kernel@...r.kernel.org,
dm-devel@...ts.linux.dev
Subject: Re: dm-crypt performance regression due to workqueue changes

Hello,

On Sat, Jun 29, 2024 at 08:15:56PM +0200, Mikulas Patocka wrote:
> With 6.5, we get 3600MiB/s; with 6.6 we get 1400MiB/s.
>
> The reason is that virt-manager by default sets up a topology where we
> have 16 sockets, 1 core per socket, 1 thread per core. And that workqueue
> patch avoids moving work items across sockets, so it processes all
> encryption work only on one virtual CPU.
>
> The performance degradation may be fixed with "echo 'system'
> >/sys/module/workqueue/parameters/default_affinity_scope" - but it is a
> regression anyway, as many users don't know about this option.
>
> How should we fix it? There are several options:
> 1. revert to 'numa' affinity
> 2. revert to 'numa' affinity only if we are in a virtual machine
> 3. hack dm-crypt to set the 'numa' affinity for the affected workqueues
>    (roughly sketched below)
> 4. any other solution?
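
For reference, option 3 would look roughly like the sketch below. This is
illustrative only: alloc_numa_crypt_wq is a hypothetical helper, the queue
flags are modeled loosely on dm-crypt's kcryptd queue, and the
alloc_workqueue_attrs()/apply_workqueue_attrs() helpers are, as far as I
know, not exported to modules on current kernels, so dm-crypt would need
an exported wrapper or a new WQ_* flag to do this for real.

#include <linux/cpumask.h>
#include <linux/workqueue.h>

/*
 * Allocate an unbound workqueue and pin its affinity scope to 'numa',
 * restoring the pre-6.6 behavior of spreading work items across the
 * issuing CPU's NUMA node instead of the per-LLC 'cache' default.
 */
static struct workqueue_struct *alloc_numa_crypt_wq(const char *devname)
{
	struct workqueue_struct *wq;
	struct workqueue_attrs *attrs;

	wq = alloc_workqueue("kcryptd/%s", WQ_UNBOUND | WQ_MEM_RECLAIM,
			     num_online_cpus(), devname);
	if (!wq)
		return NULL;

	attrs = alloc_workqueue_attrs();
	if (!attrs)
		goto err_wq;

	/* WQ_AFFN_NUMA: confine workers to the issuing CPU's NUMA node. */
	attrs->affn_scope = WQ_AFFN_NUMA;
	if (apply_workqueue_attrs(wq, attrs))
		goto err_attrs;

	free_workqueue_attrs(attrs);
	return wq;

err_attrs:
	free_workqueue_attrs(attrs);
err_wq:
	destroy_workqueue(wq);
	return NULL;
}

Unlike the system-wide "echo 'system'" workaround quoted above, this would
confine the behavior change to the queues that actually regressed.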

Do you happen to know why libvirt is doing that? There are many other
implications to configuring the system that way, and I don't think we want
to design kernel behaviors to suit topology information fed to VMs, which
can be arbitrary.

Thanks.
--
tejun