Message-ID: <7f553d34-9ada-426c-4847-c7cd1aba64a8@grimberg.me>
Date: Mon, 17 Apr 2023 16:45:37 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Li Feng <fengli@...rtx.com>, Keith Busch <kbusch@...nel.org>,
Jens Axboe <axboe@...com>, Christoph Hellwig <hch@....de>,
"open list:NVM EXPRESS DRIVER" <linux-nvme@...ts.infradead.org>,
open list <linux-kernel@...r.kernel.org>
Cc: lifeng1519@...il.com
Subject: Re: [PATCH] nvme/tcp: Add support to set the tcp worker cpu affinity
Hey Li,
> The default worker affinity policy is to use all online cpus, e.g. from 0
> to N-1. However, when some cpus are busy with other jobs, nvme-tcp
> performance suffers.
>
> This patch adds a module parameter to set the cpu affinity for the nvme-tcp
> socket worker threads. The parameter is a comma separated list of CPU
> numbers. The list is parsed and the resulting cpumask is used to set the
> affinity of the socket worker threads. If the list is empty or the
> parsing fails, the default affinity is used.
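
For context, what is being proposed boils down to roughly the sketch
below (this is not the actual patch; the parameter name, variable names
and fallback here are only illustrative):

	#include <linux/module.h>
	#include <linux/cpumask.h>

	/* Illustrative name; the real parameter in the patch may differ. */
	static char *io_cpu_list;
	module_param(io_cpu_list, charp, 0444);
	MODULE_PARM_DESC(io_cpu_list,
		"comma separated list of cpus for the socket worker threads");

	static struct cpumask io_cpu_mask;

	static void nvme_tcp_init_io_cpu_mask(void)
	{
		/* Empty or unparsable list: fall back to all online cpus. */
		if (!io_cpu_list || cpulist_parse(io_cpu_list, &io_cpu_mask))
			cpumask_copy(&io_cpu_mask, cpu_online_mask);
	}

In other words, a single module-global mask that every queue's io_work
would then be pinned to.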
I can see how this may benefit a specific set of workloads, but I have a
few issues with this.
- This is exposing a user interface for something that is really
internal to the driver.
- This is something that can be misleading and could be tricky to get
right; my concern is that this would only benefit a very niche case.
- If the setting should exist, it should not be global.
- I prefer not to introduce new modparams.
- I'd prefer to find a way to support your use-case without introducing
a config knob for it.
- It is not backed by performance numbers, and more importantly does not
show whether key metrics (bw/iops/lat) regress or not.