Message-ID: <14fab6a7-f7b5-2f9d-e01f-923b1c36816d@grimberg.me>
Date: Fri, 2 Oct 2020 13:20:35 -0700
From: Sagi Grimberg <sagi@...mberg.me>
To: Christoph Hellwig <hch@....de>
Cc: Leon Romanovsky <leon@...nel.org>,
Doug Ledford <dledford@...hat.com>,
Jason Gunthorpe <jgg@...dia.com>, Jens Axboe <axboe@...nel.dk>,
Keith Busch <kbusch@...nel.org>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-rdma@...r.kernel.org
Subject: Re: [PATCH blk-next 1/2] blk-mq-rdma: Delete not-used multi-queue RDMA map queue code

>> Yes, basically the use of managed affinity caused people to report
>> regressions because they could no longer change irq affinity from procfs.
>
> Well, why would they change it? The whole point of the infrastructure
> is that there is a single sane affinity setting for a given setup. Now
> that setting needed some refinement from the original series (e.g. the
> current series about only using housekeeping cpus if cpu isolation is
> in use). But allowing random users to modify affinity is just a recipe
> for a trainwreck.

Well, allowing people to mangle irq affinity settings seems to be a hard
requirement from the discussions in the past.
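
For context, the thing users trip over is that writes to
/proc/irq/<N>/smp_affinity are rejected for managed vectors; the gate
looks roughly like the below (quoting kernel/irq/manage.c from memory,
so take it as a sketch rather than the exact code in this tree):

bool irq_can_set_affinity_usr(unsigned int irq)
{
	struct irq_desc *desc = irq_to_desc(irq);

	/* user-settable only if the vector is not kernel-managed */
	return __irq_can_set_affinity(desc) &&
		!irqd_affinity_is_managed(&desc->irq_data);
}
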
> So I think we need to bring this back ASAP, as doing affinity right
> out of the box is an absolute requirement for sane performance without
> all the benchmarketing deep magic.

Well, it's hard to argue that custom irq affinity settings are useless
to everyone and hence should be prevented. I'd expect irq affinity to
have a sane default that works out of the box, and if someone wants to
change it they can, just without any guarantee of optimal performance.
But IIRC supporting that had some dependencies on drivers and some more
infrastructure to handle dynamic changes...
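
For reference, the helper this patch removes worked roughly like the
below (paraphrasing block/blk-mq-rdma.c from memory, so a sketch rather
than the exact code being deleted): each hctx gets mapped to the CPUs
the RDMA device reports for its completion vector, which only gives a
stable answer when the vectors use managed affinity.

int blk_mq_rdma_map_queues(struct blk_mq_queue_map *map,
		struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < map->nr_queues; queue++) {
		/* which CPUs does this completion vector interrupt on? */
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			map->mq_map[cpu] = map->queue_offset + queue;
	}

	return 0;

fallback:
	/* no per-vector affinity information, use the default spread */
	return blk_mq_map_queues(map);
}

If users are then allowed to re-route those vectors from procfs, the
mq_map built above silently goes stale, which I think is the real
tension here.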