Message-ID: <4047eb51-ee6b-46ae-a67b-ce74c54c41e7@flourine.local>
Date: Mon, 13 Jan 2025 14:19:14 +0100
From: Daniel Wagner <dwagner@...e.de>
To: Ming Lei <ming.lei@...hat.com>
Cc: Daniel Wagner <wagi@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Keith Busch <kbusch@...nel.org>, Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
Kashyap Desai <kashyap.desai@...adcom.com>, Sumit Saxena <sumit.saxena@...adcom.com>,
Shivasharan S <shivasharan.srikanteshwara@...adcom.com>, Chandrakanth patil <chandrakanth.patil@...adcom.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>, Nilesh Javali <njavali@...vell.com>,
GR-QLogic-Storage-Upstream@...vell.com, Don Brace <don.brace@...rochip.com>,
"Michael S. Tsirkin" <mst@...hat.com>, Jason Wang <jasowang@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>, Stefan Hajnoczi <stefanha@...hat.com>,
Eugenio Pérez <eperezma@...hat.com>, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Andrew Morton <akpm@...ux-foundation.org>, Thomas Gleixner <tglx@...utronix.de>,
Costa Shulyupin <costa.shul@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Valentin Schneider <vschneid@...hat.com>, Waiman Long <llong@...hat.com>,
Michal Koutný <mkoutny@...e.com>, Frederic Weisbecker <frederic@...nel.org>,
Mel Gorman <mgorman@...e.de>, Hannes Reinecke <hare@...e.de>,
Sridhar Balaraman <sbalaraman@...allelwireless.com>, "brookxu.cn" <brookxu.cn@...il.com>,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org, linux-nvme@...ts.infradead.org,
megaraidlinux.pdl@...adcom.com, linux-scsi@...r.kernel.org, storagedev@...rochip.com,
virtualization@...ts.linux.dev
Subject: Re: [PATCH v4 8/9] blk-mq: use hk cpus only when
isolcpus=managed_irq is enabled
Hi Ming,
On Sat, Jan 11, 2025 at 11:31:10AM +0800, Ming Lei wrote:
> > What about a commit message like:
> >
> > When isolcpus=managed_irq is enabled, and the last housekeeping CPU for
> > a given hardware context goes offline, there is no CPU left which
> > handles the IOs anymore. If isolated CPUs mapped to this hardware
> > context are online and an application running on these isolated CPUs
> > issues an IO, this will lead to stalls.
>
> That isn't correct; the in-tree code doesn't have such a stall, no matter
> whether IO is issued from HK or isolated CPUs, since the managed irq is
> guaranteed to stay alive if any mapped CPU is online.
Yes, the behavior proposed here has different properties.
> Please see irq_do_set_affinity():
>
> 	if (irqd_affinity_is_managed(data) &&
> 	    housekeeping_enabled(HK_TYPE_MANAGED_IRQ)) {
> 		const struct cpumask *hk_mask;
>
> 		hk_mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
>
> 		cpumask_and(&tmp_mask, mask, hk_mask);
> 		if (!cpumask_intersects(&tmp_mask, cpu_online_mask))
> 			prog_mask = mask;
> 		else
> 			prog_mask = &tmp_mask;
> 	} else {
> 		prog_mask = mask;
> 	}
>
> The whole mask, which may include isolated CPUs, is only programmed into
> the hardware if there isn't any online CPU in `irq_mask & hk_mask`.
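
To illustrate that fallback with a tiny standalone sketch (plain C, a
model of the logic above, not kernel code; the mask values are invented
for the example):

	#include <stdio.h>

	int main(void)
	{
		/* Bit i set means CPU i is in the set. */
		unsigned long mask    = 0x0f; /* CPUs mapped to the irq    */
		unsigned long hk_mask = 0x03; /* housekeeping CPUs         */
		unsigned long online  = 0x0c; /* only isolated CPUs online */
		unsigned long tmp     = mask & hk_mask;

		/*
		 * No housekeeping CPU of the mask is online, so the full
		 * mask, isolated CPUs included, gets programmed.
		 */
		unsigned long prog = (tmp & online) ? tmp : mask;

		printf("programmed mask: 0x%lx\n", prog); /* prints 0xf */
		return 0;
	}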
This is not what I am trying to achieve here. The main motivation of this
series is that isolated CPUs never serve IRQs.
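
Roughly, the idea is to clamp the queue-mapping masks to the housekeeping
set, something along these lines (an illustrative sketch only; the helper
name is made up and this is not the code from the series):

	#include <linux/cpumask.h>
	#include <linux/sched/isolation.h>

	/*
	 * Drop isolated CPUs from a queue mapping mask so they are never
	 * used as irq targets. Purely illustrative.
	 */
	static void example_restrict_to_hk(struct cpumask *qmask)
	{
		if (housekeeping_enabled(HK_TYPE_MANAGED_IRQ))
			cpumask_and(qmask, qmask,
				    housekeeping_cpumask(HK_TYPE_MANAGED_IRQ));
	}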
> > I was talking about implementing the feature which would remap the
> > isolated CPUs to an online hardware context when the current hardware
> > context goes offline. I didn't find a solution which I think would be
> > worth presenting. All involved some sort of locking/refcounting in the
> > hot path, which I think we should just avoid.
>
> I understand the trouble, but it is still an improvement from the user's
> viewpoint rather than a new feature, since the interface of
> 'isolcpus=managed_irq' isn't changed.
Ah, I misunderstood you; I didn't want to upset you. I thought you were
fine with changing how managed_irq works.
> > Indeed, I forgot to update the documentation. I'll update it accordingly.
>
> It isn't a documentation thing; it breaks the no-regression policy, which
> crosses our red line.
>
> If you really want to move on, please add a new kernel command-line
> option and document the new usage, which requires applications to
> offline CPUs in order.
Sure, I'll bring the separate command line option back.
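
For reference, such an option would be wired up with early_param(); the
parameter name below is purely a placeholder, not a proposal:

	#include <linux/init.h>
	#include <linux/types.h>

	/* Placeholder name; the real option name would need discussion. */
	static bool blk_hk_only;

	static int __init blk_hk_only_setup(char *str)
	{
		blk_hk_only = true;
		return 0;
	}
	early_param("blk_hk_only", blk_hk_only_setup);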
Thanks,
Daniel