Message-ID: <ae9abe4c-010a-41ff-be44-1d52a331eb11@flourine.local>
Date: Fri, 10 Jan 2025 10:21:49 +0100
From: Daniel Wagner <dwagner@...e.de>
To: Ming Lei <ming.lei@...hat.com>
Cc: Daniel Wagner <wagi@...nel.org>, Jens Axboe <axboe@...nel.dk>, 
	Keith Busch <kbusch@...nel.org>, Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>, 
	Kashyap Desai <kashyap.desai@...adcom.com>, Sumit Saxena <sumit.saxena@...adcom.com>, 
	Shivasharan S <shivasharan.srikanteshwara@...adcom.com>, Chandrakanth patil <chandrakanth.patil@...adcom.com>, 
	"Martin K. Petersen" <martin.petersen@...cle.com>, Nilesh Javali <njavali@...vell.com>, 
	GR-QLogic-Storage-Upstream@...vell.com, Don Brace <don.brace@...rochip.com>, 
	"Michael S. Tsirkin" <mst@...hat.com>, Jason Wang <jasowang@...hat.com>, 
	Paolo Bonzini <pbonzini@...hat.com>, Stefan Hajnoczi <stefanha@...hat.com>, 
	Eugenio Pérez <eperezma@...hat.com>, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, 
	Andrew Morton <akpm@...ux-foundation.org>, Thomas Gleixner <tglx@...utronix.de>, 
	Costa Shulyupin <costa.shul@...hat.com>, Juri Lelli <juri.lelli@...hat.com>, 
	Valentin Schneider <vschneid@...hat.com>, Waiman Long <llong@...hat.com>, 
	Michal Koutný <mkoutny@...e.com>, Frederic Weisbecker <frederic@...nel.org>, 
	Mel Gorman <mgorman@...e.de>, Hannes Reinecke <hare@...e.de>, 
	Sridhar Balaraman <sbalaraman@...allelwireless.com>, "brookxu.cn" <brookxu.cn@...il.com>, 
	linux-kernel@...r.kernel.org, linux-block@...r.kernel.org, linux-nvme@...ts.infradead.org, 
	megaraidlinux.pdl@...adcom.com, linux-scsi@...r.kernel.org, storagedev@...rochip.com, 
	virtualization@...ts.linux.dev
Subject: Re: [PATCH v4 8/9] blk-mq: use hk cpus only when
 isolcpus=managed_irq is enabled

Hi Ming,

On Fri, Dec 20, 2024 at 04:54:21PM +0800, Ming Lei wrote:
> On Thu, Dec 19, 2024 at 04:38:43PM +0100, Daniel Wagner wrote:
> 
> > When isolcpus=managed_irq is enabled all hardware queues should run on
> > the housekeeping CPUs only. Thus ignore the affinity mask provided by
> > the driver.
> 
> Compared with in-tree code, the above words are misleading.
> 
> - irq core code respects isolated CPUs by trying to exclude isolated
> CPUs from effective masks
> 
> - blk-mq won't schedule blockd on isolated CPUs

I see your point; the commit message should highlight the fact that when
an application issues an I/O, this can lead to stalls.

What about a commit message like:

  When isolcpus=managed_irq is enabled and the last housekeeping CPU for
  a given hardware context goes offline, there is no CPU left which can
  handle the IOs anymore. If isolated CPUs mapped to this hardware
  context are online and an application running on these isolated CPUs
  issues an IO, this will lead to stalls.

  The kernel will not schedule IO on isolated CPUs, which avoids IO
  stalls.

  Thus issue a warning when housekeeping CPUs are offlined for a hardware
  context while there are still isolated CPUs online.
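
To make the warning concrete, I currently picture a check roughly like
the one below (untested sketch; the function name, its exact placement
in the hotplug offline path and the message text are made up here and
not part of the actual patch):

	static bool blk_mq_hctx_has_online_hk_cpu(struct blk_mq_hw_ctx *hctx,
						  unsigned int offline_cpu)
	{
		const struct cpumask *hk =
			housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
		unsigned int cpu;

		/* Another housekeeping CPU can still serve this hctx. */
		for_each_cpu_and(cpu, hctx->cpumask, hk)
			if (cpu != offline_cpu && cpu_online(cpu))
				return true;

		return false;
	}

	/* In the offline path, something along the lines of: */
	if (!blk_mq_hctx_has_online_hk_cpu(hctx, cpu) &&
	    cpumask_intersects(hctx->cpumask, cpu_online_mask))
		pr_warn("hctx %u: last housekeeping CPU %u went offline while isolated CPUs are still online, IO may stall\n",
			hctx->queue_num, cpu);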

> If applications aren't run on isolated CPUs, IO interrupts usually won't
> be triggered on isolated CPUs, so isolated CPUs are _not_ ignored.

FWIW, the 'usually' part is what made our customers nervous. They saw
some IRQ noise on the isolated CPUs with their workload and reported
that with these changes all was good. Unfortunately, we never got our
hands on the workload, so it's hard to say what was causing it.

> > On Thu, Dec 19, 2024 at 05:20:44PM +0800, Ming Lei wrote:
> > > > +	cpumask_andnot(isol_mask,
> > > > +		       cpu_possible_mask,
> > > > +		       housekeeping_cpumask(HK_TYPE_MANAGED_IRQ));
> > > > +
> > > > +	for_each_cpu(cpu, isol_mask) {
> > > > +		qmap->mq_map[cpu] = qmap->queue_offset + queue;
> > > > +		queue = (queue + 1) % qmap->nr_queues;
> > > > +	}
> > > 
> > > Looks the IO hang issue in V3 isn't addressed yet, is it?
> > > 
> > > https://lore.kernel.org/linux-block/ZrtX4pzqwVUEgIPS@fedora/
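
(For reference, the hunk above just spreads the isolated CPUs
round-robin over the hardware queues: with, say, nr_queues = 2,
queue_offset = 0 and CPUs 2-5 isolated, the loop yields mq_map[2] = 0,
mq_map[3] = 1, mq_map[4] = 0, mq_map[5] = 1. The concrete numbers here
are only an example.)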
> > 
> > I've added an explanation in the cover letter why this is not
> > addressed. From the cover letter:
> > 
> > I've experimented for a while and all solutions I came up with were
> > horrible hacks (the hotpath needs to be touched) and I don't want to
> > slow down all other users (which are almost everyone). IMO, it's just
> > not worth trying
> 
> IMO, this patchset is an improvement on the existing best-effort
> approach, which works fine most of the time, so why do you think it
> slows down everyone?

I was talking about implementing the feature which would remap the
isolated CPUs to an online hardware context when the current hardware
context goes offline. I didn't find a solution I'd consider worth
presenting; all of them involved some sort of locking/refcounting in
the hotpath, which I think we should just avoid.

> > to fix this corner case. If the user is using isolcpus and does CPU
> > hotplug, we can expect that the user can also first offline the isolated
> > CPUs. I've discussed this topic during ALPSS and the room came to the
> > same conclusion. Thus I just added a patch which issues a warning that
> > IOs are likely to hang.
> 
> If the change needs userspace cooperation for using 'managed_irq', the exact
> behavior needs to be documented in both this commit and Documentation/admin-guide/kernel-parameters.txt,
> instead of the cover letter only.
> 
> But this patch does cause a regression for old applications which can't
> follow the newly introduced rule:
> 
> 	```
> 	If the user is using isolcpus and does CPU hotplug, we can expect that the
> 	user can also first offline the isolated CPUs.
> 	```

Indeed, I forgot to update the documentation. I'll update it accordingly.
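
Something along these lines for the isolcpus=managed_irq description in
Documentation/admin-guide/kernel-parameters.txt (a first draft from my
side, the final wording will likely differ):

	The housekeeping/isolated split of a hardware context is
	not re-evaluated on CPU hotplug. If the last housekeeping
	CPU serving a hardware context is offlined while isolated
	CPUs mapped to the same context are still online, IO
	issued from those isolated CPUs can stall. Offline the
	isolated CPUs before their housekeeping CPUs.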

Thanks,
Daniel
