Message-ID: <20230322164130.CmC_J49n@linutronix.de>
Date: Wed, 22 Mar 2023 17:41:30 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Crystal Wood <swood@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
John Keeping <john@...anate.com>,
linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: rtmutex, pi_blocked_on, and blk_flush_plug()
On 2023-03-04 23:39:57 [-0600], Crystal Wood wrote:
> > This still leaves the problem vs. io_wq_worker_sleeping() and it's
> > running() counterpart after schedule().
>
> The closest thing I can see to a problem there is io_wqe_dec_running()->
> io_queue_worker_create()->io_wq_cancel_tw_create()->kfree(), but that only
> happens with func == create_worker_cont(), and io_wqe_dec_running() uses
> create_worker_cb().
So we may be good then. The while loop in io_wq_cancel_tw_create() still
worries me a little, though. I am not sure whether it cancels only the
submitted work, or other pending entries as well, including the one
leading to the kfree().
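To make the concern concrete, here is a minimal userspace model of that
pattern (simplified names and list handling, not the kernel code; the
real loop is in io_wq_cancel_tw_create() and matches on
create_worker_cont()). The point is that a loop which keeps cancelling
*every* matching pending callback will also pick up an entry whose
cleanup path frees the containing object:

	#include <stdio.h>
	#include <stdlib.h>

	struct callback_head {
		struct callback_head *next;
		void (*func)(struct callback_head *);
	};

	static struct callback_head *pending;	/* stand-in for the task_work list */

	static void create_worker_cont(struct callback_head *cb)
	{
		printf("cancel cleanup frees %p\n", (void *)cb);
		free(cb);		/* models the kfree() at the end of the path */
	}

	/* cancel and return the first pending entry whose func matches */
	static struct callback_head *cancel_match(void (*func)(struct callback_head *))
	{
		struct callback_head **pp, *cb;

		for (pp = &pending; (cb = *pp); pp = &cb->next) {
			if (cb->func == func) {
				*pp = cb->next;
				return cb;
			}
		}
		return NULL;
	}

	int main(void)
	{
		struct callback_head *cb;
		int i;

		/* queue three matching entries, as if three workers raced */
		for (i = 0; i < 3; i++) {
			cb = malloc(sizeof(*cb));
			cb->func = create_worker_cont;
			cb->next = pending;
			pending = cb;
		}

		/* the while loop in question: it drains *all* matching
		 * entries, not just the one that was submitted last
		 */
		while ((cb = cancel_match(create_worker_cont)))
			cb->func(cb);
		return 0;
	}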
> Are there any workloads I could run to stress out that path (with my
> asserts in place)?
None that I can think of. Maybe something from the io_uring test suite.
But then you may need to bend the code to get the task_work_add() to
fail. Maybe Jens knows something.
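If you go that route, one (entirely hypothetical, not upstream) way to
bend the code would be a small fault-injection wrapper around the
task_work_add() call in io_queue_worker_create(), roughly:

	/* Hypothetical debug hack, not upstream: fail every Nth
	 * task_work_add() in io_queue_worker_create() so the
	 * cancel/kfree() error path actually runs under load.
	 */
	static atomic_t twa_count = ATOMIC_INIT(0);

	static int twa_maybe_fail(struct task_struct *task,
				  struct callback_head *work,
				  enum task_work_notify_mode notify)
	{
		if (!(atomic_inc_return(&twa_count) % 64))
			return -ESRCH;	/* pretend the task is exiting */
		return task_work_add(task, work, notify);
	}

That would let the liburing tests hammer the worker-creation path while
the failure branch fires regularly.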
> -Scott
Sebastian