Message-ID: <d48c7e95-e21e-dcdc-a776-8ae7bed566cb@kernel.dk>
Date: Fri, 12 Aug 2022 12:02:55 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Josef Bacik <josef@...icpanda.com>,
Chris Murphy <lists@...orremedies.com>
Cc: Paolo Valente <paolo.valente@...aro.org>,
Btrfs BTRFS <linux-btrfs@...r.kernel.org>,
Linux-RAID <linux-raid@...r.kernel.org>,
linux-block <linux-block@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Jan Kara <jack@...e.cz>
Subject: Re: stalling IO regression since linux 5.12, through 5.18

On 8/12/22 11:59 AM, Josef Bacik wrote:
> On Fri, Aug 12, 2022 at 12:05 PM Chris Murphy <lists@...orremedies.com> wrote:
>>
>>
>>
>> On Wed, Aug 10, 2022, at 3:34 PM, Chris Murphy wrote:
>>> Booted with cgroup_disable=io, and confirmed cat
>>> /sys/fs/cgroup/cgroup.controllers does not list io.
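
(For reference, that state can be double-checked with the standard
boot-parameter and cgroup v2 interfaces; a minimal sketch, nothing
specific to this machine:)

    # confirm the boot parameter actually took effect
    grep -o cgroup_disable=io /proc/cmdline

    # 'io' should be absent from the enabled-controller list
    cat /sys/fs/cgroup/cgroup.controllers
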
>>
>> The problem still reproduces with the cgroup IO controller disabled.
>>
>> On a whim, I decided to switch the IO scheduler from Fedora's default bfq for rotating drives to mq-deadline. The problem did not reproduce for 15+ hours, which is not 100% conclusive but probably 99% conclusive. I then switched back to bfq on all eight drives, live, while running the workload, and within 10 minutes the system cratered: all new commands just hang. Load average goes to triple digits, I/O wait keeps increasing, I/O pressure for the workload tasks hits 100%, and IO stalls completely to zero. I was able to switch only two of the drive queues back to mq-deadline before losing responsiveness in that shell, and had to issue sysrq+b...
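
(A sketch of the live scheduler switch described above; sdX is a
placeholder for each of the eight drives, and the sysfs path is the
standard blk-mq interface:)

    # show available schedulers, the active one in brackets
    cat /sys/block/sdX/queue/scheduler

    # switch the queue at runtime
    echo mq-deadline > /sys/block/sdX/queue/scheduler
    echo bfq > /sys/block/sdX/queue/scheduler
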
>>
>> Before that I was able to capture sysrq+w and sysrq+t.
>> https://drive.google.com/file/d/16hdQjyBnuzzQIhiQT6fQdE0nkRQJj7EI/view?usp=sharing
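
(The same dumps can be triggered without console access, assuming the
kernel.sysrq sysctl permits it; output lands in the kernel log:)

    echo w > /proc/sysrq-trigger   # dump blocked (D-state) tasks
    echo t > /proc/sysrq-trigger   # dump all task states
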
>>
>> I can't tell if this is a bfq bug, or if there's some negative interaction between bfq and scsi or megaraid_sas. Obviously it's rare, because otherwise people would have been falling over this much sooner. But at this point there's a strong correlation that it's bfq-related, and that it's a kernel regression present from 5.12.0 through 5.18.0. I suspect 5.19.0 as well, but there it's being partly masked by other improvements.
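
(Given the 5.11 -> 5.12 window, one way to pin this down would be a
bisect restricted to the BFQ sources; a rough sketch, assuming the
workload reproduces the stall reliably at each step:)

    git bisect start -- block/bfq-iosched.c block/bfq-wf2q.c
    git bisect bad v5.12
    git bisect good v5.11
    # build, boot, run the workload, then mark each step:
    git bisect good   # or: git bisect bad
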
>
> This matches observations we've had internally (inside Facebook) as
> well as in my continuous integration performance testing. It should
> probably be looked into by the BFQ guys, as it was working previously.
> Thanks,

5.12 has a few BFQ changes:
Jan Kara:
bfq: Avoid false bfq queue merging
bfq: Use 'ttime' local variable
bfq: Use only idle IO periods for think time calculations
Jia Cheng Hu:
block, bfq: set next_rq to waker_bfqq->next_rq in waker injection
Paolo Valente:
block, bfq: use half slice_idle as a threshold to check short ttime
block, bfq: increase time window for waker detection
block, bfq: do not raise non-default weights
block, bfq: avoid spurious switches to soft_rt of interactive queues
block, bfq: do not expire a queue when it is the only busy one
block, bfq: replace mechanism for evaluating I/O intensity
block, bfq: re-evaluate convenience of I/O plugging on rq arrivals
block, bfq: fix switch back from soft-rt weitgh-raising
block, bfq: save also weight-raised service on queue merging
block, bfq: save also injection state on queue merging
block, bfq: make waker-queue detection more robust

Might be worth trying to revert those from 5.12 to see if they are
causing the issue? Jan, Paolo - does this ring any bells?
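
Something like the following would be a rough way to test that, on top
of a v5.12 tree (SHAs deliberately not filled in here; the log command
lists them):

    # list the BFQ commits that landed in the 5.12 merge window
    git log --oneline v5.11..v5.12 -- block/bfq-iosched.c block/bfq-wf2q.c

    # revert each one, newest first, rebuilding and retesting as needed
    git revert <sha>
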
--
Jens Axboe