Message-ID: <9b092ca49e9b5415772cd950a3c12584@mail.gmail.com>
Date: Fri, 26 Nov 2021 16:55:17 +0530
From: Kashyap Desai <kashyap.desai@...adcom.com>
To: John Garry <john.garry@...wei.com>, axboe@...nel.dk
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
ming.lei@...hat.com, hare@...e.de
Subject: RE: [PATCH RFT 0/3] blk-mq: Optimise blk_mq_queue_tag_busy_iter() for
shared tags
> >
> >
> > I will continue testing and let you know how it goes.
>
> ok, good to know, thanks. But I would still like to know what is
> triggering
> blk_mq_queue_tag_busy_iter() so often. Indeed, as mentioned in this cover
> letter, this function was hardly optimised before for shared sbitmap.
If I pass the "--disk_util=0" option to my fio run, the number of calls to
blk_mq_queue_tag_busy_iter() drops drastically.
As part of the fio run, the application performs its disk-util operations,
which is almost the same as doing "cat /proc/diskstats" in a loop.
Looking at the fio code, it reads the disk stats every 250 msec. Here are
sample fio logs -
diskutil 87720 /sys/block/sdb/stat: stat read ok? 0
diskutil 87720 update io ticks
diskutil 87720 open stat file: /sys/block/sdb/stat
diskutil 87720 /sys/block/sdb/stat: 127853173 0 1022829056 241827073 0 0 0 0 255 984012 241827073 0 0 0 0 0 0
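For reference, here is a small standalone sketch of that same polling
pattern (my own illustration, not fio code; "sdb" is just the device from
my test above) - it reads /sys/block/sdb/stat every 250 msec, which is
what drives blk_mq_queue_tag_busy_iter() so often:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[512];

	for (;;) {
		/* same file fio's diskutil code reads in the logs above */
		FILE *f = fopen("/sys/block/sdb/stat", "r");

		if (!f)
			return 1;
		if (fgets(buf, sizeof(buf), f))
			fputs(buf, stdout);
		fclose(f);
		usleep(250 * 1000);	/* fio's 250 msec poll interval */
	}
	return 0;
}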
There is one more call trace, but I am not sure why it is getting executed
in my test. The path below does not execute so frequently, but it consumes
CPU (not noticeable on my setup):
kthread
worker_thread
process_one_work
blk_mq_timeout_work
blk_mq_queue_tag_busy_iter
bt_iter
blk_mq_find_and_get_req
_raw_spin_lock_irqsave
native_queued_spin_lock_slowpath
This patch set improves the above call trace even after --disk_util=0 is set.
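In case it helps to see why the per-tag locking in that trace hurts, below
is a simplified userspace model I put together (my own sketch, not the
actual kernel code): several threads each walk all tags and take one
shared lock per tag, roughly the way the stat readers and the timeout work
both end up doing via bt_iter() -> blk_mq_find_and_get_req():

/* build with: gcc -O2 -pthread contention_model.c */
#include <pthread.h>
#include <stdio.h>

#define NR_TAGS		1024
#define NR_THREADS	4

static pthread_spinlock_t tags_lock;
static int busy[NR_TAGS];		/* stand-in for the tag bitmap */

/* stand-in for one blk_mq_queue_tag_busy_iter()-style pass over all tags */
static void *iter_thread(void *arg)
{
	unsigned long found = 0;
	int pass, i;

	for (pass = 0; pass < 10000; pass++) {
		for (i = 0; i < NR_TAGS; i++) {
			/* models the per-tag _raw_spin_lock_irqsave in the trace */
			pthread_spin_lock(&tags_lock);
			found += busy[i];
			pthread_spin_unlock(&tags_lock);
		}
	}
	printf("iterator %ld saw %lu busy tags in total\n",
	       (long)(size_t)arg, found);
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_THREADS];
	int i;

	pthread_spin_init(&tags_lock, PTHREAD_PROCESS_PRIVATE);
	for (i = 0; i < NR_TAGS; i += 2)	/* mark half the tags busy */
		busy[i] = 1;
	/* concurrent iterators ~= periodic stat reads + blk_mq_timeout_work */
	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&threads[i], NULL, iter_thread, (void *)(size_t)i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}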
Kashyap
>
> And any opinion whether we would want this as a fix? Information requested
> above would help explain why we would need it as a fix.
>
> Cheers,
> John