Message-ID: <000001d7a92d$a0edcb00$e2c96100$@samsung.com>
Date: Tue, 14 Sep 2021 14:59:06 +0900
From: "Kiwoong Kim" <kwmad.kim@...sung.com>
To: <linux-scsi@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<alim.akhtar@...sung.com>, <avri.altman@....com>,
<jejb@...ux.ibm.com>, <martin.petersen@...cle.com>,
<beanhuo@...ron.com>, <cang@...eaurora.org>,
<adrian.hunter@...el.com>, <sc.suh@...sung.com>,
<hy50.seo@...sung.com>, <sh425.lee@...sung.com>,
<bhoon95.kim@...sung.com>
Subject: Question about ufs_bsg
Hi,
ufs_bsg was introduced nearly three years ago, and it allocates its own request queue.
I ran into a symptom related to this and want to ask about it:
sometimes the queue depth for UFS is limited to half of its maximum value,
even when there are many I/O requests coming from the filesystem.
It turned out that this only happens while a query is being processed at the same time.
According to my tracing, when query processing starts, the number of users of the hctx
that represents the UFS host increases to two. As a result, the paths calling
hctx_may_queue() in blk-mq throttle dispatch to 16 tags, because the number of
UFS slots (32 in my case) is divided by the number of users (two).
In my case it happened while a WriteBooster query was being processed, because in my
tree (which differs from mainline) WriteBooster is only turned on under certain
conditions. However, whenever an exceptional event or anything else that triggers
a query occurs, the same thing can happen in mainline as well.
I think this throttling is a bit excessive, so my question is:
is there a way to assign queue depth per user on an asymmetric basis?
Thanks.
Kiwoong Kim