Message-ID: <DM6PR04MB657564DA7CCE9220453DD6F8FCDA9@DM6PR04MB6575.namprd04.prod.outlook.com>
Date: Tue, 14 Sep 2021 06:39:00 +0000
From: Avri Altman <Avri.Altman@....com>
To: Kiwoong Kim <kwmad.kim@...sung.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"alim.akhtar@...sung.com" <alim.akhtar@...sung.com>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"beanhuo@...ron.com" <beanhuo@...ron.com>,
"cang@...eaurora.org" <cang@...eaurora.org>,
"adrian.hunter@...el.com" <adrian.hunter@...el.com>,
"sc.suh@...sung.com" <sc.suh@...sung.com>,
"hy50.seo@...sung.com" <hy50.seo@...sung.com>,
"sh425.lee@...sung.com" <sh425.lee@...sung.com>,
"bhoon95.kim@...sung.com" <bhoon95.kim@...sung.com>
Subject: RE: Question about ufs_bsg
Hi,
> Hi,
>
> ufs_bsg was introduced nearly three years ago and it allocates its own request
> queue.
> I faced a symptom with this and want to ask about it.
>
> That is, sometimes the queue depth for UFS is limited to half of its maximum
> value, even in a situation with many IO requests from the filesystem.
This is interesting indeed. Before going further with investigating this,
could you share some more details on your setup?
The bsg node it creates was originally meant to convey a single query request via the SG_IO ioctl,
which is blocking.
- How do you create many IO requests queueing on that request queue?
- Command UPIU is not implemented, so are all those IOs query requests?
Thanks,
Avri
> It turned out that it only occurs when a query is being processed at the same
> time.
> According to my tracing, when the query process starts, the number of users of
> the hctx that represents a UFS host increases to two, and with this, some paths
> calling the 'hctx_may_queue' function in blk-mq seem to throttle dispatches,
> technically to 16, because the number of UFS slots (32 in my case) is divided
> by two (users).
>
> I found that it happened when a query for WriteBooster is processed,
> because WriteBooster only turns on under certain conditions in my codebase,
> which differs from the mainline kernel. But when an exceptional event or
> anything else that could lead to a query occurs, it can happen even in
> mainline.
>
> I think the throttling is a bit excessive,
> so my question: is there any way to assign queue depth per user on an
> asymmetric basis?
>
> Thanks.
> Kiwoong Kim
>