Message-ID: <000401d7a934$149c9a80$3dd5cf80$@samsung.com>
Date: Tue, 14 Sep 2021 15:45:17 +0900
From: "Kiwoong Kim" <kwmad.kim@...sung.com>
To: "'Avri Altman'" <Avri.Altman@....com>,
<linux-scsi@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<alim.akhtar@...sung.com>, <jejb@...ux.ibm.com>,
<martin.petersen@...cle.com>, <beanhuo@...ron.com>,
<cang@...eaurora.org>, <adrian.hunter@...el.com>,
<sc.suh@...sung.com>, <hy50.seo@...sung.com>,
<sh425.lee@...sung.com>, <bhoon95.kim@...sung.com>
Subject: RE: Question about ufs_bsg
> Hi,
>
> > Hi,
> >
> > ufs_bsg was introduced nearly three years ago and it allocates its own
> > request queue.
> > I faced a symptom related to this and want to ask something about it.
> >
> > That is, the queue depth for UFS is sometimes limited to half of its
> > maximum value, even in situations with many IO requests from the
> > filesystem.
> This is interesting indeed. Before investigating this further,
Hi. What I first had in mind is not ufs_bsg, but as you might already know, it also allocates its own request queue.
In that respect, we can imagine it could end up in the same situation.
> could you share some more details on your setup:
> The bsg node it creates was originally meant to convey a single query
> request via the SG_IO ioctl, which is blocking.
> - How do you create many IO requests queueing on that request queue?
I used some benchmarks, such as tiobench or Androbench, that can generate heavy IO scenarios.
> - command UPIU is not implemented; are all those IOs query requests?
What I've seen is just one query and many scsi commands.
>
> > It turned out that it only occurs when a query is being processed at
> > the same time.
> > According to my tracing, when query processing starts, the number of
> > users for the hctx that represents a UFS host increases to two, and
> > with that, some paths calling the 'hctx_may_queue' function in blk-mq
> > seem to throttle dispatches, specifically to 16, because the number
> > of UFS slots (32 in my case) is divided by two (users).
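
For reference, here is a minimal standalone sketch of the fair-share
check I mean, assuming the behaviour of hctx_may_queue() in
block/blk-mq-tag.c; the function and variable names below are mine and
the exact kernel code differs between versions:

#include <stdbool.h>

/*
 * Sketch of blk-mq's shared-tag fair sharing: each active user of a
 * shared tag set is allowed roughly tag_depth / active_users tags.
 */
static bool fair_share_allows_dispatch(unsigned int tag_depth,    /* e.g. 32 UFS slots */
				       unsigned int active_users, /* queues using the shared tags */
				       unsigned int nr_active)    /* tags this user already holds */
{
	unsigned int depth;

	if (active_users <= 1)
		return true;	/* no sharing in effect, full depth available */

	/* split the tag space among the active users, rounded up */
	depth = (tag_depth + active_users - 1) / active_users;
	if (depth < 4)
		depth = 4;	/* still allow at least a few tags */

	/* 32 slots and 2 users -> depth == 16, the throttling I observed */
	return nr_active < depth;
}
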
> >
> > I found that it happened when a query for WriteBooster is processed,
> > because WriteBooster only turns on under some conditions in my code
> > base, which differs from the kernel mainline. But when an exceptional
> > event, or anything else that could lead to a query, occurs, it can
> > happen even in mainline.
> >
> > I think the throttling is a little bit excessive, so the question is:
> > is there any way to assign the queue depth per user on an asymmetric
> > basis?
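
To make "asymmetric" concrete, something along these lines is what I
have in mind; this is purely hypothetical, not existing blk-mq code,
and the numbers are arbitrary:

#include <stdbool.h>

/*
 * Hypothetical asymmetric split of a shared tag space: reserve a small
 * fixed share for the query user instead of tag_depth / users for all.
 */
static unsigned int asymmetric_share(unsigned int tag_depth, /* e.g. 32 UFS slots */
				     bool is_query_user)
{
	const unsigned int query_share = 2;	/* arbitrary small reservation */

	if (is_query_user)
		return query_share;

	return tag_depth - query_share;	/* rest is left for filesystem IO */
}
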
> >
> > Thanks.
> > Kiwoong Kim
> >