Message-ID: <20180802103916.GB6520@ming.t460p>
Date: Thu, 2 Aug 2018 18:39:17 +0800
From: Ming Lei <ming.lei@...hat.com>
To: "jianchao.wang" <jianchao.w.wang@...cle.com>
Cc: axboe@...nel.dk, bart.vanassche@....com,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC] blk-mq: clean up the hctx restart
On Wed, Aug 01, 2018 at 09:37:08PM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 08/01/2018 04:58 PM, Ming Lei wrote:
> > On Wed, Aug 01, 2018 at 10:17:30AM +0800, jianchao.wang wrote:
> >> Hi Ming
> >>
> >> Thanks for your kind response.
> >>
> >> On 07/31/2018 02:16 PM, Ming Lei wrote:
> >>> On Tue, Jul 31, 2018 at 01:19:42PM +0800, jianchao.wang wrote:
> >>>> Hi Ming
> >>>>
> >>>> On 07/31/2018 12:58 PM, Ming Lei wrote:
> >>>>> On Tue, Jul 31, 2018 at 12:02:15PM +0800, Jianchao Wang wrote:
> >>>>>> Currently, we always set SCHED_RESTART whenever there are
> >>>>>> requests in hctx->dispatch; then, when a request is completed
> >>>>>> and freed, the hctx queues are restarted to avoid IO hangs.
> >>>>>> This is unnecessary most of the time. Especially when there
> >>>>>> are lots of LUNs attached to one host, the RR restart loop can
> >>>>>> be very expensive.
> >>>>>
> >>>>> The big RR restart loop has been killed in the following commit:
> >>>>>
> >>>>> commit 97889f9ac24f8d2fc8e703ea7f80c162bab10d4d
> >>>>> Author: Ming Lei <ming.lei@...hat.com>
> >>>>> Date: Mon Jun 25 19:31:48 2018 +0800
> >>>>>
> >>>>> blk-mq: remove synchronize_rcu() from blk_mq_del_queue_tag_set()
> >>>>>
> >>>>>
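For reference, after that commit the completion path only restarts the
hctx the request completed on, instead of looping round-robin over all
hctxs of all queues sharing the tag set. A simplified sketch (based on
block/blk-mq-sched.c, not verbatim):

static inline void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
{
	/* dispatch saw busy: remember that this hctx needs a re-run */
	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
		set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
}

void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
{
	/* on request completion: re-run only this hctx, if marked */
	if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) {
		clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
		blk_mq_run_hw_queue(hctx, true);
	}
}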
> >>>>
> >>>> Oh, sorry, when I iterated through the mailing list I skipped this
> >>>> patch because of its title, so I didn't realize the RR restart loop
> >>>> had already been killed. :)
> >>>>
> >>>> The RR restart loop could ensure fairness in sharing some LLDD
> >>>> resources, not just avoid IO hangs. Is it OK to kill it completely?
> >>>
> >>> Yeah, it is. Also, fairness might even be improved a bit by the
> >>> approach in commit 97889f9ac24f8d2fc, especially inside the driver
> >>> tag allocation algorithm.
> >>>
> >>
> >> Would you mind explaining this in more detail?
> >>
> >> Regarding the driver tag case:
> >> For example:
> >>
> >>     q_a     q_b     q_c     q_d
> >>    hctx0   hctx0   hctx0   hctx0
> >>       \       \      /      /
> >>           tags (shared)
> >>
> >> The total number of tags is 32.
> >> All 4 of these queues are active.
> >>
> >> So every queue gets a share of 8 tags.
> >>
> >> If all 4 queues have used up their 8 tags, they have to wait.
> >>
> >> When some of q_a's in-flight requests are completed, tags are freed,
> >> but __sbq_wake_up doesn't necessarily wake up q_a; it may wake up q_b.
> >
> > 1) in the case of an IO scheduler
> > q_a should be woken up, because q_a->hctx0 is added to one of the tags'
> > wait queues when no tag is available; see blk_mq_mark_tag_wait().
> >
> > 2) in the case of the none scheduler
> > q_a should be woken up too; see blk_mq_get_tag().
> >
> > So I don't understand why you mentioned that q_a can't be woken up.
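
The allocation-side pattern referred to above looks roughly like this
(heavily simplified from the blk_mq_get_tag() wait loop in
block/blk-mq-tag.c; error paths and the scheduler/none split dropped):

	/* The allocating context parks itself on one of the
	 * sbitmap_queue wait queues before sleeping, so a later tag
	 * free has a waiter to wake for this queue. */
	for (;;) {
		tag = __blk_mq_get_tag(data, bt);
		if (tag != -1)
			break;

		ws = bt_wait_ptr(bt, data->hctx);
		prepare_to_wait_exclusive(&ws->wait, &wait,
					  TASK_UNINTERRUPTIBLE);

		/* recheck after queueing on the waitqueue, to close
		 * the race with a tag freed in the meantime */
		tag = __blk_mq_get_tag(data, bt);
		if (tag != -1)
			break;

		io_schedule();
	}
	finish_wait(&ws->wait, &wait);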
>
> There are multiple sbq_wait_state instances in one sbitmap_queue, and
> __sbq_wake_up only wakes up the waiters on one of them at a time. Please
> refer to __sbq_wake_up.
Yes, the multiple wait queues are woken up in round-robin style, which is
still fair, generally speaking. And there is no issue of q_a never being
woken up when a request is completed on this queue, is there?
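
The round-robin selection looks roughly like this (a simplified sketch of
sbq_wake_ptr() in lib/sbitmap.c; the real code uses cmpxchg and advances
wake_index only after a whole wake_batch of tags has been freed):

static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
{
	int i, wake_index = atomic_read(&sbq->wake_index);

	/* scan the SBQ_WAIT_QUEUES wait queues starting at wake_index
	 * and pick the first one with active waiters */
	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
		struct sbq_wait_state *ws = &sbq->ws[wake_index];

		if (waitqueue_active(&ws->wait)) {
			/* simplified: advance here so the next wake-up
			 * starts at the following wait queue */
			atomic_set(&sbq->wake_index,
				   sbq_index_inc(wake_index));
			return ws;
		}
		wake_index = sbq_index_inc(wake_index);
	}
	return NULL;
}

Since a queue that ran out of tags is parked on one of these wait queues,
it is reached within at most SBQ_WAIT_QUEUES wake-ups.
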
Thanks,
Ming