Message-ID: <20160727142420.esp7aharesu45ixr@mac>
Date: Wed, 27 Jul 2016 16:24:28 +0200
From: Roger Pau Monné <roger.pau@...rix.com>
To: Bob Liu <bob.liu@...cle.com>
CC: <linux-kernel@...r.kernel.org>, <xen-devel@...ts.xenproject.org>,
<konrad.wilk@...cle.com>
Subject: Re: [PATCH v3] xen-blkfront: dynamic configuration of per-vbd
resources
On Wed, Jul 27, 2016 at 07:21:05PM +0800, Bob Liu wrote:
>
> On 07/27/2016 06:59 PM, Roger Pau Monné wrote:
> > On Wed, Jul 27, 2016 at 11:21:25AM +0800, Bob Liu wrote:
> > [...]
> >> +static ssize_t dynamic_reconfig_device(struct blkfront_info *info, ssize_t count)
> >> +{
> >> + /*
> >> + * Prevent new requests from entering even the software request queue.
> >> + */
> >> + blk_mq_freeze_queue(info->rq);
> >> +
> >> + /*
> >> + * Guarantee that there are no uncompleted requests.
> >> + */
> >
> > I'm also wondering, why do you need to guarantee that there are no
> > uncompleted requests? The resume procedure is going to call blkif_recover
> > that will take care of requeuing any unfinished requests that are on the
> > ring.
> >
>
> Because there may be requests in the software request queue with more
> segments than we can handle (if info->max_indirect_segments is reduced).
>
> blkif_recover() hasn't been able to handle this since blk-mq was
> introduced, because there is no way to iterate over the software request
> queues (blk_fetch_request() can't be used with blk-mq).
>
> So there is a bug in blkif_recover(); I was thinking of implementing the
> suspend function of blkfront_driver like this:
Hm, this is a regression and should be fixed ASAP. I'm still not sure I
follow: doesn't blk_queue_max_segments change the number of segments that
requests on the queue are going to have, so that you would only have to
re-queue the requests already on the ring?
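
I.e., something along these lines (just a sketch to illustrate the
question; untested, and blkfront_update_max_segments is a made-up helper
name):

static void blkfront_update_max_segments(struct blkfront_info *info,
					 unsigned int segs)
{
	/* Block new requests from entering the request queue. */
	blk_mq_freeze_queue(info->rq);

	/* Requests built from now on should honour the new limit. */
	blk_queue_max_segments(info->rq, segs);

	blk_mq_unfreeze_queue(info->rq);
}
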
Waiting for the whole queue to be flushed before suspending is IMHO not
acceptable: it introduces an unbounded delay during migration if the
backend is slow for some reason.
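
For reference, the kind of software queue drain the old single-queue code
could do looks roughly like this (a sketch from memory, drain_sw_queue_legacy
is an invented name, not the actual blkfront code); blk-mq has no equivalent:

static void drain_sw_queue_legacy(struct request_queue *q,
				  struct list_head *reqs)
{
	struct request *req;

	/* The lock the driver registered as the queue lock. */
	spin_lock_irq(q->queue_lock);
	/*
	 * Pull every pending request off the software queue and stash it
	 * for re-issue once the new limits are in place.
	 */
	while ((req = blk_fetch_request(q)) != NULL)
		list_add_tail(&req->queuelist, reqs);
	spin_unlock_irq(q->queue_lock);
}
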
Roger.