Message-ID: <20200309095245.GY24458@Air-de-Roger.citrite.net>
Date: Mon, 9 Mar 2020 10:54:07 +0100
From: Roger Pau Monné <roger.pau@...rix.com>
To: Anchal Agarwal <anchalag@...zon.com>
CC: <tglx@...utronix.de>, <mingo@...hat.com>, <bp@...en8.de>,
<hpa@...or.com>, <x86@...nel.org>, <boris.ostrovsky@...cle.com>,
<jgross@...e.com>, <linux-pm@...r.kernel.org>,
<linux-mm@...ck.org>, <kamatam@...zon.com>,
<sstabellini@...nel.org>, <konrad.wilk@...cle.com>,
<axboe@...nel.dk>, <davem@...emloft.net>, <rjw@...ysocki.net>,
<len.brown@...el.com>, <pavel@....cz>, <peterz@...radead.org>,
<eduval@...zon.com>, <sblbir@...zon.com>,
<xen-devel@...ts.xenproject.org>, <vkuznets@...hat.com>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<dwmw@...zon.co.uk>, <fllinden@...ozn.com>,
<benh@...nel.crashing.org>
Subject: Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend
and hibernation
On Fri, Mar 06, 2020 at 06:40:33PM +0000, Anchal Agarwal wrote:
> On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > blkfront_gather_backend_features(info);
> > > /* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> > > blkif_set_queue_limits(info);
> > > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> > > kick_pending_request_queues(rinfo);
> > > }
> > >
> > > + if (frozen)
> > > + return 0;
> >
> > I have to admit my memory is fuzzy here, but don't you need to
> > re-queue requests in case the backend has a different limit of
> > indirect descriptors per request, for example?
> >
> > Or do we expect that the frontend is always going to be resumed on the
> > same backend, and thus features won't change?
> >
> To make sure I understand your question: AFAIU the maximum number of indirect
> grefs is fixed by the backend, but the frontend can issue requests with any
> number of indirect segments as long as it's less than the number advertised by
> the backend. So are you asking whether this maximum, MAX_INDIRECT_SEGMENTS
> (256) on the backend, can change?
Yes, the number of indirect descriptors supported by the backend can
change, either because you moved to a different backend or because the
maximum supported by the backend has changed. It's also possible to
resume on a backend that has no indirect descriptor support at all.
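
Just to illustrate what I mean, a minimal sketch of re-reading the
backend's advertised limit on resume. It mirrors what
blkfront_gather_backend_features() already does; the identifiers
(max_indirect_segments, xbdev, xen_blkif_max_segments) are taken from
the existing driver, so treat this as an example, not as code to apply
verbatim:

/*
 * Sketch: refresh the indirect-segment limit from the (possibly new)
 * backend.  Mirrors blkfront_gather_backend_features().
 */
static void blkfront_refresh_indirect_limit(struct blkfront_info *info)
{
        unsigned int segs;

        /* 0 means the backend has no indirect descriptor support at all. */
        segs = xenbus_read_unsigned(info->xbdev->otherend,
                                    "feature-max-indirect-segments", 0);

        /* Never use more than the frontend was configured for. */
        info->max_indirect_segments = min(segs, xen_blkif_max_segments);
}

If the new limit turns out to be smaller (or zero), requests that were
built with more segments would still have to be re-split and re-queued,
which is exactly the blkif_recover() path that the early return for the
frozen case skips.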
> > > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> > > mutex_unlock(&blkfront_mutex);
> > > }
> > >
> > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > +{
> > > + unsigned int i;
> > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > + struct blkfront_ring_info *rinfo;
> > > + /* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > > + unsigned int timeout = 5 * HZ;
> > > + int err = 0;
> > > +
> > > + info->connected = BLKIF_STATE_FREEZING;
> > > +
> > > + blk_mq_freeze_queue(info->rq);
> > > + blk_mq_quiesce_queue(info->rq);
> >
> > Don't you need to also drain the queue and make sure it's empty?
> >
> blk_mq_freeze_queue() and blk_mq_quiesce_queue() should take care of running the
> HW queues synchronously and making sure all ongoing dispatches have finished.
> Did I understand your question correctly?
Can you please add a check to that effect? (ie: that there are no
pending requests on any queue?)
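
Something along these lines would do (just a sketch: the helper names
are made up, but blk_mq_tagset_busy_iter() and info->tag_set exist in
the driver today):

/*
 * Sketch only: count requests still in flight after the queue has been
 * frozen/quiesced and warn if any remain.
 */
static bool blkif_count_inflight(struct request *rq, void *data, bool reserved)
{
        unsigned int *inflight = data;

        (*inflight)++;
        return true;    /* keep iterating over all busy requests */
}

static void blkfront_check_queues_drained(struct blkfront_info *info)
{
        unsigned int inflight = 0;

        blk_mq_tagset_busy_iter(&info->tag_set, blkif_count_inflight,
                                &inflight);
        WARN_ONCE(inflight,
                  "xen-blkfront: %u requests still in flight after freeze\n",
                  inflight);
}

AFAICT blk_mq_freeze_queue() already waits for outstanding requests to
complete, so such a check would mainly be a safety net to catch anything
re-queued behind our back.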
Thanks, Roger.