Date:   Thu, 20 Feb 2020 08:54:36 +0000
From:   "Durrant, Paul" <pdurrant@...zon.co.uk>
To:     Roger Pau Monné <roger.pau@...rix.com>,
        "Agarwal, Anchal" <anchalag@...zon.com>
CC:     "Valentin, Eduardo" <eduval@...zon.com>,
        "len.brown@...el.com" <len.brown@...el.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "benh@...nel.crashing.org" <benh@...nel.crashing.org>,
        "x86@...nel.org" <x86@...nel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "pavel@....cz" <pavel@....cz>, "hpa@...or.com" <hpa@...or.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "sstabellini@...nel.org" <sstabellini@...nel.org>,
        "fllinden@...ozn.com" <fllinden@...ozn.com>,
        "Kamata, Munehisa" <kamatam@...zon.com>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
        "Singh, Balbir" <sblbir@...zon.com>,
        "axboe@...nel.dk" <axboe@...nel.dk>,
        "konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
        "bp@...en8.de" <bp@...en8.de>,
        "boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
        "jgross@...e.com" <jgross@...e.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
        "rjw@...ysocki.net" <rjw@...ysocki.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "vkuznets@...hat.com" <vkuznets@...hat.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "Woodhouse, David" <dwmw@...zon.co.uk>
Subject: RE: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for
 PM suspend and hibernation

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@...ts.xenproject.org> On Behalf Of
> Roger Pau Monné
> Sent: 20 February 2020 08:39
> To: Agarwal, Anchal <anchalag@...zon.com>
> Cc: Valentin, Eduardo <eduval@...zon.com>; len.brown@...el.com;
> peterz@...radead.org; benh@...nel.crashing.org; x86@...nel.org; linux-
> mm@...ck.org; pavel@....cz; hpa@...or.com; tglx@...utronix.de;
> sstabellini@...nel.org; fllinden@...ozn.com; Kamata, Munehisa
> <kamatam@...zon.com>; mingo@...hat.com; xen-devel@...ts.xenproject.org;
> Singh, Balbir <sblbir@...zon.com>; axboe@...nel.dk;
> konrad.wilk@...cle.com; bp@...en8.de; boris.ostrovsky@...cle.com;
> jgross@...e.com; netdev@...r.kernel.org; linux-pm@...r.kernel.org;
> rjw@...ysocki.net; linux-kernel@...r.kernel.org; vkuznets@...hat.com;
> davem@...emloft.net; Woodhouse, David <dwmw@...zon.co.uk>
> Subject: Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks
> for PM suspend and hibernation
> 
> Thanks for this work, please see below.
> 
> On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
> > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > > > From: Munehisa Kamata <kamatam@...zon.com>
> > > > > >
> > > > > > Add freeze, thaw and restore callbacks for PM suspend and
> > > > > > hibernation support. All frontend drivers that need to use
> > > > > > PM_HIBERNATION/PM_SUSPEND events need to implement these
> > > > > > xenbus_driver callbacks. The freeze handler stops the
> > > > > > block-layer queue and disconnects the frontend from the
> > > > > > backend while freeing ring_info and associated resources.
> > > > > > The restore handler re-allocates ring_info and re-connects
> > > > > > to the backend, so the rest of the kernel can continue to
> > > > > > use the block device transparently. The handlers are also
> > > > > > used for both PM suspend and hibernation so that we can keep
> > > > > > the existing suspend/resume callbacks for Xen suspend without
> > > > > > modification. Before disconnecting from the backend, we need
> > > > > > to prevent any new IO from being queued and wait for existing
> > > > > > IO to complete.
> > > > >
> > > > > This is different from Xen (xenstore) initiated suspension, as in
> > > > > that case Linux doesn't flush the rings or disconnect from the
> > > > > backend.
> > > > Yes, AFAIK in Xen initiated suspension the backend takes care of it.
> > >
> > > No, in Xen initiated suspension the backend doesn't take care of
> > > flushing the rings; the frontend has a shadow copy of the ring
> > > contents and it re-issues the requests on resume.
> > >
> > Yes, I meant suspension in general, where both xenstore and the backend
> > know the system is going under suspension, not the flushing of rings.
> 
> The backend has no idea the guest is going to be suspended. Backend code
> is completely agnostic to suspension/resume.
> 
> > That happens in the frontend when the backend indicates that its state
> > is closing, and so on. I may have written it in the wrong context.
> 
> I'm afraid I'm not sure I fully understand this last sentence.
> 
> > > > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > > > +{
> > > > > > +	unsigned int i;
> > > > > > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > > > +	struct blkfront_ring_info *rinfo;
> > > > > > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > > > > > +	unsigned int timeout = 5 * HZ;
> > > > > > +	int err = 0;
> > > > > > +
> > > > > > +	info->connected = BLKIF_STATE_FREEZING;
> > > > > > +
> > > > > > +	blk_mq_freeze_queue(info->rq);
> > > > > > +	blk_mq_quiesce_queue(info->rq);
> > > > > > +
> > > > > > +	for (i = 0; i < info->nr_rings; i++) {
> > > > > > +		rinfo = &info->rinfo[i];
> > > > > > +
> > > > > > +		gnttab_cancel_free_callback(&rinfo->callback);
> > > > > > +		flush_work(&rinfo->work);
> > > > > > +	}
> > > > > > +
> > > > > > +	/* Kick the backend to disconnect */
> > > > > > +	xenbus_switch_state(dev, XenbusStateClosing);
> > > > >
> > > > > Are you sure this is safe?
> > > > >
> > > > In my testing, running multiple fio jobs and other test scenarios
> > > > running a memory loader works fine. I did not come across a scenario
> > > > that would have failed resume due to blkfront issues, unless you can
> > > > suggest some?
> > >
> > > AFAICT you don't wait for the in-flight requests to be finished, and
> > > just rely on blkback to finish processing those. I'm not sure all
> > > blkback implementations out there can guarantee that.
> > >
> > > The approach used by Xen initiated suspension is to re-issue the
> > > in-flight requests when resuming. I have to admit I don't think this
> > > is the best approach, but I would like to keep both the Xen and the PM
> > > initiated suspension using the same logic, and hence I would request
> > > that you try to re-use the existing resume logic (blkfront_resume).
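
(Purely as an illustration of the suggestion above, and not code from the
patch or the thread: a minimal sketch of delegating the PM restore callback
to the existing Xen resume path. It assumes blkfront_resume() keeps its
current xenbus_driver signature; blkfront_restore is the hypothetical name
of the PM callback.)

	/* Sketch only: reuse the Xen-initiated resume logic for PM restore,
	 * so in-flight requests are re-issued from the shadow ring rather
	 * than handled by a separate restore path. */
	static int blkfront_restore(struct xenbus_device *dev)
	{
		return blkfront_resume(dev);
	}
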
> > >
> > > > > I don't think you wait for all requests pending on the ring to be
> > > > > finished by the backend, and hence you might lose requests as the
> > > > > ones on the ring would not be re-issued by blkfront_restore AFAICT.
> > > > >
> > > > AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should take care of
> > > > there being no used request left on the shared ring. Also, we want to
> > > > pause the queue and flush all the pending requests in the shared ring
> > > > before disconnecting from the backend.
> > >
> > > Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
> > > finished. I guess it's fine then.
> > >
> > Ok.
> > > > Quiescing the queue seemed a better option here, as we want to make
> > > > sure ongoing request dispatches are totally drained. I should accept
> > > > that some of this notion is borrowed from how nvme freeze/unfreeze is
> > > > done, although it's not an apples-to-apples comparison.
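
(Side note, not from the thread: the blk-mq pairing under discussion, shown
as a rough sketch. The freeze/quiesce half matches the quoted
blkfront_freeze() snippet; the unfreeze/unquiesce half is what a symmetric
thaw/restore path would need. info->rq is the request queue field from the
snippet above.)

	/* Freeze side: blk_mq_freeze_queue() waits for in-flight requests to
	 * complete, blk_mq_quiesce_queue() drains any ongoing dispatches. */
	blk_mq_freeze_queue(info->rq);
	blk_mq_quiesce_queue(info->rq);

	/* Thaw/restore side (sketch): undo in reverse order once the frontend
	 * has reconnected, so the block layer can issue requests again. */
	blk_mq_unquiesce_queue(info->rq);
	blk_mq_unfreeze_queue(info->rq);
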
> > >
> > > That's fine, but I would still like to request that you use the same
> > > logic (as much as possible) for both the Xen and the PM initiated
> > > suspension.
> > >
> > > So you either apply this freeze/unfreeze to the Xen suspension (and
> > > drop the re-issuing of requests on resume) or adopt the same approach
> > > as the Xen initiated suspension. Keeping two completely different
> > > approaches to suspension / resume in blkfront is not suitable long
> > > term.
> > >
> > I agree with you that an overhaul of Xen suspend/resume wrt blkfront is
> > a good idea; however, IMO that is work for the future and this patch
> > series should not be blocked on it. What do you think?
> 
> It's not so much that I think an overhaul of suspend/resume in
> blkfront is needed; it's just that I don't want to have two completely
> different suspend/resume paths inside blkfront.
> 
> So from my PoV I think the right solution is to either use the same
> code (as much as possible) as it's currently used by Xen initiated
> suspend/resume, or to also switch Xen initiated suspension to use the
> newly introduced code.
> 
> Having two different approaches to suspend/resume in the same driver
> is a recipe for disaster IMO: it adds complexity by forcing developers
> to take into account two different suspend/resume approaches when
> there's no need for it.

I disagree. S3 and S4 suspend/resume (or perhaps we should call them power state transitions to avoid confusion) are quite different from Xen suspend/resume.
Power state transitions ought to be, and indeed are, visible to the software running inside the guest. Applications, as well as drivers, can receive notification and take whatever action they deem appropriate.
Xen suspend/resume OTOH is used when a guest is migrated, and the code should go to all lengths possible to make any software running inside the guest (other than Xen-specific enlightened code, such as PV drivers) completely unaware that anything has actually happened.
So, whilst it may be possible to use common routines to, for example, re-establish PV frontend/backend communication, PV frontend code should be acutely aware of the circumstances it is operating in. I can cite example code in the Windows PV drivers, which have supported guest S3/S4 power state transitions since day 1.
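
(A minimal sketch of that idea, purely illustrative and not existing blkfront
code: one shared reconnect helper that both paths call, with the caller
stating why it is reconnecting. The helper name, the enum and its values are
assumptions.)

	enum blkfront_resume_reason {
		BLKFRONT_RESUME_XEN,	/* migration / Xen suspend: keep the guest unaware */
		BLKFRONT_RESUME_PM,	/* S3/S4 power state transition: guest-visible */
	};

	/* Hypothetical common helper: re-establish the frontend/backend
	 * connection, while the reason decides e.g. whether to re-issue
	 * shadow requests (Xen) or rely on the frozen queue (PM). */
	static int blkfront_reconnect(struct blkfront_info *info,
				      enum blkfront_resume_reason reason)
	{
		/* ... shared ring and xenbus re-setup would go here ... */
		return 0;
	}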

  Paul

> 
> Thanks, Roger.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@...ts.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel
