Message-ID: <HK0P153MB0273B954294B331E20AACB41BFD20@HK0P153MB0273.APCP153.PROD.OUTLOOK.COM>
Date: Wed, 22 Apr 2020 01:48:25 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: Ming Lei <ming.lei@...hat.com>
CC: "jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"hch@....de" <hch@....de>,
"bvanassche@....org" <bvanassche@....org>,
"hare@...e.de" <hare@...e.de>,
Michael Kelley <mikelley@...rosoft.com>,
Long Li <longli@...rosoft.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
KY Srinivasan <kys@...rosoft.com>
Subject: RE: [PATCH] scsi: storvsc: Fix a panic in the hibernation procedure
> From: Ming Lei <ming.lei@...hat.com>
> Sent: Tuesday, April 21, 2020 6:28 PM
> To: Dexuan Cui <decui@...rosoft.com>
>
> On Tue, Apr 21, 2020 at 05:17:24PM -0700, Dexuan Cui wrote:
> > During hibernation, the sdevs are suspended automatically in
> > drivers/scsi/scsi_pm.c before storvsc_suspend(), so after
> > storvsc_suspend(), there is no disk I/O from the file systems, but there
> > can still be disk I/O from the kernel space, e.g. disk_check_events() ->
> > sr_block_check_events() -> cdrom_check_events() can still submit I/O
> > to the storvsc driver, which causes a NULL pointer dereference panic,
> > since storvsc has closed the vmbus channel in storvsc_suspend(); refer
> > to the links below for more info:
> >
> > Fix the panic by blocking/unblocking all the I/O queues properly.
> >
> > Note: this patch depends on another patch "scsi: core: Allow the state
> > change from SDEV_QUIESCE to SDEV_BLOCK" (refer to the second link
> > above).
> >
> > Fixes: 56fb10585934 ("scsi: storvsc: Add the support of hibernation")
> > Signed-off-by: Dexuan Cui <decui@...rosoft.com>
> > ---
> > drivers/scsi/storvsc_drv.c | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
> > index fb41636519ee..fd51d2f03778 100644
> > --- a/drivers/scsi/storvsc_drv.c
> > +++ b/drivers/scsi/storvsc_drv.c
> > @@ -1948,6 +1948,11 @@ static int storvsc_suspend(struct hv_device *hv_dev)
> > struct storvsc_device *stor_device = hv_get_drvdata(hv_dev);
> > struct Scsi_Host *host = stor_device->host;
> > struct hv_host_device *host_dev = shost_priv(host);
> > + int ret;
> > +
> > + ret = scsi_host_block(host);
> > + if (ret)
> > + return ret;
> >
> > storvsc_wait_to_drain(stor_device);
> >
> > @@ -1968,10 +1973,15 @@ static int storvsc_suspend(struct hv_device *hv_dev)
> >
> > static int storvsc_resume(struct hv_device *hv_dev)
> > {
> > + struct storvsc_device *stor_device = hv_get_drvdata(hv_dev);
> > + struct Scsi_Host *host = stor_device->host;
> > int ret;
> >
> > ret = storvsc_connect_to_vsp(hv_dev, storvsc_ringbuffer_size,
> > hv_dev_is_fc(hv_dev));
> > + if (!ret)
> > + ret = scsi_host_unblock(host, SDEV_RUNNING);
> > +
> > return ret;
> > }
>
> scsi_host_block() is actually too heavy for just avoiding
> scsi internal command, which can be done simply by one atomic
> variable.
>
> Not to mention, scsi_host_block() is implemented rather clumsily, because
> nr_luns * synchronize_rcu() calls are required in scsi_host_block(),
> which should have been optimized to just one.
>
> Also, scsi_device_quiesce() is heavy too; it still takes 2
> synchronize_rcu() calls for one LUN.
>
> That is to say, SCSI suspend may take (3 * nr_luns) synchronize_rcu()
> calls in case the HBA's suspend handler needs scsi_host_block().
>
> Thanks,
> Ming
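
If I read the code correctly, scsi_host_block() walks every scsi_device on
the host and blocks its request queue, and each per-device block waits for
an RCU grace period, which is where the nr_luns * synchronize_rcu() figure
comes from. A simplified sketch of that shape (illustrative only, not the
verbatim scsi_lib.c code):

/*
 * Simplified shape of scsi_host_block() -- illustrative only.
 */
int scsi_host_block(struct Scsi_Host *shost)
{
	struct scsi_device *sdev;
	int ret = 0;

	shost_for_each_device(sdev, shost) {
		/*
		 * Blocking each device ends up quiescing its request
		 * queue, and each quiesce waits for an RCU grace
		 * period -- roughly one synchronize_rcu() per LUN.
		 */
		ret = scsi_internal_device_block(sdev);
		if (ret)
			break;
	}
	return ret;
}
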
When we're in storvsc_suspend(), all the userspace processes have been
frozen and all the file systems have been flushed, so there should not
be much I/O from kernel space, and IMO scsi_host_block() should be
pretty fast here.
Of course, any performance improvement to the scsi_host_block() API is
still great. :-)
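
For reference, my reading of the "one atomic variable" idea is roughly the
sketch below. The "suspending" flag, its placement in struct storvsc_device,
and the early return in storvsc_queuecommand() are only illustrative
assumptions, not something this patch or the current driver adds:

/*
 * Hedged sketch: gate I/O on a single atomic flag instead of calling
 * scsi_host_block(). The "suspending" field is hypothetical.
 */
struct storvsc_device {
	/* ... existing fields ... */
	atomic_t suspending;	/* nonzero while the vmbus channel is closed */
};

static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
{
	struct hv_host_device *host_dev = shost_priv(host);
	struct storvsc_device *stor_device = hv_get_drvdata(host_dev->dev);

	/* Refuse new I/O once suspend has started; the midlayer retries later. */
	if (atomic_read(&stor_device->suspending))
		return SCSI_MLQUEUE_HOST_BUSY;

	/* ... normal command submission path ... */
	return 0;
}

static int storvsc_suspend(struct hv_device *hv_dev)
{
	struct storvsc_device *stor_device = hv_get_drvdata(hv_dev);

	atomic_set(&stor_device->suspending, 1);
	storvsc_wait_to_drain(stor_device);
	/* ... close the vmbus channel and tear down as before ... */
	return 0;
}

static int storvsc_resume(struct hv_device *hv_dev)
{
	struct storvsc_device *stor_device = hv_get_drvdata(hv_dev);
	int ret;

	ret = storvsc_connect_to_vsp(hv_dev, storvsc_ringbuffer_size,
				     hv_dev_is_fc(hv_dev));
	if (!ret)
		atomic_set(&stor_device->suspending, 0);
	return ret;
}

That said, this patch sticks with scsi_host_block()/scsi_host_unblock(),
since they already take care of the sdev state transitions for us.
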
Thanks,
-- Dexuan