Message-ID: <CAJedcCwQ6rbgd0sAye5owMPmcd1bd4ZagWnG0JigE+y42_zGEg@mail.gmail.com>
Date: Fri, 21 Apr 2023 10:45:12 +0800
From: Zheng Hacker <hackerzheng666@...il.com>
To: Manish Rangankar <mrangankar@...vell.com>
Cc: Zheng Wang <zyytlz.wz@....com>,
Nilesh Javali <njavali@...vell.com>,
GR-QLogic-Storage-Upstream <GR-QLogic-Storage-Upstream@...vell.com>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"1395428693sheep@...il.com" <1395428693sheep@...il.com>,
"alex000young@...il.com" <alex000young@...il.com>
Subject: Re: [EXT] [PATCH v2] scsi: qedi: Fix use after free bug in
qedi_remove due to race condition

On Thu, Apr 20, 2023 at 1:49 PM Manish Rangankar <mrangankar@...vell.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Zheng Wang <zyytlz.wz@....com>
> > Sent: Thursday, April 13, 2023 9:04 AM
> > To: Nilesh Javali <njavali@...vell.com>
> > Cc: Manish Rangankar <mrangankar@...vell.com>; GR-QLogic-Storage-
> > Upstream <GR-QLogic-Storage-Upstream@...vell.com>;
> > jejb@...ux.ibm.com; martin.petersen@...cle.com; linux-
> > scsi@...r.kernel.org; linux-kernel@...r.kernel.org;
> > hackerzheng666@...il.com; 1395428693sheep@...il.com;
> > alex000young@...il.com; Zheng Wang <zyytlz.wz@....com>
> > Subject: [EXT] [PATCH v2] scsi: qedi: Fix use after free bug in qedi_remove
> > due to race condition
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > qedi_probe() calls __qedi_probe(), which binds &qedi->recovery_work
> > to qedi_recovery_handler() and &qedi->board_disable_work to
> > qedi_board_disable_work().
> >
> > When qedi_schedule_recovery_handler() is called, it eventually calls
> > schedule_delayed_work() to queue the work.
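> >
> > A minimal sketch of that delayed-work pattern (struct and function
> > names here are simplified for illustration and are not the exact
> > qedi code):
> >
> >     #include <linux/workqueue.h>
> >
> >     struct qedi_sketch {
> >         struct delayed_work recovery_work;
> >     };
> >
> >     /* Handler bound to the work item; it runs later on a workqueue
> >      * and can therefore race with driver removal.
> >      */
> >     static void recovery_handler(struct work_struct *work)
> >     {
> >     }
> >
> >     static void probe_sketch(struct qedi_sketch *qedi)
> >     {
> >         /* bind the work item to its handler at probe time */
> >         INIT_DELAYED_WORK(&qedi->recovery_work, recovery_handler);
> >     }
> >
> >     static void schedule_recovery_sketch(struct qedi_sketch *qedi)
> >     {
> >         /* queue the handler to run on the system workqueue */
> >         schedule_delayed_work(&qedi->recovery_work, 0);
> >     }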
> >
> > When qedi_remove() is called to remove the driver, the following
> > sequence is possible:
> >
> > CPU0                          CPU1
> >                               |qedi_recovery_handler
> > qedi_remove                   |
> >  __qedi_remove                |
> >   iscsi_host_free             |
> >    scsi_host_put              |
> >    //free shost               |
> >                               |iscsi_host_for_each_session
> >                               |//use qedi->shost
> >
> > Fix it by cancelling the work in __qedi_remove() before the cleanup
> > that frees the host; cancel_delayed_work_sync() also waits for a
> > handler that is already running to finish.
> >
> > Fixes: 4b1068f5d74b ("scsi: qedi: Add MFW error recovery process")
> > Signed-off-by: Zheng Wang <zyytlz.wz@....com>
> > ---
> > v2:
> > - remove the unnecessary comment, as suggested by Mike Christie, and
> > cancel the work after qedi_ops->stop and qedi_ops->ll2->stop, which
> > ensures no further work is queued, as suggested by Manish Rangankar
> > ---
> > drivers/scsi/qedi/qedi_main.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
> > index f2ee49756df8..45d359554182 100644
> > --- a/drivers/scsi/qedi/qedi_main.c
> > +++ b/drivers/scsi/qedi/qedi_main.c
> > @@ -2450,6 +2450,9 @@ static void __qedi_remove(struct pci_dev *pdev,
> > int mode)
> > qedi_ops->ll2->stop(qedi->cdev);
> > }
> >
> > + cancel_delayed_work_sync(&qedi->recovery_work);
> > + cancel_delayed_work_sync(&qedi->board_disable_work);
> > +
> > qedi_free_iscsi_pf_param(qedi);
> >
> > rval = qedi_ops->common->update_drv_state(qedi->cdev, false);
> > --
> > 2.25.1
>
> Thanks,
>
> Acked-by: Manish Rangankar <mrangankar@...vell.com>
>
Thanks for your review.
Best regards,
Zheng