Message-Id: <b52e3aac-208c-8d55-cc77-2534a375fda5@linux.vnet.ibm.com>
Date: Thu, 19 Apr 2018 16:14:25 +0200
From: Pierre Morel <pmorel@...ux.vnet.ibm.com>
To: Cornelia Huck <cohuck@...hat.com>,
Dong Jia Shi <bjsdjshi@...ux.ibm.com>,
Halil Pasic <pasic@...ux.ibm.com>
Cc: linux-s390@...r.kernel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] vfio-ccw: process ssch with interrupts disabled
On 13/04/2018 16:05, Cornelia Huck wrote:
> When we call ssch, an interrupt might already be pending once we
> return from the START SUBCHANNEL instruction. Therefore we need to
> make sure interrupts are disabled until after we're done with our
> processing.
>
> Note that the subchannel lock is the same as the ccwdevice lock that
> is mentioned in the documentation for ccw_device_start() and friends.
>
> Signed-off-by: Cornelia Huck <cohuck@...hat.com>
> ---
> drivers/s390/cio/vfio_ccw_fsm.c | 19 ++++++++++++-------
> 1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c
> index ff6963ad6e39..3c800642134e 100644
> --- a/drivers/s390/cio/vfio_ccw_fsm.c
> +++ b/drivers/s390/cio/vfio_ccw_fsm.c
> @@ -20,12 +20,12 @@ static int fsm_io_helper(struct vfio_ccw_private *private)
>  	int ccode;
>  	__u8 lpm;
>  	unsigned long flags;
> +	int ret;
>
>  	sch = private->sch;
>
>  	spin_lock_irqsave(sch->lock, flags);
>  	private->state = VFIO_CCW_STATE_BUSY;
> -	spin_unlock_irqrestore(sch->lock, flags);
>
>  	orb = cp_get_orb(&private->cp, (u32)(addr_t)sch, sch->lpm);
>
> @@ -38,10 +38,12 @@ static int fsm_io_helper(struct vfio_ccw_private *private)
>  		 * Initialize device status information
>  		 */
>  		sch->schib.scsw.cmd.actl |= SCSW_ACTL_START_PEND;
> -		return 0;
> +		ret = 0;
> +		break;
>  	case 1:		/* Status pending */
>  	case 2:		/* Busy */
> -		return -EBUSY;
> +		ret = -EBUSY;
> +		break;
>  	case 3:		/* Device/path not operational */
>  	{
>  		lpm = orb->cmd.lpm;
> @@ -51,13 +53,16 @@ static int fsm_io_helper(struct vfio_ccw_private *private)
>  			sch->lpm = 0;
>
>  		if (cio_update_schib(sch))
> -			return -ENODEV;
> -
> -		return sch->lpm ? -EACCES : -ENODEV;
> +			ret = -ENODEV;
> +		else
> +			ret = sch->lpm ? -EACCES : -ENODEV;
> +		break;
>  	}
>  	default:
> -		return ccode;
> +		ret = ccode;
>  	}
> +	spin_unlock_irqrestore(sch->lock, flags);
> +	return ret;
>  }
>
>  static void fsm_notoper(struct vfio_ccw_private *private,
I have been working on a patch that solves this problem, among others; I will
post it soon.
It is much more intrusive, reworking the interrupt handling and the state
machine, so you may not like it.
If we stay with this patch: even though it holds the spinlock for quite a long
time around ssch and stsch, we do need it in the current implementation.
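To make the lock scope under discussion concrete, here is a minimal sketch
(the struct, the state value and the demo_* helpers are invented for
illustration; this is not the vfio-ccw code): the start path holds the same
lock the interrupt handler takes, with interrupts disabled, across the start
request and the subsequent status update, so a pending interrupt can only be
processed once the state is consistent.

#include <linux/spinlock.h>

struct demo_subchannel {
	spinlock_t lock;	/* initialized with spin_lock_init(); also taken by the IRQ handler */
	int state;		/* simplified stand-in for the FSM state */
};

/* Process context: start I/O while serialized against the IRQ handler. */
static int demo_start_io(struct demo_subchannel *sch)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&sch->lock, flags);	/* interrupts off from here on */
	sch->state = 1;				/* mark BUSY before issuing the start */

	/* ... issue the start request and inspect its condition code ... */
	ret = 0;				/* pretend the start was accepted */

	/*
	 * Only now drop the lock and re-enable interrupts: an interrupt that
	 * became pending right after the start is handled only after the
	 * state update above is complete.
	 */
	spin_unlock_irqrestore(&sch->lock, flags);
	return ret;
}

/* Hard-IRQ context (interrupts already disabled): same lock, so it waits. */
static void demo_handle_irq(struct demo_subchannel *sch)
{
	spin_lock(&sch->lock);
	/* ... process the interrupt and update sch->state ... */
	spin_unlock(&sch->lock);
}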
Acked-by: Pierre Morel <pmorel@...ux.vnet.ibm.com>
--
Pierre Morel
Linux/KVM/QEMU in Böblingen - Germany