Message-ID: <20220329100859-mutt-send-email-mst@kernel.org>
Date: Tue, 29 Mar 2022 10:37:53 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Jason Wang <jasowang@...hat.com>,
virtualization <virtualization@...ts.linux-foundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Marc Zyngier <maz@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Stefano Garzarella <sgarzare@...hat.com>,
Keir Fraser <keirf@...gle.com>,
"Paul E. McKenney" <paulmck@...nel.org>
Subject: Re:

On Tue, Mar 29, 2022 at 10:35:21AM +0200, Thomas Gleixner wrote:
> On Mon, Mar 28 2022 at 06:40, Michael S. Tsirkin wrote:
> > On Mon, Mar 28, 2022 at 02:18:22PM +0800, Jason Wang wrote:
> >> > > So I think we might talk different issues:
> >> > >
> >> > > 1) Whether request_irq() commits the previous setups: I think the
> >> > > answer is yes, since the spin_unlock of desc->lock (release) can
> >> > > guarantee this, though there seems to be no documentation around
> >> > > request_irq() that says this.
> >> > >
> >> > > And I can see that at least drivers/video/fbdev/omap2/omapfb/dss/dispc.c
> >> > > uses smp_wmb() before the request_irq().
>
> That's a completely bogus example, especially as there is not a single
> smp_rmb() which pairs with the smp_wmb().
>
> >> > > And even if the write is ordered, we still need the read to be
> >> > > ordered to pair with it.
> >
> > IMO it synchronizes with the CPU to which the irq is
> > delivered. Otherwise basically all drivers would be broken,
> > wouldn't they?
> > I don't know whether it's correct on all platforms, but if not
> > we need to fix request_irq.
>
> There is nothing to fix:
>
> request_irq()
> raw_spin_lock_irq(desc->lock); // ACQUIRE
> ....
> raw_spin_unlock_irq(desc->lock); // RELEASE
>
> interrupt()
> raw_spin_lock(desc->lock); // ACQUIRE
> set status to IN_PROGRESS
> raw_spin_unlock(desc->lock); // RELEASE
> invoke handler()
>
> So anything which the driver set up _before_ request_irq() is visible to
> the interrupt handler. No?
> >> What happens if an interrupt is raised in the middle like:
> >>
> >> smp_store_release(dev->irq_soft_enabled, true)
> >> IRQ handler
> >> synchronize_irq()
>
> This is bogus. The obvious order of things is:
>
> dev->ok = false;
> request_irq();
>
> moar_setup();
> synchronize_irq(); // ACQUIRE + RELEASE
> dev->ok = true;
>
> The reverse operation on teardown:
>
> dev->ok = false;
> synchronize_irq(); // ACQUIRE + RELEASE
>
> teardown();
>
> So in both cases a simple check in the handler is sufficient:
>
> handler()
> if (!dev->ok)
> return;
Thanks a lot for the analysis, Thomas. This is more or less what I was
thinking.
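Spelled out in the driver, I'd expect it to look roughly like this
(just a sketch, the foo_* names are made up, not actual virtio code):

        static irqreturn_t foo_interrupt(int irq, void *data)
        {
                struct foo_device *dev = data;

                /* not fully set up yet: ignore early/spurious interrupts */
                if (!dev->ok)
                        return IRQ_NONE;

                /* ... real interrupt handling ... */
                return IRQ_HANDLED;
        }

        static int foo_setup(struct foo_device *dev)
        {
                int err;

                dev->ok = false;

                err = request_irq(dev->irq, foo_interrupt, 0, "foo", dev);
                if (err)
                        return err;

                moar_setup();
                synchronize_irq(dev->irq);      /* ACQUIRE + RELEASE */
                dev->ok = true;
                return 0;
        }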
>
> I'm not understanding what you folks are trying to "fix" here.
We are trying to fix the driver, since at the moment it does not
have the dev->ok flag at all.
And I suspect virtio is not alone in that.
So it would have been nice if there were a standard flag
replacing the driver-specific dev->ok above, ideally one that
would also handle the case of an interrupt triggering too early
by deferring the interrupt until the flag is set.
And in fact, something like that does kind of exist: IRQF_NO_AUTOEN, where
you would call enable_irq() instead of dev->ok = true, except:
- it doesn't work with affinity-managed IRQs
- it doesn't work with shared IRQs
So using dev->ok as you propose above seems better at this point.
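For completeness, if IRQF_NO_AUTOEN did apply here, the setup path would
look something like this (sketch only, assuming a non-shared,
non-managed IRQ):

        /* IRQ stays disabled until we are ready */
        err = request_irq(dev->irq, foo_interrupt, IRQF_NO_AUTOEN, "foo", dev);
        if (err)
                return err;

        moar_setup();
        /* no early interrupts possible; this replaces dev->ok = true */
        enable_irq(dev->irq);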
> If any
> driver does this in the wrong order, then the driver is broken.
I agree, however:
$ git grep synchronize_irq `git grep -l request_irq drivers/net/`|wc -l
113
$ git grep -l request_irq drivers/net/|wc -l
397
I suspect there are many more drivers which in theory need the
synchronize_irq() dance but in practice do not do it.
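For reference, the teardown ordering such drivers would need, following
your sequence above (sketch):

        dev->ok = false;
        /* handler either sees ok == false or has already completed */
        synchronize_irq(dev->irq);

        teardown();
        free_irq(dev->irq, dev);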
> Sure, you can do the same with:
>
> dev->ok = false;
> request_irq();
> moar_setup();
> smp_wmb();
> dev->ok = true;
>
> for the price of a smp_rmb() in the interrupt handler:
>
> handler()
> if (!dev->ok)
> return;
> smp_rmb();
>
> but that's only working for the setup case correctly and not for
> teardown.
>
> Thanks,
>
> tglx