Message-ID: <27240C0AC20F114CBF8149A2696CBE4A01C28A3B@SHSMSX101.ccr.corp.intel.com>
Date: Fri, 21 Feb 2014 12:29:44 +0000
From: "Liu, Chuansheng" <chuansheng.liu@...el.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Wang, Xiaoming" <xiaoming.wang@...el.com>
Subject: RE: [PATCH 1/2] genirq: Fix the possible synchronize_irq() wait-forever

Hello Thomas,
> -----Original Message-----
> From: Thomas Gleixner [mailto:tglx@...utronix.de]
> Sent: Friday, February 21, 2014 7:53 PM
> To: Liu, Chuansheng
> Cc: linux-kernel@...r.kernel.org; Wang, Xiaoming
> Subject: RE: [PATCH 1/2] genirq: Fix the possible synchronize_irq() wait-forever
>
> On Fri, 21 Feb 2014, Liu, Chuansheng wrote:
> > > > > I think you have a point there, but not on x86 where the atomic_dec
> > > > > and the spinlock on the queueing side are full barriers. For non-x86
> > > > > there is definitely a potential issue.
> > > > >
> > > > But even on X86, spin_unlock has no full barrier; consider the following scenario:
> > > >   CPU0                          CPU1
> > > >   spin_lock
> > > >                                 atomic_dec_and_test
> > > >   insert into queue
> > > >   spin_unlock
> > > >                                 checking waitqueue_active
> > >
> > > But CPU0 sees the 0, right?
> > It is not guaranteed here :)
> > The atomic_read() has no barrier.
> >
> > I found that commit 6cb2a21049b89 has a similar smp_mb() call before
> > waitqueue_active() on an x86 CPU.
>
> Indeed, you are completely right. Great detective work!
Thanks for your encouragement.
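
For reference, the two paths involved look roughly like this (a simplified
sketch based on kernel/irq/manage.c; the exact code may differ):

/* irq thread side (sketch): wake waiters once the last threaded handler is done */
static void wake_threads_waitq(struct irq_desc *desc)
{
	if (atomic_dec_and_test(&desc->threads_active) &&
	    waitqueue_active(&desc->wait_for_threads))
		wake_up(&desc->wait_for_threads);
}

/* synchronize_irq() side (sketch): queue ourselves and re-check the counter */
	wait_event(desc->wait_for_threads,
		   !atomic_read(&desc->threads_active));

Without a barrier between the atomic_dec_and_test() and the
waitqueue_active() check, the waker may not yet see the freshly queued
waiter and can skip the wake_up(); an smp_mb() in between would be one
way to close that window.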
>
> I'm inclined to remove the waitqueue_active() altogether. It's
> creating more headache than it's worth.
If I understand correctly, we would remove the waitqueue_active() check
and call wake_up() directly, which checks the wait list under spinlock protection.
If so, I can prepare a patch for it :)
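
Something like below is what I have in mind (just a sketch against the
current wake_threads_waitq(), to be confirmed):

static void wake_threads_waitq(struct irq_desc *desc)
{
	/*
	 * No waitqueue_active() check: wake_up() takes the waitqueue
	 * spinlock and walks the list itself, so a concurrently queued
	 * waiter cannot be missed and no extra barrier is needed here.
	 */
	if (atomic_dec_and_test(&desc->threads_active))
		wake_up(&desc->wait_for_threads);
}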