Message-ID: <alpine.DEB.1.10.0906191525470.14884@makko.or.mcafeemobile.com>
Date: Fri, 19 Jun 2009 15:47:11 -0700 (PDT)
From: Davide Libenzi <davidel@...ilserver.org>
To: Gregory Haskins <ghaskins@...ell.com>
cc: mst@...hat.com, kvm@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
avi@...hat.com, paulmck@...ux.vnet.ibm.com,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH 3/3] eventfd: add internal reference counting to fix
notifier race conditions

On Fri, 19 Jun 2009, Davide Libenzi wrote:
> On Fri, 19 Jun 2009, Gregory Haskins wrote:
>
> > I am fairly confident it is not that simple after having thought about
> > this issue over the last few days. But I've been wrong in the past.
> > Propose a patch and I will review it for races/correctness, if you
> > like. Perhaps a combination of that plus your asymmetrical locking
> > scheme would work. One of the challenges you will hit is avoiding ABBA
> > between your "get" lock and the wqh, but good luck!
>
> A patch for what? The eventfd patch is a one-liner.
> It seems hard to believe that the thing cannot be handled on your side.
> Once the wake_up_locked() is turned into a wake_up(), what other races are
> there?

AFAICS, the IRQfd code simply registers a callback with eventfd's ->poll()
and then waits for two events.
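
For concreteness, this is the minimal irqfd state I am assuming on your
side (a sketch only - the field names just mirror the snippets below):

	struct irqfd {
		spinlock_t slock;		/* guards wqh */
		wait_queue_head_t *wqh;		/* NULL once detached */
		wait_queue_t wait;		/* entry hooked via ->poll() */
		struct work_struct inject;	/* deferred inject/shutdown */
	};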

In the POLLIN case, you schedule_work(&irqfd->inject) and there are no
races there AFAICS (on that path you basically do not care about any
eventfd-related memory at all).

For POLLHUP, you do:

	spin_lock(&irqfd->slock);
	if (irqfd->wqh)
		schedule_work(&irqfd->inject);
	irqfd->wqh = NULL;
	spin_unlock(&irqfd->slock);
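
Put together, the wakeup callback you hook into eventfd's wait queue
could look something like below (again only a sketch - irqfd_wakeup()
and its exact dispatch are my assumption of how your side is laid out):

	static int irqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync,
				void *key)
	{
		struct irqfd *irqfd = container_of(wait, struct irqfd, wait);
		unsigned long flags = (unsigned long) key;

		if (flags & POLLIN)
			/* Eventfd got signaled: defer the injection
			 * to process context. */
			schedule_work(&irqfd->inject);
		if (flags & POLLHUP) {
			/* Eventfd is going away: detach as above. */
			spin_lock(&irqfd->slock);
			if (irqfd->wqh)
				schedule_work(&irqfd->inject);
			irqfd->wqh = NULL;
			spin_unlock(&irqfd->slock);
		}
		return 0;
	}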

In your work function you then notice the POLLHUP condition and take the
proper action (dunno what that is in your case).
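
Say, something like this (irqfd_inject() and the shutdown branch are
placeholders - only you know what the proper action is there):

	static void irqfd_inject(struct work_struct *work)
	{
		struct irqfd *irqfd = container_of(work, struct irqfd,
						   inject);
		int hup;

		spin_lock(&irqfd->slock);
		hup = irqfd->wqh == NULL;
		spin_unlock(&irqfd->slock);
		if (hup) {
			/* POLLHUP: eventfd went away, tear irqfd down. */
			/* ... proper action here ... */
			return;
		}
		/* POLLIN: inject the interrupt into the guest. */
		/* kvm_set_irq(...); */
	}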

In your kvm_irqfd_release() function:

	spin_lock(&irqfd->slock);
	if (irqfd->wqh)
		remove_wait_queue(irqfd->wqh, &irqfd->wait);
	irqfd->wqh = NULL;
	spin_unlock(&irqfd->slock);

Whichever side gets irqfd->slock first clears wqh, so the detach happens
exactly once and remove_wait_queue() can never run against an eventfd
that already signaled POLLHUP and went away.

Any races in there?

- Davide