Message-ID: <1361161533.2801.24.camel@bling.home>
Date: Sun, 17 Feb 2013 21:25:33 -0700
From: Alex Williamson <alex.williamson@...hat.com>
To: Li Zefan <lizefan@...wei.com>
Cc: Marcelo Tosatti <mtosatti@...hat.com>,
Gleb Natapov <gleb@...hat.com>, kvm@...r.kernel.org,
Davide Libenzi <davidel@...ilserver.org>,
LKML <linux-kernel@...r.kernel.org>,
Cgroups <cgroups@...r.kernel.org>,
Gregory Haskins <ghaskins@...ell.com>,
"Kirill A. Shutemov" <kirill@...temov.name>
Subject: Re: [RFC][PATCH] kvm: fix a race when closing irq eventfd
On Mon, 2013-02-18 at 12:09 +0800, Li Zefan wrote:
> On 2013/2/18 12:02, Alex Williamson wrote:
> > On Mon, 2013-02-18 at 11:13 +0800, Li Zefan wrote:
> >> While trying to fix a race when closing cgroup eventfd, I took a look
> >> at how kvm deals with this problem, and I found it doesn't.
> >>
> >> I may be wrong, as I don't know the kvm code, so correct me if I'm wrong.
> >>
> >> /*
> >>  * Race-free decouple logic (ordering is critical)
> >>  */
> >> static void
> >> irqfd_shutdown(struct work_struct *work)
> >>
> >> I don't think it's race-free!
> >>
> >> static int
> >> irqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync, void *key)
> >> {
> >>         ...
> >>          * We cannot race against the irqfd going away since the
> >>          * other side is required to acquire wqh->lock, which we hold
> >>          */
> >>         if (irqfd_is_active(irqfd))
> >>                 irqfd_deactivate(irqfd);
> >> }
> >>
> >> In kvm_irqfd_deassign() and kvm_irqfd_release() where irqfds are freed,
> >> wqh->lock is not acquired!
> >>
> >> So here is the race:
> >>
> >>                CPU0                                   CPU1
> >> -----------------------------------    ---------------------------------
> >> kvm_irqfd_release()
> >>   spin_lock(kvm->irqfds.lock);
> >>   ...
> >>   irqfd_deactivate(irqfd);
> >>     list_del_init(&irqfd->list);
> >>   spin_unlock(kvm->irqfds.lock);
> >>   ...
> >>                                         close(eventfd)
> >>                                           irqfd_wakeup();
> >
> > irqfd_wakeup is assumed to be called with wqh->lock held
> >
>
> I'm aware of this.
>
> As I said, kvm_irqfd_deassign() and kvm_irqfd_release() are not acquiring
> wqh->lock.
They do when they call eventfd_ctx_remove_wait_queue. The irqfd is
enabled until that point and the list_del_init prevents multiple paths
from calling irqfd_deactivate.
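
For reference, the shutdown work looks roughly like this (paraphrased
from virt/kvm/eventfd.c from memory, so details may differ slightly):

static void
irqfd_shutdown(struct work_struct *work)
{
        struct _irqfd *irqfd = container_of(work, struct _irqfd, shutdown);
        u64 cnt;

        /*
         * Synchronize with the wait-queue and unhook ourselves to
         * prevent further events.  This is where wqh->lock is taken.
         */
        eventfd_ctx_remove_wait_queue(irqfd->eventfd, &irqfd->wait, &cnt);

        /*
         * No new events can be scheduled at this point, wait for any
         * outstanding injection to finish.
         */
        flush_work(&irqfd->inject);

        /* Only now is it safe to release the object */
        eventfd_ctx_put(irqfd->eventfd);
        kfree(irqfd);
}
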
> >> irqfd_shutdown();
> >
> > eventfd_ctx_remove_wait_queue has to acquire wqh->lock to complete or
> > else irqfd_shutdown never makes it to the kfree.  So in your scenario,
> > CPU0 spins here until CPU1 completes.
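
That's because eventfd_ctx_remove_wait_queue itself takes the lock;
from memory (fs/eventfd.c, abridged) it's roughly:

int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx, wait_queue_t *wait,
                                  __u64 *cnt)
{
        unsigned long flags;

        /* Blocks here until the wakeup side drops wqh->lock */
        spin_lock_irqsave(&ctx->wqh.lock, flags);
        eventfd_ctx_do_read(ctx, cnt);
        __remove_wait_queue(&ctx->wqh, wait);
        if (*cnt != 0 && waitqueue_active(&ctx->wqh))
                wake_up_locked_poll(&ctx->wqh, POLLOUT);
        spin_unlock_irqrestore(&ctx->wqh.lock, flags);

        return *cnt != 0 ? 0 : -EAGAIN;
}
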
> >
> >>   remove_waitqueue(irqfd->wait);
> >>   kfree(irqfd);
> >>                                             spin_lock(kvm->irqfds.lock);
> >>                                             if (!list_empty(&irqfd->list))
> >
> > We don't take this branch because we already did list_del_init above,
> > which makes irqfd->list empty.
> >
>
> It doesn't matter if the list is empty or not.
Note that this is not kvm->irqfds.items; we're testing whether the
individual irqfd has been detached from the list.
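
For clarity, the helpers in question look roughly like this (again
paraphrased from memory, not verbatim):

static bool
irqfd_is_active(struct _irqfd *irqfd)
{
        /* "active" means still linked into kvm->irqfds.items */
        return list_empty(&irqfd->list) ? false : true;
}

/* Must be called with kvm->irqfds.lock held */
static void
irqfd_deactivate(struct _irqfd *irqfd)
{
        BUG_ON(!irqfd_is_active(irqfd));

        /* leaves irqfd->list empty, so irqfd_is_active() goes false */
        list_del_init(&irqfd->list);

        /* defer the actual teardown to the cleanup workqueue */
        queue_work(irqfd_cleanup_wq, &irqfd->shutdown);
}
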
> The point is, irqfd has been kfreed, so the if statement is simply not safe!
It cannot be kfreed.  As noted above, the CPU0 path blocks trying to
acquire wqh->lock, which is already held by CPU1.  The call to
eventfd_ctx_remove_wait_queue atomically removes the wait queue entry
once wqh->lock is acquired, so after that point we're ok to kfree it.
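
So, as far as I can tell, the interleaving in your diagram actually
serializes like this (my reading of the code, same layout as yours):

               CPU0                                   CPU1
-----------------------------------    ---------------------------------
kvm_irqfd_release()
  spin_lock(kvm->irqfds.lock);
  irqfd_deactivate(irqfd);
    list_del_init(&irqfd->list);
    queue_work(irqfd_cleanup_wq, &irqfd->shutdown);
  spin_unlock(kvm->irqfds.lock);
                                        close(eventfd)
                                          spin_lock(wqh->lock);
                                          irqfd_wakeup();
irqfd_shutdown()
  eventfd_ctx_remove_wait_queue()
    /* spins waiting for wqh->lock */
                                            irqfd_is_active() == false,
                                              so nothing to do
                                          spin_unlock(wqh->lock);
    /* acquires wqh->lock, unhooks irqfd->wait */
  kfree(irqfd);   /* irqfd_wakeup() has already returned */
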
Thanks,
Alex
> >>                                               irqfd_deactivate(irqfd);
> >>                                                 list_del_init(&irqfd->list);
> >>                                             spin_unlock(kvm->irqfds.lock);
> >>
> >> Look, we're accessing irqfd even though it has already been freed!
> >
> > Unless the irqfd_wakeup path isn't acquiring wqh->lock, it looks
> > race-free to me. Thanks,
> >
> > Alex
> >
> >
>