Date:	Sun, 17 Feb 2013 21:02:47 -0700
From:	Alex Williamson <alex.williamson@...hat.com>
To:	Li Zefan <lizefan@...wei.com>
Cc:	Marcelo Tosatti <mtosatti@...hat.com>,
	Gleb Natapov <gleb@...hat.com>, kvm@...r.kernel.org,
	Davide Libenzi <davidel@...ilserver.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Cgroups <cgroups@...r.kernel.org>,
	Gregory Haskins <ghaskins@...ell.com>,
	"Kirill A. Shutemov" <kirill@...temov.name>
Subject: Re: [RFC][PATCH] kvm: fix a race when closing irq eventfd

On Mon, 2013-02-18 at 11:13 +0800, Li Zefan wrote:
> While trying to fix a race when closing cgroup eventfd, I took a look
> at how kvm deals with this problem, and I found it doesn't.
> 
> I may be wrong, as I don't know the kvm code, so correct me if I am.
> 
> 	/*
> 	 * Race-free decouple logic (ordering is critical)
> 	 */
> 	static void
> 	irqfd_shutdown(struct work_struct *work)
> 
> I don't think it's race-free!
> 
> 	static int
> 	irqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync, void *key)
> 	{
> 	...
> 			 * We cannot race against the irqfd going away since the
> 			 * other side is required to acquire wqh->lock, which we hold
> 			 */
> 			if (irqfd_is_active(irqfd))
> 				irqfd_deactivate(irqfd);
> 	}
> 
> In kvm_irqfd_deassign() and kvm_irqfd_release() where irqfds are freed,
> wqh->lock is not acquired!
> 
> So here is the race:
> 
> CPU0                                    CPU1
> -----------------------------------     ---------------------------------
> kvm_irqfd_release()
>   spin_lock(kvm->irqfds.lock);
>   ...
>   irqfd_deactivate(irqfd);
>     list_del_init(&irqfd->list);
>   spin_unlock(kvm->irqfds.lock);
>   ...
> 					close(eventfd)
> 					  irqfd_wakeup();

irqfd_wakeup() is called from the eventfd wakeup path (wake_up_poll() when
the eventfd is signalled or closed), which holds wqh->lock, so it runs with
that lock held.

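For reference, the wakeup side looks roughly like this (a sketch of the core
waitqueue code from memory, not verbatim; exact signatures may differ), which
is where that lock comes from:

	void __wake_up(wait_queue_head_t *q, unsigned int mode,
		       int nr_exclusive, void *key)
	{
		unsigned long flags;

		/* callbacks such as irqfd_wakeup() are invoked under q->lock,
		 * i.e. wqh->lock from the irqfd's point of view */
		spin_lock_irqsave(&q->lock, flags);
		__wake_up_common(q, mode, nr_exclusive, 0, key);
		spin_unlock_irqrestore(&q->lock, flags);
	}
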
>     irqfd_shutdown();

eventfd_ctx_remove_wait_queue() has to acquire wqh->lock to complete; until
it does, irqfd_shutdown() never makes it to the kfree().  So in your scenario
CPU0 spins here until CPU1 completes.

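Roughly, from memory (not verbatim; helper and field names approximate):

	static void
	irqfd_shutdown(struct work_struct *work)
	{
		struct _irqfd *irqfd = container_of(work, struct _irqfd, shutdown);
		u64 cnt;

		/* serializes on wqh->lock; cannot return while irqfd_wakeup()
		 * is still running on another cpu */
		eventfd_ctx_remove_wait_queue(irqfd->eventfd, &irqfd->wait, &cnt);

		/* no new events can be queued past this point; wait for any
		 * outstanding injection work before releasing resources */
		flush_work(&irqfd->inject);

		eventfd_ctx_put(irqfd->eventfd);
		kfree(irqfd);
	}
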
>       remove_waitqueue(irqfd->wait);
>       kfree(irqfd);
> 					    spin_lock(kvm->irqfds.lock);
> 					      if (!list_empty(&irqfd->list))

We don't take this branch because we already did list_del_init above,
which makes irqfd->list empty.

> 						irqfd_deactivate(irqfd);
> 						  list_del_init(&irqfd->list);
> 					    spin_unlock(kvm->irqfds.lock);
> 
> Look, we're accessing irqfd though it has already been freed!

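For completeness, the teardown side you quote only takes kvm->irqfds.lock and
defers the actual removal from wqh to the shutdown work, roughly (again a
sketch from memory, not verbatim):

	/* called with kvm->irqfds.lock held */
	static void
	irqfd_deactivate(struct _irqfd *irqfd)
	{
		BUG_ON(!irqfd_is_active(irqfd));

		list_del_init(&irqfd->list);

		/* the unhooking from wqh (and the kfree) happens later, in
		 * irqfd_shutdown() on the cleanup workqueue */
		queue_work(irqfd_cleanup_wq, &irqfd->shutdown);
	}
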
Unless the irqfd_wakeup() path somehow runs without wqh->lock held, this
looks race-free to me.  Thanks,

Alex
