Message-ID: <20100801084136.GC16158@redhat.com>
Date:	Sun, 1 Aug 2010 11:41:36 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	"David S. Miller" <davem@...emloft.net>,
	Sridhar Samudrala <samudrala@...ibm.com>,
	Jeff Dike <jdike@...ux.intel.com>,
	Juan Quintela <quintela@...hat.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Takuya Yoshikawa <yoshikawa.takuya@....ntt.co.jp>,
	David Stevens <dlstevens@...ibm.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	kvm@...r.kernel.org, virtualization@...ts.osdl.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] vhost: locking/rcu cleanup

On Fri, Jul 30, 2010 at 04:49:54PM +0200, Tejun Heo wrote:
> Hello,
> 
> On 07/29/2010 02:23 PM, Michael S. Tsirkin wrote:
> > I saw WARN_ON(!list_empty(&dev->work_list)) trigger,
> > so our custom flush is not as airtight as it needs to be.
> 
> Could be, but it's also possible that something queued new work
> after the last flush?  Is the problem reproducible?

Well, we do requeue from the job itself, so we need to be careful with
how we handle the indexes here. The bug seemed to happen every time
qemu was killed under stress, but now I can't reproduce it anymore :(
Will try again later.
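
(To illustrate the pattern, here is a rough sketch with made-up names,
not the actual vhost code: the work function can queue itself again, so
a flush that only waits for the pending list to drain can race with
that requeue.)

/* Sketch only; handle_ring() and the exact names are hypothetical. */
static void handle_kick(struct vhost_work *work)
{
        struct vhost_virtqueue *vq =
                container_of(work, struct vhost_virtqueue, poll.work);

        if (!handle_ring(vq))
                /* requeue from within the job itself */
                vhost_work_queue(vq->dev, work);
}

/* A drain-style flush can see an empty list between the dequeue of
 * this job and its requeue above, and return while work is still
 * pending, leaving dev->work_list non-empty at cleanup time. */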

> > This patch switches to a simple atomic counter + srcu instead of
> > the custom locked queue + flush implementation.
> > 
> > This will slow down the setup ioctls, which should not matter:
> > it's a slow path anyway. We use the expedited flush to at least
> > make sure it has a sane time bound.
> > 
> > Works fine for me. I got reports that with many guests the work
> > lock is highly contended, and this patch should in theory fix that
> > as well, but I haven't tested this yet.
> 
> Hmmm... vhost_poll_flush() becomes synchronize_srcu_expedited().  Can
> you please explain how it works?  synchronize_srcu_expedited() is an
> extremely heavy operation involving scheduling the cpu_stop task on
> all cpus.  I'm not quite sure whether doing it from every flush is a
> good idea.  Is flush supposed to be a very rare operation?

It is rare: it typically happens on guest reboot. I guess I will
switch to regular synchronize_srcu().
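
Roughly, the flush side would then look like this (a sketch of the
idea, not the final patch; the worker_srcu field name is made up):

static void vhost_work_run(struct vhost_dev *dev, struct vhost_work *work)
{
        /* each job runs inside an SRCU read-side critical section */
        int idx = srcu_read_lock(&dev->worker_srcu);
        work->fn(work);
        srcu_read_unlock(&dev->worker_srcu, idx);
}

static void vhost_poll_flush(struct vhost_poll *poll)
{
        /* regular rather than expedited, per the above: once the
         * grace period completes, no job queued before the flush
         * can still be running */
        synchronize_srcu(&poll->dev->worker_srcu);
}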

> Having a custom implementation is fine too, but let's try to
> implement something generic if at all possible.
> 
> Thanks.

Sure. It does seem that avoiding the list lock would be pretty hard
in generic code, though.
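
(For the counter half of the scheme, the rough shape is: queueing bumps
a per-work sequence counter and wakes the worker, and the worker
compares it against a done counter instead of walking a locked list.
Again a sketch with hypothetical names, simplified to a single work
item and with stop/signal handling omitted:)

static void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
{
        atomic_inc(&work->queue_seq);   /* no shared list, no list lock */
        wake_up_process(dev->worker);
}

static int vhost_worker(void *data)
{
        struct vhost_dev *dev = data;
        struct vhost_work *work = &dev->work;
        int seq;

        for (;;) {
                /* set state before reading the counter so a concurrent
                 * wake_up_process() can't be lost */
                set_current_state(TASK_INTERRUPTIBLE);
                seq = atomic_read(&work->queue_seq);
                if (seq == work->done_seq) {
                        schedule();
                        continue;
                }
                __set_current_state(TASK_RUNNING);
                vhost_work_run(dev, work);      /* under SRCU, as above */
                work->done_seq = seq;
        }
        return 0;
}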

> -- 
> tejun