Date:	Thu, 2 Dec 2010 15:56:56 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
	rusty@...tcorp.com.au, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] vhost test module

On Fri, Dec 03, 2010 at 01:18:18AM +0200, Michael S. Tsirkin wrote:
> On Thu, Dec 02, 2010 at 03:13:03PM -0800, Paul E. McKenney wrote:
> > On Thu, Dec 02, 2010 at 09:47:09PM +0200, Michael S. Tsirkin wrote:
> > > On Thu, Dec 02, 2010 at 11:26:16AM -0800, Paul E. McKenney wrote:
> > > > On Thu, Dec 02, 2010 at 09:11:30PM +0200, Michael S. Tsirkin wrote:
> > > > > On Thu, Dec 02, 2010 at 11:00:37AM -0800, Paul E. McKenney wrote:
> > > > > > On Mon, Nov 29, 2010 at 07:09:01PM +0200, Michael S. Tsirkin wrote:
> > > > > > > This adds a test module for vhost infrastructure.
> > > > > > > Intentionally not tied to kbuild to prevent people
> > > > > > > from installing and loading it accidentally.
> > > > > > > 
> > > > > > > Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
> > > > > > 
> > > > > > One question below.
> > > > > > 
> > > > > > > ---
> > > > > > > 
> > > > > > > diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> > > > > > > new file mode 100644
> > > > > > > index 0000000..099f302
> > > > > > > --- /dev/null
> > > > > > > +++ b/drivers/vhost/test.c
> > > > > > > @@ -0,0 +1,320 @@
> > > > > > > +/* Copyright (C) 2009 Red Hat, Inc.
> > > > > > > + * Author: Michael S. Tsirkin <mst@...hat.com>
> > > > > > > + *
> > > > > > > + * This work is licensed under the terms of the GNU GPL, version 2.
> > > > > > > + *
> > > > > > > + * test virtio server in host kernel.
> > > > > > > + */
> > > > > > > +
> > > > > > > +#include <linux/compat.h>
> > > > > > > +#include <linux/eventfd.h>
> > > > > > > +#include <linux/vhost.h>
> > > > > > > +#include <linux/miscdevice.h>
> > > > > > > +#include <linux/module.h>
> > > > > > > +#include <linux/mutex.h>
> > > > > > > +#include <linux/workqueue.h>
> > > > > > > +#include <linux/rcupdate.h>
> > > > > > > +#include <linux/file.h>
> > > > > > > +#include <linux/slab.h>
> > > > > > > +
> > > > > > > +#include "test.h"
> > > > > > > +#include "vhost.c"
> > > > > > > +
> > > > > > > +/* Max number of bytes transferred before requeueing the job.
> > > > > > > + * Using this limit prevents one virtqueue from starving others. */
> > > > > > > +#define VHOST_TEST_WEIGHT 0x80000
> > > > > > > +
> > > > > > > +enum {
> > > > > > > +	VHOST_TEST_VQ = 0,
> > > > > > > +	VHOST_TEST_VQ_MAX = 1,
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct vhost_test {
> > > > > > > +	struct vhost_dev dev;
> > > > > > > +	struct vhost_virtqueue vqs[VHOST_TEST_VQ_MAX];
> > > > > > > +};
> > > > > > > +
> > > > > > > +/* Expects to be always run from workqueue - which acts as
> > > > > > > + * read-side critical section for our kind of RCU. */
> > > > > > > +static void handle_vq(struct vhost_test *n)
> > > > > > > +{
> > > > > > > +	struct vhost_virtqueue *vq = &n->dev.vqs[VHOST_TEST_VQ];
> > > > > > > +	unsigned out, in;
> > > > > > > +	int head;
> > > > > > > +	size_t len, total_len = 0;
> > > > > > > +	void *private;
> > > > > > > +
> > > > > > > +	private = rcu_dereference_check(vq->private_data, 1);
> > > > > > 
> > > > > > Any chance of a check for running in a workqueue?  If I remember correctly,
> > > > > > the ->lockdep_map field in the work_struct structure allows you to create
> > > > > > the required lockdep expression.
> > > > > 
> > > > > We moved away from using the workqueue to a custom kernel thread
> > > > > implementation though.
> > > > 
> > > > OK, then could you please add a check for "current == custom_kernel_thread"
> > > > or some such?
> > > > 
> > > > 							Thanx, Paul
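
[A minimal sketch of the check suggested above, assuming the driver records
its custom worker thread in a field such as dev->worker -- a hypothetical
name, not something in the patch:]

```c
/* Hypothetical sketch: dev->worker would be set to the task_struct of
 * the custom vhost worker thread when it is created.  The unconditional
 * "1" in handle_vq() could then become a real lockdep condition that we
 * are running on that thread. */
private = rcu_dereference_check(vq->private_data,
				current == n->dev.worker);
```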
> > > 
> > > It's a bit tricky. The way we flush things out is by an analog of
> > > flush_work, so just checking that we run from a workqueue isn't
> > > enough: we need to check that we are running from one of the
> > > specific work structures that we are careful to flush.
> > > 
> > > I can do this by passing the work structure pointer on to relevant
> > > functions but I think this will add (very minor) overhead even when rcu
> > > checks are disabled. Does it matter? Thoughts?
> > 
> > Would it be possible to set up separate lockdep maps, in effect passing
> > the needed information via lockdep rather than via the function arguments?
> 
> Possibly, I don't know enough about this but will check.
> Any examples to look at?

I would suggest the workqueue example in include/linux/workqueue.h and
kernel/workqueue.c.  You will need a struct lockdep_map for each of the
specific work structures, sort of like the one in struct workqueue_struct.

You can then use lock_map_acquire() and lock_map_release() to flag when
your kernel thread is running and when it is not, and then you can pass
lock_is_held() to rcu_dereference_check().  Each of lock_map_acquire(),
lock_map_release(), and lock_is_held() takes a struct lockdep_map as its
sole argument.  The rcu_lock_map definition shows an example of a global
lockdep_map variable.  If you need to do runtime initialization of your
lockdep_map structure, you can use lockdep_init_map().
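
Putting the pieces together, something like the following sketch (the
identifiers test_work_map, run_work, and the vhost_work layout are
illustrative assumptions, not taken from the patch):

```c
#include <linux/lockdep.h>
#include <linux/rcupdate.h>

/* One lockdep_map per flushed work structure.  Statically initialized
 * here, in the style of rcu_lock_map; lockdep_init_map() would work
 * for runtime initialization instead. */
static struct lockdep_map test_work_map =
	STATIC_LOCKDEP_MAP_INIT("vhost_test_work", &test_work_map);

/* In the custom worker thread, bracket each invocation of the work
 * function so that lockdep knows "this specific work is running now". */
static void run_work(struct vhost_work *work)
{
	lock_map_acquire(&test_work_map);
	work->fn(work);
	lock_map_release(&test_work_map);
}

/* ...and in handle_vq(), the unconditional "1" becomes a real check:
 *
 *	private = rcu_dereference_check(vq->private_data,
 *					lock_is_held(&test_work_map));
 */
```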

Seem reasonable?

							Thanx, Paul