Message-ID: <20100724191447.GA4972@redhat.com>
Date:	Sat, 24 Jul 2010 22:14:47 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Oleg Nesterov <oleg@...hat.com>,
	Sridhar Samudrala <sri@...ibm.com>,
	netdev <netdev@...r.kernel.org>,
	lkml <linux-kernel@...r.kernel.org>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Dmitri Vorobiev <dmitri.vorobiev@...ial.com>,
	Jiri Kosina <jkosina@...e.cz>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH UPDATED 1/3] vhost: replace vhost_workqueue with
 per-vhost kthread

On Thu, Jul 22, 2010 at 11:21:40PM +0200, Tejun Heo wrote:
> Hello,
> 
> On 07/22/2010 05:58 PM, Michael S. Tsirkin wrote:
> > All the tricky barrier pairing made me uncomfortable.  So I came up with
> > this on top (untested): if we do all operations under the spinlock, we
> > can get by without barriers and atomics.  And since we need the lock for
> > list operations anyway, this should have no performance impact.
> > 
> > What do you think?
> 
> I've created kthread_worker in wq#for-next tree and already converted
> ivtv to use it.  Once this lands in mainline, I think converting vhost
> to use it would be a better choice.  The kthread worker code uses basically
> the same logic used in the vhost_workqueue code but is better
> organized and documented.  So, I think it would be better to stick
> with the original implementation, as otherwise we're likely to just
> decrease test coverage without much gain.
> 
>   http://git.kernel.org/?p=linux/kernel/git/tj/wq.git;a=commitdiff;h=b56c0d8937e665a27d90517ee7a746d0aa05af46;hp=53c5f5ba42c194cb13dd3083ed425f2c5b1ec439

Sure, if we keep using a workqueue. But I'd like to investigate this
direction a bit more, because there has been discussion of switching from
kthreads to regular threads altogether.
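
(For reference, a minimal sketch of what sitting on top of that API could
look like; init_kthread_worker(), kthread_worker_fn(), queue_kthread_work()
and flush_kthread_work() are the helpers the commit above adds, while the
vhost-side structure and function names here are purely illustrative, not
the actual conversion:)

#include <linux/kthread.h>
#include <linux/err.h>

/* illustrative wrapper, not the real struct vhost_dev */
struct vhost_dev_sketch {
	struct kthread_worker	worker;
	struct task_struct	*worker_task;
};

static int vhost_start_worker(struct vhost_dev_sketch *dev)
{
	init_kthread_worker(&dev->worker);
	/* kthread_worker_fn() runs queued kthread_work items until stopped */
	dev->worker_task = kthread_run(kthread_worker_fn, &dev->worker,
				       "vhost-worker");
	if (IS_ERR(dev->worker_task))
		return PTR_ERR(dev->worker_task);
	return 0;
}

/* queueing and flushing map directly onto the generic helpers */
static void vhost_queue(struct vhost_dev_sketch *dev,
			struct kthread_work *work)
{
	queue_kthread_work(&dev->worker, work);
}

static void vhost_flush(struct kthread_work *work)
{
	flush_kthread_work(work);
}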

> > @@ -151,37 +161,37 @@ static void vhost_vq_reset(struct vhost_dev *dev,
> >  static int vhost_worker(void *data)
> >  {
> >  	struct vhost_dev *dev = data;
> > -	struct vhost_work *work;
> > +	struct vhost_work *work = NULL;
> >  
> > -repeat:
> > -	set_current_state(TASK_INTERRUPTIBLE);	/* mb paired w/ kthread_stop */
> > +	for (;;) {
> > +		set_current_state(TASK_INTERRUPTIBLE);	/* mb paired w/ kthread_stop */
> >  
> > -	if (kthread_should_stop()) {
> > -		__set_current_state(TASK_RUNNING);
> > -		return 0;
> > -	}
> > +		if (kthread_should_stop()) {
> > +			__set_current_state(TASK_RUNNING);
> > +			return 0;
> > +		}
> >  
> > -	work = NULL;
> > -	spin_lock_irq(&dev->work_lock);
> > -	if (!list_empty(&dev->work_list)) {
> > -		work = list_first_entry(&dev->work_list,
> > -					struct vhost_work, node);
> > -		list_del_init(&work->node);
> > -	}
> > -	spin_unlock_irq(&dev->work_lock);
> > +		spin_lock_irq(&dev->work_lock);
> > +		if (work) {
> > +			work->done_seq = work->queue_seq;
> > +			if (work->flushing)
> > +				wake_up_all(&work->done);
> 
> I don't think doing this before executing the function is correct,

Well, before I execute the function, work is NULL, so this is skipped.
Correct?

> so
> you'll have to release the lock, execute the function, regrab the lock
> and then do the flush processing.
> 
> Thanks.

It's done in the loop, so I thought we could reuse the locking that is
done anyway for processing the next work item.
Does that make sense?
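
(For clarity, a sketch of the loop shape being described, reconstructed from
the hunk quoted above and not the exact patch: the previous item's completion
is recorded at the top of the *next* iteration, under the same work_lock
acquisition that is needed anyway to dequeue the next item, so no extra
barriers or atomics are required. dev->work_lock, dev->work_list and the
vhost_work fields (queue_seq, done_seq, flushing, done, fn) are assumed to be
as in the patch.)

static int vhost_worker(void *data)
{
	struct vhost_dev *dev = data;
	struct vhost_work *work = NULL;

	for (;;) {
		/* mb paired w/ kthread_stop */
		set_current_state(TASK_INTERRUPTIBLE);

		if (kthread_should_stop()) {
			__set_current_state(TASK_RUNNING);
			return 0;
		}

		spin_lock_irq(&dev->work_lock);
		if (work) {
			/* previous iteration's work has run: publish its
			 * sequence number and wake any flushers */
			work->done_seq = work->queue_seq;
			if (work->flushing)
				wake_up_all(&work->done);
		}
		if (!list_empty(&dev->work_list)) {
			work = list_first_entry(&dev->work_list,
						struct vhost_work, node);
			list_del_init(&work->node);
		} else {
			work = NULL;
		}
		spin_unlock_irq(&dev->work_lock);

		if (work) {
			__set_current_state(TASK_RUNNING);
			work->fn(work);
		} else {
			schedule();
		}
	}
}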


> -- 
> tejun