Date:	Fri, 12 Nov 2010 16:20:35 +0000
From:	Steven Whitehouse <swhiteho@...hat.com>
To:	David Teigland <teigland@...hat.com>
Cc:	cluster-devel@...hat.com, linux-kernel@...r.kernel.org,
	Tejun Heo <tj@...nel.org>
Subject: Re: dlm: Use cmwq for send and receive workqueues

Hi,

On Fri, 2010-11-12 at 11:12 -0500, David Teigland wrote:
> On Fri, Nov 12, 2010 at 12:12:29PM +0000, Steven Whitehouse wrote:
> > 
> > So far as I can tell, there is no reason to use a single-threaded
> > send workqueue for dlm, since it may need to send to several sockets
> > concurrently. Both workqueues are set to WQ_MEM_RECLAIM to avoid
> > any possible deadlocks, WQ_HIGHPRI since locking traffic is highly
> > latency sensitive (and to avoid a priority inversion wrt GFS2's
> > glock_workqueue), and WQ_FREEZEABLE in case someone needs that in
> > the future (even though with the current cluster infrastructure it
> > doesn't make sense, as the node will most likely end up ejected
> > from the cluster).
> 
> Thanks, I'll want to do some testing with this, but my test machines do
> not seem to create more than one dlm_recv workqueue thread (prior to
> this patch).  Have you tested any cases where many threads end up being
> created?  While debugging some many-cpu machines I've noticed a huge
> number of dlm_recv threads, which is just excessive.  Does this patch
> address that?
> 
> 
Yes, one of the features of cmwq is that you end up with only as many
threads as required. When threads block, new ones are created to avoid
stalling the workqueue. Workqueues marked with WQ_MEM_RECLAIM also get
a single rescuer thread of their own; otherwise the threads are shared
with all other users of cmwq.
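
For illustration, here is a minimal sketch of how a workqueue with
these flags might be created and used (the names example_wq,
example_work and example_fn are hypothetical, not part of this patch):

	#include <linux/workqueue.h>

	static struct workqueue_struct *example_wq;

	static void example_fn(struct work_struct *work)
	{
		/* Runs on a shared cmwq worker thread; under memory
		 * pressure the WQ_MEM_RECLAIM rescuer keeps it going. */
	}

	static DECLARE_WORK(example_work, example_fn);

	static int example_init(void)
	{
		/* alloc_workqueue() returns NULL on failure, not ERR_PTR() */
		example_wq = alloc_workqueue("example",
					     WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
		if (!example_wq)
			return -ENOMEM;

		queue_work(example_wq, &example_work);
		return 0;
	}

Passing 0 as the max_active argument selects the default concurrency
limit, so the number of workers is bounded by cmwq rather than by the
caller.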

Steve.

> > Signed-off-by: Steven Whitehouse <swhiteho@...hat.com>
> > Cc: Tejun Heo <tj@...nel.org>
> > 
> > diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
> > index 37a34c2..0893b30 100644
> > --- a/fs/dlm/lowcomms.c
> > +++ b/fs/dlm/lowcomms.c
> > @@ -1431,14 +1431,16 @@ static void work_stop(void)
> >  static int work_start(void)
> >  {
> >  	int error;
> > -	recv_workqueue = create_workqueue("dlm_recv");
> > +	recv_workqueue = alloc_workqueue("dlm_recv", WQ_MEM_RECLAIM |
> > +					 WQ_HIGHPRI | WQ_FREEZEABLE, 0);
> >  	error = IS_ERR(recv_workqueue);
> >  	if (error) {
> >  		log_print("can't start dlm_recv %d", error);
> >  		return error;
> >  	}
> >  
> > -	send_workqueue = create_singlethread_workqueue("dlm_send");
> > +	send_workqueue = alloc_workqueue("dlm_send", WQ_MEM_RECLAIM |
> > +					 WQ_HIGHPRI | WQ_FREEZEABLE, 0);
> >  	error = IS_ERR(send_workqueue);
> >  	if (error) {
> >  		log_print("can't start dlm_send %d", error);
> > 
> > 

