Date:	Wed, 05 Jan 2011 09:50:57 -0500
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	linux-kernel@...r.kernel.org, Benjamin LaHaise <bcrl@...ck.org>,
	linux-aio@...ck.org
Subject: Re: [PATCH 21/32] fs/aio: aio_wq isn't used in memory reclaim path

Tejun Heo <tj@...nel.org> writes:

> Hello,
>
> On Tue, Jan 04, 2011 at 10:56:20AM -0500, Jeff Moyer wrote:
>> > aio_wq isn't used during memory reclaim.  Convert to alloc_workqueue()
>> > without WQ_MEM_RECLAIM.  It's possible to use system_wq but given that
>> > the number of work items is determined from userland and the work item
>> > may block, enforcing a strict concurrency limit would be a good idea.
>> 
>> I would think that the mere fact that it may block would be enough to
>> keep it off the system workqueue.
>
> Oh, workqueue can now handle parallel execution.  Blocking on the system
> workqueue is no longer a problem.  That's one of the main reasons for
> this whole series.

OK, thanks for clarifying for me.
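
(For my own notes, a minimal sketch of what that means in practice; the
function and names below are made up for illustration, not taken from the
patch.  Under the new workqueue code a blocking item on system_wq just gets
another worker running behind it, so other users of the shared queue aren't
held up.)

#include <linux/delay.h>
#include <linux/workqueue.h>

/* Illustrative only: a work item that sleeps while running on the
 * shared system workqueue.  With cmwq, another worker keeps servicing
 * the rest of the queue while this one is blocked. */
static void slow_work_fn(struct work_struct *work)
{
	msleep(100);
}

static DECLARE_WORK(slow_work, slow_work_fn);

static void kick_slow_work(void)
{
	schedule_work(&slow_work);	/* i.e. queue_work(system_wq, &slow_work) */
}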

>> > @@ -85,7 +85,7 @@ static int __init aio_setup(void)
>> >  	kiocb_cachep = KMEM_CACHE(kiocb, SLAB_HWCACHE_ALIGN|SLAB_PANIC);
>> >  	kioctx_cachep = KMEM_CACHE(kioctx,SLAB_HWCACHE_ALIGN|SLAB_PANIC);
>> >  
>> > -	aio_wq = create_workqueue("aio");
>> > +	aio_wq = alloc_workqueue("aio", 0, 1);	/* used to limit concurrency */
>> 
>> OK, the only difference here is the removal of the WQ_MEM_RECLAIM flag,
>> as you noted.
>
> Yeap.  Do you agree that the concurrency limit is necessary?  If not,
> we can just put everything onto system_wq.

I'm not sure whether it's strictly necessary (there may very well be a
need for this in the usb gadgetfs code), but keeping it the same at
least seems safe.
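
(To restate my understanding of what the max_active argument buys, in sketch
form with illustrative names that aren't from fs/aio.c: however many items
get queued, at most one from such a queue runs at a time per CPU, and the
rest simply wait their turn.)

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *limited_wq;	/* illustrative, not aio_wq itself */

static int setup_limited_wq(void)
{
	/* flags == 0, max_active == 1: at most one work item from this
	 * queue executes at a time (per CPU), no matter how many are
	 * queued from userland-driven paths. */
	limited_wq = alloc_workqueue("example-limited", 0, 1);
	return limited_wq ? 0 : -ENOMEM;
}

static void queue_a_batch(struct work_struct *items, int n)
{
	int i;

	for (i = 0; i < n; i++)
		queue_work(limited_wq, &items[i]);	/* excess items wait */
}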

>> > @@ -569,7 +569,7 @@ static int __aio_put_req(struct kioctx *ctx, struct kiocb *req)
>> >  		spin_lock(&fput_lock);
>> >  		list_add(&req->ki_list, &fput_head);
>> >  		spin_unlock(&fput_lock);
>> > -		queue_work(aio_wq, &fput_work);
>> > +		schedule_work(&fput_work);
>> 
>> I'm not sure where this change fits into the patch description.  Why did
>> you do this?
>
> Yeah, that's me being forgetful.  Now that aio_wq is solely used to
> throttle the max concurrency of aio work items, I thought it would be
> better to push fput_work to the system workqueue so that it doesn't
> interact with aio work items.  I'll update the patch description.

OK, thanks.
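
(Just to spell out the effect of that hunk as I read it: schedule_work() is
shorthand for queueing on the shared system_wq, so fput_work stops competing
with the aio work items for aio_wq's single max_active slot.  The extern
declarations below stand in for the statics in fs/aio.c and are only here to
make the sketch self-contained.)

#include <linux/workqueue.h>

extern struct workqueue_struct *aio_wq;	/* stand-in for the static in fs/aio.c */
extern struct work_struct fput_work;	/* likewise */

static void queue_fput_work(void)
{
	/* before the patch: shared aio_wq's max_active == 1 slot
	 *	queue_work(aio_wq, &fput_work);
	 * after the patch: goes to the shared system workqueue */
	schedule_work(&fput_work);	/* == queue_work(system_wq, &fput_work) */
}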

-Jeff
