Message-Id: <1294062595-30097-22-git-send-email-tj@kernel.org>
Date: Mon, 3 Jan 2011 14:49:44 +0100
From: Tejun Heo <tj@...nel.org>
To: linux-kernel@...r.kernel.org
Cc: Tejun Heo <tj@...nel.org>, Benjamin LaHaise <bcrl@...ck.org>,
linux-aio@...ck.org
Subject: [PATCH 21/32] fs/aio: aio_wq isn't used in memory reclaim path
aio_wq isn't used during memory reclaim. Convert it to alloc_workqueue()
without WQ_MEM_RECLAIM. It would be possible to use system_wq instead, but
given that the number of work items is determined from userland and each
work item may block, enforcing a strict concurrency limit is a good idea.
Signed-off-by: Tejun Heo <tj@...nel.org>
Cc: Benjamin LaHaise <bcrl@...ck.org>
Cc: linux-aio@...ck.org
---
Please feel free to take it into the subsystem tree or simply ack -
I'll route it through the wq tree.
Thanks.
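
For context only (not part of the patch): below is a minimal stand-alone
sketch of the pattern this converts to, i.e. alloc_workqueue() with
flags == 0 (no WQ_MEM_RECLAIM, so no rescuer thread is reserved) and
max_active == 1 so that at most one work item from the queue runs per CPU
at a time. The demo_* names are made up for illustration and don't exist
in fs/aio.c.

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_wq;

static void demo_fn(struct work_struct *work)
{
	/* may sleep; execution is subject to the workqueue's max_active limit */
}
static DECLARE_WORK(demo_work, demo_fn);

static int __init demo_init(void)
{
	/*
	 * flags == 0: no WQ_MEM_RECLAIM, fine for work that memory reclaim
	 * never depends on.
	 * max_active == 1: strict concurrency limit, unlike system_wq.
	 */
	demo_wq = alloc_workqueue("demo", 0, 1);
	if (!demo_wq)
		return -ENOMEM;

	queue_work(demo_wq, &demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_wq);	/* waits for pending work to finish */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

With WQ_MEM_RECLAIM the workqueue would additionally keep a rescuer thread
around to guarantee forward progress under memory pressure; aio_wq doesn't
need that guarantee, which is the point of this patch.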
fs/aio.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index 8c8f6c5..dc3fcbb 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -85,7 +85,7 @@ static int __init aio_setup(void)
 	kiocb_cachep = KMEM_CACHE(kiocb, SLAB_HWCACHE_ALIGN|SLAB_PANIC);
 	kioctx_cachep = KMEM_CACHE(kioctx,SLAB_HWCACHE_ALIGN|SLAB_PANIC);
 
-	aio_wq = create_workqueue("aio");
+	aio_wq = alloc_workqueue("aio", 0, 1);	/* used to limit concurrency */
 	abe_pool = mempool_create_kmalloc_pool(1, sizeof(struct aio_batch_entry));
 	BUG_ON(!abe_pool);
 
@@ -569,7 +569,7 @@ static int __aio_put_req(struct kioctx *ctx, struct kiocb *req)
 		spin_lock(&fput_lock);
 		list_add(&req->ki_list, &fput_head);
 		spin_unlock(&fput_lock);
-		queue_work(aio_wq, &fput_work);
+		schedule_work(&fput_work);
 	} else {
 		req->ki_filp = NULL;
 		really_put_req(ctx, req);
--
1.7.1