Message-Id: <1253107494-20160-15-git-send-email-jens.axboe@oracle.com>
Date: Wed, 16 Sep 2009 15:24:52 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Cc: chris.mason@...cle.com, hch@...radead.org, tytso@....edu,
akpm@...ux-foundation.org, jack@...e.cz,
trond.myklebust@....uio.no, Nick Piggin <npiggin@...e.de>,
Jens Axboe <jens.axboe@...cle.com>
Subject: [PATCH 14/16] writeback: improve scalability of bdi writeback work queues

From: Nick Piggin <npiggin@...e.de>

If you're going to do an atomic RMW on each list entry, there's not much
point in all the RCU complexities of the list walking. This is only going
to help the multi-threaded case, I guess, but it doesn't hurt to do it now.

Signed-off-by: Nick Piggin <npiggin@...e.de>
Signed-off-by: Jens Axboe <jens.axboe@...cle.com>
---
 fs/fs-writeback.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

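Note (not part of the patch): the hunk below replaces one unconditional
atomic RMW per list entry with a plain read that only falls through to an
atomic clear when this thread's bit is actually set, so entries that were
never meant for this thread no longer dirty the shared cache line. Below is
a minimal userspace sketch of that pattern; it uses C11 atomics rather than
the kernel's test_bit()/clear_bit(), and the helper names and the per-consumer
bit index are invented for illustration (the index stands in for wb->nr, the
word for work->seen).

/* sketch.c - userspace illustration only; the kernel code operates on
 * work->seen with test_bit()/clear_bit(), not C11 atomics. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_ulong seen = 5;	/* example: bits 0 and 2 set */

/* Old pattern: unconditional atomic RMW, which dirties the cache line
 * even when the bit was never set for this consumer. */
static bool claim_atomic_rmw(unsigned int nr)
{
	unsigned long mask = 1UL << nr;

	return atomic_fetch_and(&seen, ~mask) & mask;
}

/* New pattern: plain atomic load first, atomic clear only when the bit
 * is set.  The split is only safe because each consumer owns exactly
 * one bit, as each bdi_writeback thread owns wb->nr in work->seen. */
static bool claim_test_then_clear(unsigned int nr)
{
	unsigned long mask = 1UL << nr;

	if (!(atomic_load(&seen) & mask))
		return false;		/* common miss: no RMW at all */
	atomic_fetch_and(&seen, ~mask);	/* the clear_bit() analogue */
	return true;
}

int main(void)
{
	printf("bit 1: %d  bit 2: %d  bit 0: %d\n",
	       (int)claim_test_then_clear(1),
	       (int)claim_test_then_clear(2),
	       (int)claim_atomic_rmw(0));
	return 0;
}
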
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 59c99e7..6bca6f8 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -772,8 +772,9 @@ static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(work, &bdi->work_list, list) {
-		if (!test_and_clear_bit(wb->nr, &work->seen))
+		if (!test_bit(wb->nr, &work->seen))
 			continue;
+		clear_bit(wb->nr, &work->seen);
 
 		ret = work;
 		break;
--
1.6.4.1.207.g68ea