Message-ID: <56D47F90.9050903@linaro.org>
Date: Mon, 29 Feb 2016 09:27:44 -0800
From: "Shi, Yang" <yang.shi@...aro.org>
To: Michal Hocko <mhocko@...nel.org>
Cc: tj@...nel.org, jack@...e.cz, axboe@...com, fengguang.wu@...el.com,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linaro-kernel@...ts.linaro.org
Subject: Re: [RFC PATCH] writeback: move list_lock down into the for loop
On 2/29/2016 7:06 AM, Michal Hocko wrote:
> On Fri 26-02-16 08:46:25, Yang Shi wrote:
>> The list_lock was moved outside the for loop by commit
>> e8dfc30582995ae12454cda517b17d6294175b07 ("writeback: elevate queue_io()
>> into wb_writeback()"). However, that commit log says "No behavior change",
>> so it should be safe to acquire the list_lock inside the for loop again,
>> as it was before.
>> Leave the tracepoints outside the critical section, since tracepoints
>> already run with preemption disabled.
>
> The patch says what it does but it completely misses the why part.
I was just wondering whether the finer-grained lock might yield a little
better performance, i.e. more opportunities for preemption and lower latency.
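
To illustrate the idea, here is a minimal userspace sketch (not the actual
wb_writeback() code; a pthread mutex stands in for wb->list_lock, and
process_one_item() is a made-up placeholder) contrasting one long critical
section around the whole loop with a short critical section per iteration:

#include <pthread.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static void process_one_item(int i)
{
	(void)i;		/* placeholder for the real per-item work */
}

/* Old pattern: lock held across the whole loop (one long critical section). */
static void writeback_coarse(int nr_items)
{
	int i;

	pthread_mutex_lock(&list_lock);
	for (i = 0; i < nr_items; i++)
		process_one_item(i);
	pthread_mutex_unlock(&list_lock);
}

/* New pattern: lock taken and released inside the loop body. */
static void writeback_fine(int nr_items)
{
	int i;

	for (i = 0; i < nr_items; i++) {
		pthread_mutex_lock(&list_lock);
		process_one_item(i);
		pthread_mutex_unlock(&list_lock);
		/*
		 * The lock is free here, so a contending thread (or, in
		 * the kernel case, preemption) can run between items.
		 */
	}
}

int main(void)
{
	writeback_coarse(8);
	writeback_fine(8);
	return 0;
}

The obvious trade-off is the extra lock/unlock traffic per loop iteration.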
Thanks,
Yang
>
>>
>> Signed-off-by: Yang Shi <yang.shi@...aro.org>
>> ---
>> Tested with LTP on an 8-core Cortex-A57 machine.
>>
>> fs/fs-writeback.c | 12 +++++++-----
>> 1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
>> index 1f76d89..9b7b5f6 100644
>> --- a/fs/fs-writeback.c
>> +++ b/fs/fs-writeback.c
>> @@ -1623,7 +1623,6 @@ static long wb_writeback(struct bdi_writeback *wb,
>> work->older_than_this = &oldest_jif;
>>
>> blk_start_plug(&plug);
>> - spin_lock(&wb->list_lock);
>> for (;;) {
>> /*
>> * Stop writeback when nr_pages has been consumed
>> @@ -1661,15 +1660,19 @@ static long wb_writeback(struct bdi_writeback *wb,
>> oldest_jif = jiffies;
>>
>> trace_writeback_start(wb, work);
>> +
>> + spin_lock(&wb->list_lock);
>> if (list_empty(&wb->b_io))
>> queue_io(wb, work);
>> if (work->sb)
>> progress = writeback_sb_inodes(work->sb, wb, work);
>> else
>> progress = __writeback_inodes_wb(wb, work);
>> - trace_writeback_written(wb, work);
>>
>> wb_update_bandwidth(wb, wb_start);
>> + spin_unlock(&wb->list_lock);
>> +
>> + trace_writeback_written(wb, work);
>>
>> /*
>> * Did we write something? Try for more
>> @@ -1693,15 +1696,14 @@ static long wb_writeback(struct bdi_writeback *wb,
>> */
>> if (!list_empty(&wb->b_more_io)) {
>> trace_writeback_wait(wb, work);
>> + spin_lock(&wb->list_lock);
>> inode = wb_inode(wb->b_more_io.prev);
>> - spin_lock(&inode->i_lock);
>> spin_unlock(&wb->list_lock);
>> + spin_lock(&inode->i_lock);
>> /* This function drops i_lock... */
>> inode_sleep_on_writeback(inode);
>> - spin_lock(&wb->list_lock);
>> }
>> }
>> - spin_unlock(&wb->list_lock);
>> blk_finish_plug(&plug);
>>
>> return nr_pages - work->nr_pages;
>> --
>> 2.0.2
>>
>