Message-ID: <CAA9v8mFCbp6XTLvC=eY1+3rAQ51vPik2MoG1CBqEMnE_y_H0MA@mail.gmail.com>
Date: Fri, 26 Oct 2012 12:35:27 +0800
From: YingHang Zhu <casualfisher@...il.com>
To: Ni zhan Chen <nizhan.chen@...il.com>
Cc: akpm@...ux-foundation.org, Fengguang Wu <fengguang.wu@...el.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH] mm: readahead: remove redundant ra_pages in file_ra_state
On Fri, Oct 26, 2012 at 11:51 AM, Ni zhan Chen <nizhan.chen@...il.com> wrote:
> On 10/26/2012 11:28 AM, YingHang Zhu wrote:
>>
>> On Fri, Oct 26, 2012 at 10:30 AM, Ni zhan Chen <nizhan.chen@...il.com>
>> wrote:
>>>
>>> On 10/26/2012 09:27 AM, Fengguang Wu wrote:
>>>>
>>>> On Fri, Oct 26, 2012 at 11:25:44AM +1100, Dave Chinner wrote:
>>>>>
>>>>> On Thu, Oct 25, 2012 at 10:58:26AM +0800, Fengguang Wu wrote:
>>>>>>
>>>>>> Hi Chen,
>>>>>>
>>>>>>> But how can the bdi-wide ra_pages reflect different files' readahead
>>>>>>> windows? These files may be read sequentially, randomly, and so on.
>>>>>>
>>>>>> It's simple: sequential reads will get ra_pages readahead size while
>>>>>> random reads will not get readahead at all.
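>>>>>>
>>>>>> In code terms it amounts to roughly the following (an illustrative
>>>>>> sketch, not the literal mm/readahead.c source; sequential_pattern()
>>>>>> is a made-up helper standing in for the real pattern detection):
>>>>>>
>>>>>> 	if (sequential_pattern(ra, offset)) {
>>>>>> 		/* sequential: grow the window, capped at ra_pages */
>>>>>> 		ra->start = offset;
>>>>>> 		ra->size = get_next_ra_size(ra, ra->ra_pages);
>>>>>> 		ra_submit(ra, mapping, filp);
>>>>>> 	} else {
>>>>>> 		/* random: read exactly the requested pages and do not
>>>>>> 		 * touch the readahead state at all */
>>>>>> 		__do_page_cache_readahead(mapping, filp, offset,
>>>>>> 					  req_size, 0);
>>>>>> 	}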
>>>>>>
>>>>>> Talking about the chunk below, it might hurt someone who explicitly
>>>>>> takes advantage of the behavior; however, the ra_pages*2 seems more
>>>>>> like a hack than a general solution to me: if a user needs
>>>>>> POSIX_FADV_SEQUENTIAL to double the max readahead window size for
>>>>>> improving IO performance, then why not just increase bdi->ra_pages and
>>>>>> benefit all reads? One may argue that it offers some differential
>>>>>> behavior to specific applications; however, it may also act as a
>>>>>> counter-optimization: if root has already tuned bdi->ra_pages to the
>>>>>> optimal size, the doubled readahead size will only cost more memory
>>>>>> and perhaps IO latency.
>>>>>>
>>>>>> --- a/mm/fadvise.c
>>>>>> +++ b/mm/fadvise.c
>>>>>> @@ -87,7 +86,6 @@ SYSCALL_DEFINE(fadvise64_64)(int fd, loff_t offset, loff_t len, int advice)
>>>>>>  		spin_unlock(&file->f_lock);
>>>>>>  		break;
>>>>>>  	case POSIX_FADV_SEQUENTIAL:
>>>>>> -		file->f_ra.ra_pages = bdi->ra_pages * 2;
>>>>>
>>>>> I think we really have to reset file->f_ra.ra_pages here as it is
>>>>> not a set-and-forget value. e.g. shrink_readahead_size_eio() can
>>>>> reduce ra_pages as a result of IO errors. Hence if you have had IO
>>>>> errors, telling the kernel that you are now going to do sequential
>>>>> IO should reset the readahead to the maximum ra_pages value
>>>>> supported....
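>>>>>
>>>>> (For reference, shrink_readahead_size_eio() in mainline today is
>>>>> simply:
>>>>>
>>>>> static void shrink_readahead_size_eio(struct file *filp,
>>>>>                                       struct file_ra_state *ra)
>>>>> {
>>>>>         ra->ra_pages /= 4;
>>>>> }
>>>>>
>>>>> so repeated IO errors keep quartering the per-file window, and an
>>>>> explicit fadvise() is the natural way to get it back.)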
>>>>
>>>> Good point!
>>>>
>>>> .... but wait .... this patch removes file->f_ra.ra_pages in all other
>>>> places too, so there will be no file->f_ra.ra_pages to be reset here...
>>>
>>>
>>> In his patch,
>>>
>>>
>>> static void shrink_readahead_size_eio(struct file *filp,
>>>                                       struct file_ra_state *ra)
>>> {
>>> -       ra->ra_pages /= 4;
>>> +       spin_lock(&filp->f_lock);
>>> +       filp->f_mode |= FMODE_RANDOM;
>>> +       spin_unlock(&filp->f_lock);
>>>
>>> As the example in the comment above this function shows, the read may
>>> still be sequential, and it will waste IO bandwidth if we switch to
>>> FMODE_RANDOM directly.
>>
>> I've considered this. On my first try I modified file_ra_state.size
>> and file_ra_state.async_size directly, like
>>
>> file_ra_state.async_size = 0;
>> file_ra_state.size /= 4;
>>
>> but as I commented here, we cannot predict whether the bad sectors will
>> trash the readahead window; maybe the sectors following the current one
>> are fine for normal readahead. It's hard to know, and the current
>> approach gives us a chance to slow down gracefully.
>
>
> Then where does the kernel check the FMODE_RANDOM flag set here? Will it
> influence ra->ra_pages?
You can find the relevant information in the function page_cache_sync_readahead().
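For reference, the check in mm/readahead.c (quoting the current code
roughly from memory) looks like:

void page_cache_sync_readahead(struct address_space *mapping,
			       struct file_ra_state *ra, struct file *filp,
			       pgoff_t offset, unsigned long req_size)
{
	/* no read-ahead */
	if (!ra->ra_pages)
		return;

	/* be dumb */
	if (filp && (filp->f_mode & FMODE_RANDOM)) {
		force_page_cache_readahead(mapping, filp, offset, req_size);
		return;
	}

	/* do read-ahead */
	ondemand_readahead(mapping, ra, filp, false, offset, req_size);
}

So with FMODE_RANDOM set we fall back to force_page_cache_readahead(),
which reads just the range the caller asked for, and ra->ra_pages itself
is left untouched.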
Thanks,
Ying Zhu
>
>
>>
>> Thanks,
>> Ying Zhu
>>>>
>>>> Thanks,
>>>> Fengguang
>>>>
>