Message-ID: <CAA9v8mFV13rey4O3MW4122k163+UgcSLCsp1CkrFVDf-0iWzVw@mail.gmail.com>
Date: Fri, 26 Oct 2012 13:00:43 +0800
From: YingHang Zhu <casualfisher@...il.com>
To: Fengguang Wu <fengguang.wu@...el.com>
Cc: akpm@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Dave Chinner <david@...morbit.com>,
Ni zhan Chen <nizhan.chen@...il.com>
Subject: Re: [PATCH] mm: readahead: remove redundant ra_pages in file_ra_state
On Fri, Oct 26, 2012 at 11:55 AM, Fengguang Wu <fengguang.wu@...el.com> wrote:
> On Fri, Oct 26, 2012 at 11:38:11AM +0800, YingHang Zhu wrote:
>> On Fri, Oct 26, 2012 at 8:25 AM, Dave Chinner <david@...morbit.com> wrote:
>> > On Thu, Oct 25, 2012 at 10:58:26AM +0800, Fengguang Wu wrote:
>> >> Hi Chen,
>> >>
>> >> > But how can the bdi-wide ra_pages reflect different files' readahead
>> >> > windows? These files may be read sequentially, randomly, and so on.
>> >>
>> >> It's simple: sequential reads will get ra_pages readahead size while
>> >> random reads will not get readahead at all.
>> >>
>> >> Talking about the chunk below, removing it might hurt someone who
>> >> explicitly takes advantage of the behavior; however, the ra_pages*2
>> >> looks more like a hack than a general solution to me: if the user
>> >> needs POSIX_FADV_SEQUENTIAL to double the max readahead window size
>> >> for better IO performance, why not just increase bdi->ra_pages and
>> >> benefit all reads? One may argue that it offers differential
>> >> behavior to specific applications, but it may also act as a
>> >> counter-optimization: if the root user has already tuned
>> >> bdi->ra_pages to the optimal size, the doubled readahead size will
>> >> only cost more memory and perhaps IO latency.
>> >>
>> >> --- a/mm/fadvise.c
>> >> +++ b/mm/fadvise.c
>> >> @@ -87,7 +86,6 @@ SYSCALL_DEFINE(fadvise64_64)(int fd, loff_t offset, loff_t len, int advice)
>> >> spin_unlock(&file->f_lock);
>> >> break;
>> >> case POSIX_FADV_SEQUENTIAL:
>> >> - file->f_ra.ra_pages = bdi->ra_pages * 2;
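>> >>
>> >> For illustration, the application side of that hint is just (a
>> >> minimal userspace sketch; the file name is made up):
>> >>
>> >>         #define _POSIX_C_SOURCE 200112L
>> >>         #include <fcntl.h>
>> >>
>> >>         int main(void)
>> >>         {
>> >>                 /* ask for aggressive readahead on this fd only */
>> >>                 int fd = open("data.bin", O_RDONLY);
>> >>                 if (fd >= 0)
>> >>                         posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
>> >>                 return 0;
>> >>         }
>> >>
>> >> whereas raising bdi->ra_pages (e.g. via the bdi's read_ahead_kb
>> >> sysfs knob) would speed up every sequential reader of the device.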
>> >
>> > I think we really have to reset file->f_ra.ra_pages here, as it is
>> > not a set-and-forget value; e.g. shrink_readahead_size_eio() can
>> > reduce ra_pages as a result of IO errors. Hence if you have had IO
>> > errors, telling the kernel that you are now going to do sequential
>> > IO should reset the readahead to the maximum supported ra_pages
>> > value....
>> If we unify file->f_ra.ra_pages with its backing bdi->ra_pages, then an
>> error-prone device's readahead can be tuned, or turned off, directly
>> with blockdev, affecting all files that use the device, without adding
>> more complexity...
>
> It's not really feasible/convenient for end users to hand-tune the
> blockdev readahead size on IO errors. Even many administrators are
> totally unaware of the readahead size parameter.
You are right, so the question becomes: will one file's read failure
affect other files? For rotating disks and optical discs, a read
failure is usually caused by bad sectors, which tend to be
consecutive, so it tells us little about reads of other files. For a
tape drive, however, a read failure usually indicates data corruption,
and reads of other files are likely to fail as well.
In other words, should we treat how many files failed to read, and
where they failed, as an indicator of the backing device's health, or
should we treat the files independently?
If we choose the former, we can accumulate the statistics and adjust
bdi->ra_pages accordingly; otherwise we could check FMODE_RANDOM
before shrinking the readahead window, roughly as sketched below.
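Something along these lines is what I have in mind (an untested
sketch; it assumes ra_pages has been unified into the bdi as the
patch proposes, so shrink_readahead_size_eio() would act on the bdi):

        /*
         * Sketch: a file opened for random access never uses the
         * shared readahead window, so an IO error on it should not
         * shrink the window for every other file on the device.
         */
        static void shrink_readahead_size_eio(struct file *filp,
                                              struct backing_dev_info *bdi)
        {
                if (filp->f_mode & FMODE_RANDOM)
                        return;
                bdi->ra_pages /= 4;
        }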
I may have missed something; please point it out.
Thanks,
Ying Zhu
>
> Thanks,
> Fengguang