Message-ID: <CAHk-=wjmLgo7DQT7Cy5rAGd=+2OK5Lqa8BN9qJFW1NPRoDfx5A@mail.gmail.com>
Date: Mon, 28 Oct 2019 13:39:46 +0100
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Cc: Linux-MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH] mm/filemap: do not allocate cache pages beyond end of file at read
On Mon, Oct 28, 2019 at 10:59 AM Konstantin Khlebnikov
<khlebnikov@...dex-team.ru> wrote:
>
> The page cache can contain pages beyond the end of file, either during a
> write or if a read races with truncate. But generic_file_buffered_read()
> always allocates unneeded pages beyond EOF if somebody reads there, and
> one extra page at the end if the file size is page-aligned.
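To make the quoted failure mode concrete, here is a rough sketch with
made-up numbers (assuming PAGE_SHIFT == 12, so PAGE_SIZE == 4096):

	/* Illustration only: a two-page file, read() starting exactly at EOF */
	loff_t isize = 2 * PAGE_SIZE;			/* 8192, page-aligned */
	pgoff_t index = 8192 >> PAGE_SHIFT;		/* 2: first page of the read */
	pgoff_t end_index = (isize - 1) >> PAGE_SHIFT;	/* 1: last valid page */
	/*
	 * index > end_index, but the old code only compared *ppos against
	 * sb->s_maxbytes, so the no_cached_page path still allocates and
	 * inserts a cache page at index 2, beyond EOF, which can never
	 * contain file data.
	 */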
I wonder if we could just do something like this instead:
diff --git a/mm/filemap.c b/mm/filemap.c
index 85b7d087eb45..80b08433c93a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2013,7 +2013,7 @@ static ssize_t generic_file_buffered_read(
 	struct address_space *mapping = filp->f_mapping;
 	struct inode *inode = mapping->host;
 	struct file_ra_state *ra = &filp->f_ra;
-	loff_t *ppos = &iocb->ki_pos;
+	loff_t *ppos = &iocb->ki_pos, size;
 	pgoff_t index;
 	pgoff_t last_index;
 	pgoff_t prev_index;
@@ -2021,9 +2021,10 @@ static ssize_t generic_file_buffered_read(
 	unsigned int prev_offset;
 	int error = 0;
 
-	if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
+	size = i_size_read(inode);
+	if (unlikely(*ppos >= size))
 		return 0;
-	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
+	iov_iter_truncate(iter, size);
 
 	index = *ppos >> PAGE_SHIFT;
 	prev_index = ra->prev_pos >> PAGE_SHIFT;
and yes, we still need to re-check the inode size after we've read the
page cache page (since it might have changed during the IO), but the
above seems fairly benign and simple.
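For reference, that re-check is roughly what the existing page_ok path in
generic_file_buffered_read() already does (sketched from memory, not a
verbatim quote):

	/* Page is uptodate; re-check i_size in case of a racing truncate */
	isize = i_size_read(inode);
	end_index = (isize - 1) >> PAGE_SHIFT;
	if (unlikely(!isize || index > end_index)) {
		put_page(page);
		goto out;
	}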
Hmm?
Linus