Message-ID: <20240607145902.1137853-7-kernel@pankajraghav.com>
Date: Fri, 7 Jun 2024 14:58:57 +0000
From: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
To: david@...morbit.com,
djwong@...nel.org,
chandan.babu@...cle.com,
brauner@...nel.org,
akpm@...ux-foundation.org,
willy@...radead.org
Cc: mcgrof@...nel.org,
linux-mm@...ck.org,
hare@...e.de,
linux-kernel@...r.kernel.org,
yang@...amperecomputing.com,
Zi Yan <zi.yan@...t.com>,
linux-xfs@...r.kernel.org,
p.raghav@...sung.com,
linux-fsdevel@...r.kernel.org,
kernel@...kajraghav.com,
hch@....de,
gost.dev@...sung.com,
cl@...amperecomputing.com,
john.g.garry@...cle.com
Subject: [PATCH v7 06/11] filemap: cap PTE range to be created to allowed zero fill in filemap_map_pages()
From: Pankaj Raghav <p.raghav@...sung.com>
Usually, the page cache does not extend beyond the size of the inode;
therefore, no PTEs are created for folios that extend beyond the size.
But with LBS support, we might extend the page cache beyond the size of
the inode, as we need to guarantee folios of a minimum order. Cap the
PTE range created for the page cache to the maximum allowed zero-fill
file end, which is aligned to PAGE_SIZE.
An fstests test has been created to trigger this edge case [0].
[0] https://lore.kernel.org/fstests/20240415081054.1782715-1-mcgrof@kernel.org/
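For illustration, a small stand-alone sketch (not kernel code; the
PAGE_SIZE, inode size, and folio order below are made-up example values)
of the clamping arithmetic applied in filemap_map_pages():

#include <stdio.h>

#define PAGE_SIZE       4096UL
#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned long i_size = 5000;    /* example inode size in bytes */
        unsigned long end_pgoff = 3;    /* last page index of an order-2 (16K) folio */
        unsigned long file_end;

        /* last page index that still contains data within i_size */
        file_end = DIV_ROUND_UP(i_size, PAGE_SIZE) - 1;
        if (end_pgoff > file_end)
                end_pgoff = file_end;

        /* prints "end_pgoff = 1": page indices 2 and 3 get no PTEs */
        printf("end_pgoff = %lu\n", end_pgoff);
        return 0;
}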
Signed-off-by: Luis Chamberlain <mcgrof@...nel.org>
Reviewed-by: Hannes Reinecke <hare@...e.de>
Signed-off-by: Pankaj Raghav <p.raghav@...sung.com>
---
mm/filemap.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 8bb0d2bc93c5..0e48491b3d10 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3610,7 +3610,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	struct vm_area_struct *vma = vmf->vma;
 	struct file *file = vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
-	pgoff_t last_pgoff = start_pgoff;
+	pgoff_t file_end, last_pgoff = start_pgoff;
 	unsigned long addr;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
 	struct folio *folio;
@@ -3636,6 +3636,10 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		goto out;
 	}
 
+	file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
+	if (end_pgoff > file_end)
+		end_pgoff = file_end;
+
 	folio_type = mm_counter_file(folio);
 	do {
 		unsigned long end;
--
2.44.1