Message-ID: <20230418061933.3282785-1-stevensd@google.com>
Date: Tue, 18 Apr 2023 15:19:33 +0900
From: David Stevens <stevensd@...omium.org>
To: linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Suleiman Souhlal <suleiman@...gle.com>,
linux-kernel@...r.kernel.org,
David Stevens <stevensd@...omium.org>, stable@...r.kernel.org
Subject: [PATCH] mm/shmem: Fix race in shmem_undo_range w/THP
From: David Stevens <stevensd@...omium.org>
Split folios during the second loop of shmem_undo_range. It's not
sufficient to split folios only when dealing with partial pages, since
a THP can be faulted in after that point. Calling truncate_inode_folio
on such a folio can throw away data outside of the range being
truncated.
Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
Cc: stable@...r.kernel.org
Signed-off-by: David Stevens <stevensd@...omium.org>
---
mm/shmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 9218c955f482..317cbeb0fb6b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1033,7 +1033,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
}
VM_BUG_ON_FOLIO(folio_test_writeback(folio),
folio);
- truncate_inode_folio(mapping, folio);
+ truncate_inode_partial_folio(folio, lstart, lend);
}
folio_unlock(folio);
}
--
2.40.0.634.g4ca3ef3211-goog