Message-ID: <9a5474b6-aaac-4567-9405-351d6755f947@kernel.dk>
Date: Tue, 12 Nov 2024 07:09:02 -0700
From: Jens Axboe <axboe@...nel.dk>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: linux-mm@...ck.org, linux-fsdevel@...r.kernel.org, hannes@...xchg.org,
	clm@...a.com, linux-kernel@...r.kernel.org, willy@...radead.org,
	linux-btrfs@...r.kernel.org, linux-ext4@...r.kernel.org,
	linux-xfs@...r.kernel.org
Subject: Re: [PATCH 09/16] mm/filemap: drop uncached pages when writeback completes

On 11/12/24 2:31 AM, Kirill A. Shutemov wrote:
> On Mon, Nov 11, 2024 at 04:37:36PM -0700, Jens Axboe wrote:
>> If the folio is marked as uncached, drop pages when writeback completes.
>> Intended to be used with RWF_UNCACHED, to avoid needing sync writes for
>> uncached IO.
>>
>> Signed-off-by: Jens Axboe <axboe@...nel.dk>
>> ---
>>  mm/filemap.c | 28 ++++++++++++++++++++++++++++
>>  1 file changed, 28 insertions(+)
>>
>> diff --git a/mm/filemap.c b/mm/filemap.c
>> index 3d0614ea5f59..40debe742abe 100644
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
>> @@ -1600,6 +1600,27 @@ int folio_wait_private_2_killable(struct folio *folio)
>>  }
>>  EXPORT_SYMBOL(folio_wait_private_2_killable);
>>
>> +/*
>> + * If folio was marked as uncached, then pages should be dropped when writeback
>> + * completes. Do that now. If we fail, it's likely because of a big folio -
>> + * just reset uncached for that case and later completions should invalidate.
>> + */
>> +static void folio_end_uncached(struct folio *folio)
>> +{
>> +	/*
>> +	 * Hitting !in_task() should not happen off RWF_UNCACHED writeback, but
>> +	 * can happen if normal writeback just happens to find dirty folios
>> +	 * that were created as part of uncached writeback, and that writeback
>> +	 * would otherwise not need non-IRQ handling. Just skip the
>> +	 * invalidation in that case.
>> +	 */
>> +	if (in_task() && folio_trylock(folio)) {
>> +		if (folio->mapping)
>> +			folio_unmap_invalidate(folio->mapping, folio, 0);
>> +		folio_unlock(folio);
>> +	}
>> +}
>> +
>>  /**
>>   * folio_end_writeback - End writeback against a folio.
>>   * @folio: The folio.
>> @@ -1610,6 +1631,8 @@ EXPORT_SYMBOL(folio_wait_private_2_killable);
>>   */
>>  void folio_end_writeback(struct folio *folio)
>>  {
>> +	bool folio_uncached = false;
>> +
>>  	VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio);
>>
>>  	/*
>> @@ -1631,9 +1654,14 @@ void folio_end_writeback(struct folio *folio)
>>  	 * reused before the folio_wake_bit().
>>  	 */
>>  	folio_get(folio);
>> +	if (folio_test_uncached(folio) && folio_test_clear_uncached(folio))
>> +		folio_uncached = true;
>
> Hm? Maybe
>
> 	folio_uncached = folio_test_clear_uncached(folio);
>
> ?

It's done that way to avoid a RMW for the (for now, at least) common case
of not seeing uncached folios. For that case, you can get by with a cheap
test_bit; for the uncached case you pay the full price of the test_clear.
Previous versions just had the test_clear, happy to just go back or add a
comment, whatever is preferred.

-- 
Jens Axboe