Message-ID: <mxh6husr25uw6u7wgp4p3stqcsxh6uek2hjktfwof3z6ayzdjr@4t4s3deim7dd>
Date: Tue, 12 Nov 2024 11:31:48 +0200
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Jens Axboe <axboe@...nel.dk>
Cc: linux-mm@...ck.org, linux-fsdevel@...r.kernel.org, hannes@...xchg.org,
	clm@...a.com, linux-kernel@...r.kernel.org, willy@...radead.org,
	linux-btrfs@...r.kernel.org, linux-ext4@...r.kernel.org,
	linux-xfs@...r.kernel.org
Subject: Re: [PATCH 09/16] mm/filemap: drop uncached pages when writeback completes

On Mon, Nov 11, 2024 at 04:37:36PM -0700, Jens Axboe wrote:
> If the folio is marked as uncached, drop pages when writeback completes.
> Intended to be used with RWF_UNCACHED, to avoid needing sync writes for
> uncached IO.
> 
> Signed-off-by: Jens Axboe <axboe@...nel.dk>
> ---
>  mm/filemap.c | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 3d0614ea5f59..40debe742abe 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1600,6 +1600,27 @@ int folio_wait_private_2_killable(struct folio *folio)
>  }
>  EXPORT_SYMBOL(folio_wait_private_2_killable);
>  
> +/*
> + * If folio was marked as uncached, then pages should be dropped when writeback
> + * completes. Do that now. If we fail, it's likely because of a big folio -
> + * just reset uncached for that case and later completions should invalidate.
> + */
> +static void folio_end_uncached(struct folio *folio)
> +{
> +	/*
> +	 * Hitting !in_task() should not happen off RWF_UNCACHED writeback, but
> +	 * can happen if normal writeback just happens to find dirty folios
> +	 * that were created as part of uncached writeback, and that writeback
> +	 * would otherwise not need non-IRQ handling. Just skip the
> +	 * invalidation in that case.
> +	 */
> +	if (in_task() && folio_trylock(folio)) {
> +		if (folio->mapping)
> +			folio_unmap_invalidate(folio->mapping, folio, 0);
> +		folio_unlock(folio);
> +	}
> +}
> +
>  /**
>   * folio_end_writeback - End writeback against a folio.
>   * @folio: The folio.
> @@ -1610,6 +1631,8 @@ EXPORT_SYMBOL(folio_wait_private_2_killable);
>   */
>  void folio_end_writeback(struct folio *folio)
>  {
> +	bool folio_uncached = false;
> +
>  	VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio);
>  
>  	/*
> @@ -1631,9 +1654,14 @@ void folio_end_writeback(struct folio *folio)
>  	 * reused before the folio_wake_bit().
>  	 */
>  	folio_get(folio);
> +	if (folio_test_uncached(folio) && folio_test_clear_uncached(folio))
> +		folio_uncached = true;

Hm? Maybe just

	folio_uncached = folio_test_clear_uncached(folio);

? The test-and-clear already returns whether the flag was set.

>  	if (__folio_end_writeback(folio))
>  		folio_wake_bit(folio, PG_writeback);
>  	acct_reclaim_writeback(folio);
> +
> +	if (folio_uncached)
> +		folio_end_uncached(folio);
>  	folio_put(folio);
>  }
>  EXPORT_SYMBOL(folio_end_writeback);
> -- 
> 2.45.2
> 

-- 
  Kiryl Shutsemau / Kirill A. Shutemov