Message-ID: <ZJzYJEVJlymbLxco@dread.disaster.area>
Date:   Thu, 29 Jun 2023 11:02:28 +1000
From:   Dave Chinner <david@...morbit.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Jan Kara <jack@...e.cz>,
        Christoph Hellwig <hch@....de>
Subject: Re: [PATCH] writeback: Account the number of pages written back

On Thu, Jun 29, 2023 at 01:01:59AM +0100, Matthew Wilcox wrote:
> On Thu, Jun 29, 2023 at 07:53:44AM +1000, Dave Chinner wrote:
> > On Wed, Jun 28, 2023 at 07:55:48PM +0100, Matthew Wilcox (Oracle) wrote:
> > > nr_to_write is a count of pages, so we need to decrease it by the number
> > > of pages in the folio we just wrote, not by 1.  Most callers specify
> > > either LONG_MAX or 1, so are unaffected, but writeback_sb_inodes()
> > > might end up writing 512x as many pages as it asked for.
> > > 
> > > Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
> > > Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> > > ---
> > >  mm/page-writeback.c | 8 +++++---
> > >  1 file changed, 5 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> > > index 1d17fb1ec863..d3f42009bb70 100644
> > > --- a/mm/page-writeback.c
> > > +++ b/mm/page-writeback.c
> > > @@ -2434,6 +2434,7 @@ int write_cache_pages(struct address_space *mapping,
> > >  
> > >  		for (i = 0; i < nr_folios; i++) {
> > >  			struct folio *folio = fbatch.folios[i];
> > > +			unsigned long nr;
> > >  
> > >  			done_index = folio->index;
> > >  
> > > @@ -2471,6 +2472,7 @@ int write_cache_pages(struct address_space *mapping,
> > >  
> > >  			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
> > >  			error = writepage(folio, wbc, data);
> > > +			nr = folio_nr_pages(folio);
> > 
> > This should really be done before writepage() is called, right? By
> > the time the writepage() returns, the folio can be unlocked, the
> > entire write completed and the folio partially invalidated which may
> > try to split the folio...
> > 
> > Even if this can't happen (folio refcount is elevated, right?), it
> > makes much more sense to me to sample the size of the folio while it
> > is obviously locked and not going to change...
> 
> It can't change because of the refcount we hold (that's put in the call
> to folio_batch_release()).  I didn't want to call it before the call to
> writepage() because that makes the compiler stick it on the stack instead
> of a local variable.

I don't care for micro-optimisations when the result is code that
looks dodgy and suspect, and that requires a lot of additional
thinking to determine whether it is safe.
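
For illustration, a minimal sketch of the ordering being argued for
here, based on the write_cache_pages() loop in the quoted hunks. The
quoted context doesn't show how nr feeds into the nr_to_write
accounting further down, so that part is an assumption:

	for (i = 0; i < nr_folios; i++) {
		struct folio *folio = fbatch.folios[i];
		unsigned long nr;

		done_index = folio->index;

		/* ... existing locked/dirty/writeback checks ... */

		trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
		/* sample the size while the folio is still locked */
		nr = folio_nr_pages(folio);
		error = writepage(folio, wbc, data);

		/* ... error handling as before ... */

		/* assumed: account for the whole folio, not a single page */
		wbc->nr_to_write -= nr;
		if (wbc->nr_to_write <= 0 &&
		    wbc->sync_mode == WB_SYNC_NONE) {
			done = 1;
			break;
		}
	}

Wherever the accounting ends up, the point is only that
folio_nr_pages() is read before writepage() has a chance to unlock
the folio.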

> Also, when we transform this into an iterator (see
> patches posted yesterday), we'd have to stash it away in the iterator.

That's no big deal, either.

-Dave.
-- 
Dave Chinner
david@...morbit.com
