Message-ID: <alpine.LSU.2.11.2011170820030.1014@eggly.anvils>
Date: Tue, 17 Nov 2020 08:26:03 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Matthew Wilcox <willy@...radead.org>
cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Kara <jack@...e.cz>,
William Kucharski <william.kucharski@...cle.com>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org, hch@....de,
hannes@...xchg.org, yang.shi@...ux.alibaba.com,
dchinner@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 00/16] Overhaul multi-page lookups for THP
On Tue, 17 Nov 2020, Matthew Wilcox wrote:
> On Mon, Nov 16, 2020 at 02:34:34AM -0800, Hugh Dickins wrote:
> > Fix to [PATCH v4 15/16] mm/truncate,shmem: Handle truncates that split THPs.
> > One machine ran fine, swapping and building in ext4 on loop0 on huge tmpfs;
> > one machine got occasional pages of zeros in its .os; one machine couldn't
> > get started because of ext4_find_dest_de errors on the newly mkfs'ed fs.
> > The partial_end case was decided by PAGE_SIZE, when there might be a THP
> > there. The below patch has run well (for not very long), but I could
> > easily have got it slightly wrong, off-by-one or whatever; and I have
> > not looked into the similar code in mm/truncate.c, maybe that will need
> > a similar fix or maybe not.
>
> Thank you for the explanation in your later email! There is indeed an
> off-by-one, although in the safe direction.
>
> > --- 5103w/mm/shmem.c 2020-11-12 15:46:21.075254036 -0800
> > +++ 5103wh/mm/shmem.c 2020-11-16 01:09:35.431677308 -0800
> > @@ -874,7 +874,7 @@ static void shmem_undo_range(struct inod
> > long nr_swaps_freed = 0;
> > pgoff_t index;
> > int i;
> > - bool partial_end;
> > + bool same_page;
> >
> > if (lend == -1)
> > end = -1; /* unsigned, so actually very big */
> > @@ -907,16 +907,12 @@ static void shmem_undo_range(struct inod
> > index++;
> > }
> >
> > - partial_end = ((lend + 1) % PAGE_SIZE) > 0;
> > + same_page = (lstart >> PAGE_SHIFT) == end;
>
> 'end' is exclusive, so this is always false. Maybe something "obvious":
>
> same_page = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
>
> (lend is inclusive, so lend in 0-4095 are all on the same page)
My brain is not yet in gear this morning, so I haven't given this the
necessary thought; but I do have to question what you say there, and
throw it back to you for further thought -
the first lookup is shmem_getpage(inode, lstart >> PAGE_SHIFT, &page, SGP_READ);
the second is shmem_getpage(inode, end, &page, SGP_READ).
So same_page = (lstart >> PAGE_SHIFT) == end
had seemed right to me.
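
For concreteness, the boundary case where the two expressions disagree
(a throwaway userspace sketch, not the kernel code itself; PAGE_SHIFT
assumed to be 12, lstart/lend values picked purely for illustration):

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	int main(void)
	{
		unsigned long lstart = 0, lend = PAGE_SIZE - 1;	/* bytes 0..4095 */
		unsigned long end = (lend + 1) >> PAGE_SHIFT;	/* exclusive, so 1 */

		/* compare against the index the second lookup actually uses */
		printf("vs end:  %d\n", (lstart >> PAGE_SHIFT) == end);		/* 0 */
		/* compare the pages that lstart and lend land in */
		printf("vs lend: %d\n", (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT));	/* 1 */
		return 0;
	}

So when lend + 1 is page aligned the two tests part company: comparing
against end still does the lookup at index end (your "safe direction"),
while comparing against lend >> PAGE_SHIFT skips it.
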
>
> > page = NULL;
> > shmem_getpage(inode, lstart >> PAGE_SHIFT, &page, SGP_READ);
> > if (page) {
> > - bool same_page;
> > -
> > page = thp_head(page);
> > same_page = lend < page_offset(page) + thp_size(page);
> > - if (same_page)
> > - partial_end = false;
> > set_page_dirty(page);
> > if (!truncate_inode_partial_page(page, lstart, lend)) {
> > start = page->index + thp_nr_pages(page);
> > @@ -928,7 +924,7 @@ static void shmem_undo_range(struct inod
> > page = NULL;
> > }
> >
> > - if (partial_end)
> > + if (!same_page)
> > shmem_getpage(inode, end, &page, SGP_READ);
> > if (page) {
> > page = thp_head(page);
>
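
And going back to the pages of zeros and ext4_find_dest_de errors at the
top: the old partial_end test was pure PAGE_SIZE arithmetic, so it came
out false whenever the range ended exactly on a 4k boundary, and the
lookup at index end was skipped even though a THP might sit there.
Reduced to just that arithmetic (again only a userspace sketch, with
PAGE_SHIFT 12 and the lend value assumed for illustration):

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	int main(void)
	{
		/* range ends on a 4k boundary; a THP may still cover index end */
		unsigned long lend = (256UL << PAGE_SHIFT) - 1;
		unsigned long end = (lend + 1) >> PAGE_SHIFT;		/* 256 */
		bool partial_end = ((lend + 1) % PAGE_SIZE) > 0;	/* false */

		printf("end=%lu partial_end=%d\n", end, partial_end);
		/*
		 * With partial_end false the old code never did the
		 * shmem_getpage(inode, end, ...) lookup, though a THP whose
		 * head sits below end can still straddle lend and want the
		 * truncate_inode_partial_page() treatment.
		 */
		return 0;
	}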