Message-ID: <You3Q/VFaCoS0mC8@google.com>
Date: Mon, 23 May 2022 09:33:07 -0700
From: Minchan Kim <minchan@...nel.org>
To: John Hubbard <jhubbard@...dia.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>,
"Paul E. McKenney" <paulmck@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
John Dias <joaodias@...gle.com>,
David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page
On Tue, May 17, 2022 at 01:12:02PM -0700, John Hubbard wrote:
> On 5/17/22 12:28, Jason Gunthorpe wrote:
> > > If you compare this to the snippet above, you'll see that there is
> > > an extra mov statement, and that one dereferences a pointer from
> > > %rax:
> > >
> > > mov (%rax),%rbx
> >
> > That is the same move as:
> >
> > mov 0x8(%rdx,%rax,8),%rbx
> >
> > Except that the EA calculation was done in advance and stored in rax.
> >
> > lea isn't a memory reference, it is just computing the pointer value
> > that 0x8(%rdx,%rax,8) represents, i.e. the lea computes
> >
> > %rax = %rdx + %rax*8 + 8
> >
> > Which is then fed into the mov. Maybe it is an optimization to allow
> > one pipe to do the shr and another to do the EA - IDK, it seems like a
> > random thing for the compiler to do.
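
To spell out the equivalence in C (a rough sketch; the names below are
made up for illustration, not the kernel code):

/* single-instruction form: mov 0x8(%rdx,%rax,8),%rbx */
unsigned long one_load(unsigned long *base, unsigned long idx)
{
        return base[idx + 1];                   /* the only memory access */
}

/* split form: lea does the pointer arithmetic, mov does the load */
unsigned long split_load(unsigned long *base, unsigned long idx)
{
        unsigned long *p = &base[idx + 1];      /* lea: no memory access */

        return *p;                              /* mov (%rax),%rbx */
}

Either way there is exactly one load from memory.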
>
> Apologies for getting that wrong, and thanks for walking me through the
> asm.
>
> [...]
> >
> > Paul can correct me, but I understand we do not have a list of allowed
> > operations that are exempted from the READ_ONCE() requirement, i.e. it
> > is not just conditional branching that requires READ_ONCE().
> >
> > This is why READ_ONCE() must always be on the memory load, because the
> > point is to sanitize away the uncertainty that comes with an unlocked
> > read of unstable memory contents. READ_ONCE() samples the value in
> > memory, and removes all the tearing, multi-load, etc. "instability" that
> > may affect downstream computations. In this way downstream computations
> > become reliable.
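
To make that concrete, a sketch of the failure mode (illustration only,
not the actual kernel code; MASK_A/MASK_B are made up):

static bool check_flags(unsigned long *bitmap, unsigned long word_bitidx)
{
        unsigned long word = bitmap[word_bitidx];       /* plain load */

        /*
         * Without READ_ONCE() the compiler may drop 'word' and re-read
         * bitmap[word_bitidx] for each test below, so the two tests can
         * observe two different values of the racing memory.
         */
        return (word & MASK_A) || (word & MASK_B);
}

whereas

        unsigned long word = READ_ONCE(bitmap[word_bitidx]);

samples the memory exactly once, and every downstream use sees that
single sample.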
> >
> > Jason
>
> So then:
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0e42038382c1..b404f87e2682 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -482,7 +482,12 @@ unsigned long __get_pfnblock_flags_mask(const struct page *page,
> word_bitidx = bitidx / BITS_PER_LONG;
> bitidx &= (BITS_PER_LONG-1);
>
> - word = bitmap[word_bitidx];
> + /*
> + * This races, without locks, with set_pageblock_migratetype(). Ensure
Would set_pfnblock_flags_mask() be better here?
> + * a consistent (non-tearing) read of the memory array, so that results,
Thanks for moving this forward, John, and for the suggestion.
IIUC, load tearing wouldn't be an issue since [1] fixed that.
The concern in our discussion was that an aggressive compiler (e.g., LTO), or
code refactoring that inlines this function in the *future*, could potentially
force a refetch (i.e., a re-read) of bitmap[word_bitidx].
If so, shouldn't the comment be the one you suggested earlier?
/*
 * Defend against future compiler LTO features, or code refactoring
 * that inlines the above function, by forcing a single read; re-reads
 * of bitmap[word_bitidx] after inlining could cause trouble for
 * callers who believe they are working with a stable local copy of
 * the value.
 */
[1] commit e58469bafd05 ("mm: page_alloc: use word-based accesses for
    get/set pageblock bitmaps")
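
The refetch scenario we worried about, roughly (a hypothetical inlined
caller, for illustration only):

        int mt = get_pageblock_migratetype(page);       /* inlined */

        /*
         * With a plain load inside the inlined body, the compiler may
         * drop the local 'mt' and re-read bitmap[word_bitidx] for each
         * comparison, so the two tests can see different migratetypes.
         */
        if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
                return false;

READ_ONCE() on the load forbids that re-read.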
> + * even though racy, are not corrupted.
> + */
> + word = READ_ONCE(bitmap[word_bitidx]);
> return (word >> bitidx) & mask;
> }
>
>
> thanks,
> --
> John Hubbard
> NVIDIA