Date:   Tue, 17 May 2022 11:12:37 -0700
From:   John Hubbard <jhubbard@...dia.com>
To:     Jason Gunthorpe <jgg@...pe.ca>, Minchan Kim <minchan@...nel.org>
Cc:     "Paul E. McKenney" <paulmck@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        John Dias <joaodias@...gle.com>,
        David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page

On 5/17/22 07:00, Jason Gunthorpe wrote:
>>> It does change the generated code slightly. I don't know if this will
>>> affect performance here or not. But just for completeness, here you go:
>>>
>>> free_one_page() originally has this (just showing the changed parts):
>>>
>>>      mov    0x8(%rdx,%rax,8),%rbx
>>>      and    $0x3f,%ecx
>>>      shr    %cl,%rbx
>>>      and    $0x7,%ebx
>>>
>>>
>>> And after applying this diff:
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 0e42038382c1..df1f8e9a294f 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -482,7 +482,7 @@ unsigned long __get_pfnblock_flags_mask(const struct page *page,
>>>          word_bitidx = bitidx / BITS_PER_LONG;
>>>          bitidx &= (BITS_PER_LONG-1);
>>>
>>> -       word = bitmap[word_bitidx];
>>> +       word = READ_ONCE(bitmap[word_bitidx]);
>>>          return (word >> bitidx) & mask;
>>>   }
>>>
>>>
>>> ...it now does an extra memory dereference:
>>>
>>>      lea    0x8(%rdx,%rax,8),%rax
>>>      and    $0x3f,%ecx
>>>      mov    (%rax),%rbx
>>>      shr    %cl,%rbx
>>>      and    $0x7,%ebx
> 
> Where is the extra memory reference? 'lea' is not a memory reference,
> it is just some maths?

If you compare this to the snippet above, you'll see that there is
an extra mov instruction, and that one dereferences a pointer from
%rax:

     mov    (%rax),%rbx
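
For anyone following along, a simplified model of what READ_ONCE()
boils down to for a word-sized scalar is a volatile cast (the real
definition lives in include/asm-generic/rwonce.h and handles more
cases, so this is just a sketch):

     /* Simplified sketch, not the kernel's actual macro: */
     #define READ_ONCE_SKETCH(x) (*(const volatile typeof(x) *)&(x))

The volatile access guarantees that exactly one load is emitted for
bitmap[word_bitidx]; in this build the compiler happens to emit it as
the separate lea + mov pair shown above, rather than folding the
address computation into a single mov.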

> 
>> Thanks for checking, John.
>>
>> I don't want to add the READ_ONCE to __get_pfnblock_flags_mask
>> at the moment, even though the cost is just an extra memory
>> dereference on a specific architecture with a specific compiler,
>> unless other callsites *do* need it.
> 
> If a callpath can be called both under locking and not under locking,
> then I would expect two call chains, each clearly marked with its
> locking conditions, ie __get_pfn_block_flags_mask_unlocked() - and

__get_pfn_block_flags_mask_unlocked() would definitely clarify things,
and allow some clear documentation. Good idea.

I haven't checked whether some code could keep using the normal
__get_pfn_block_flags_mask(), but if it could, that would help keep
the fast path fast.
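
For concreteness, a rough sketch of what that split might look like
(the name is Jason's suggestion above; the body just mirrors the
existing helper, so treat this as illustrative rather than a tested
patch):

     /*
      * Variant for callers that cannot rely on locking to keep the
      * pageblock bitmap word stable; tolerates a racing writer.
      */
     static __always_inline unsigned long
     __get_pfnblock_flags_mask_unlocked(const struct page *page,
                                        unsigned long pfn,
                                        unsigned long mask)
     {
             unsigned long *bitmap = get_pageblock_bitmap(page, pfn);
             unsigned long bitidx = pfn_to_bitidx(page, pfn);
             unsigned long word;

             word = READ_ONCE(bitmap[bitidx / BITS_PER_LONG]);
             return (word >> (bitidx & (BITS_PER_LONG - 1))) & mask;
     }

The locked variant could then keep the plain load, with a comment (or
a lockdep assertion) spelling out what protects the bitmap.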

> obviously clearly document and check what the locking requirements are
> of the locked path.
> 
> IMHO putting a READ_ONCE on something that is not a memory load from
> shared data is nonsense - if a simple == has a stability risk then so
> does the '(word >> bitidx) & mask'.
> 
> Jason

Doing something like this:

     int __x = y();
     int x = READ_ONCE(__x);

is just awful! I agree. Really, y() should handle any barriers itself,
because otherwise the READ_ONCE() really does look pointless, and
people reading the code need something clearer. My first reaction was
that this was pointless and wrong, and it turns out that's only about
80% true: it stays pointless as long as the LTO-of-the-future doesn't
arrive, and as long as no one refactors y() to be inline.
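
To spell out the two cases (shared_flag and y() here are made up for
illustration; READ_ONCE() is the real kernel macro):

     extern int shared_flag;         /* some genuinely shared location */

     static int y(void)
     {
             return shared_flag;     /* plain load; once y() is inlined,
                                      * the compiler can fold or repeat it */
     }

     int example(void)
     {
             int __x = y();
             int x = READ_ONCE(__x); /* pointless today: __x is a
                                      * private local */
             int z = READ_ONCE(shared_flag); /* meaningful: guarantees
                                              * exactly one load from the
                                              * shared location */
             return x + z;
     }

The READ_ONCE(__x) form only starts to matter once inlining (or that
future LTO) lets the compiler see through y(); at that point it pins
x to a single load rather than letting the compiler re-read
shared_flag, which is the 80% caveat above.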


thanks,
-- 
John Hubbard
NVIDIA
