Message-Id: <1FD8D6EF-B7E9-4200-931C-3CCD81605741@linux.dev>
Date:   Tue, 1 Nov 2022 17:29:49 +0800
From:   Muchun Song <muchun.song@...ux.dev>
To:     Catalin Marinas <catalin.marinas@....com>
Cc:     Anshuman Khandual <anshuman.khandual@....com>,
        Wupeng Ma <mawupeng1@...wei.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Muchun Song <songmuchun@...edance.com>,
        Michal Hocko <mhocko@...e.com>,
        Oscar Salvador <osalvador@...e.de>,
        Linux Memory Management List <linux-mm@...ck.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH -next 1/1] mm: hugetlb_vmemmap: Fix WARN_ON in
 vmemmap_remap_pte



> On Oct 28, 2022, at 23:53, Catalin Marinas <catalin.marinas@....com> wrote:
> 
> On Fri, Oct 28, 2022 at 10:45:09AM +0800, Muchun Song wrote:
>> On Oct 27, 2022, at 18:50, Catalin Marinas <catalin.marinas@....com> wrote:
>>> On Wed, Oct 26, 2022 at 02:06:00PM +0530, Anshuman Khandual wrote:
>>>> On 10/26/22 12:31, Muchun Song wrote:
>>>>>> On 10/25/22 12:06, Muchun Song wrote:
>>>>>>>> On Oct 25, 2022, at 09:42, Wupeng Ma <mawupeng1@...wei.com> wrote:
>>>>>>>> From: Ma Wupeng <mawupeng1@...wei.com>
>>>>>>>> 
>>>>>>>> Commit f41f2ed43ca5 ("mm: hugetlb: free the vmemmap pages associated with
>>>>>>>> each HugeTLB page") added vmemmap_remap_pte() to remap the tail pages as
>>>>>>>> read-only to catch illegal write operations to the tail pages.
>>>>>>>> 
>>>>>>>> However, this will lead to a WARN_ON on arm64 in __check_racy_pte_update()
>>>>>>> 
>>>>>>> Thanks for finding this issue.
>>>>>>> 
>>>>>>>> since this may lead to the dirty state being cleared. This check was
>>>>>>>> introduced by commit 2f4b829c625e ("arm64: Add support for hardware updates
>>>>>>>> of the access and dirty pte bits") and the initial check was as follows:
>>>>>>>> 
>>>>>>>> BUG_ON(pte_write(*ptep) && !pte_dirty(pte));
>>>>>>>> 
>>>>>>>> Since we do need to mark this pte as read-only to catch illegal write
>>>>>>>> operations to the tail pages, use set_pte() instead of set_pte_at() to
>>>>>>>> bypass this check.
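
A minimal sketch of the change described above (based on the upstream
vmemmap_remap_pte() of that time; the surrounding code may differ slightly):

	static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
				      struct vmemmap_remap_walk *walk)
	{
		/*
		 * Remap the tail page to the shared head page
		 * (walk->reuse_page) as read-only to catch illegal writes
		 * to the tail pages.
		 */
		pgprot_t pgprot = PAGE_KERNEL_RO;
		pte_t entry = mk_pte(walk->reuse_page, pgprot);
		struct page *page = pte_page(*pte);

		list_add_tail(&page->lru, walk->vmemmap_pages);

		/*
		 * set_pte() installs the entry directly; set_pte_at() would
		 * go through arm64's __check_racy_pte_update() and trigger
		 * the warning, since the old pte is writable while the new
		 * one is read-only and clean.
		 */
		set_pte(pte, entry);
	}
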
>>>>>>> 
>>>>>>> In theory, the warning does not affect anything since the tail vmemmap
>>>>>>> pages are supposed to be read-only. So, skipping this check for vmemmap
>>>>>> 
>>>>>> Tail vmemmap pages are supposed to be read-only in practice, but their
>>>>>> backing ptes do have pte_write() enabled. Otherwise the VM_WARN_ONCE()
>>>>>> warning would not have triggered.
>>>>> 
>>>>> Right.
>>>>> 
>>>>>> 
>>>>>>      VM_WARN_ONCE(pte_write(old_pte) && !pte_dirty(pte),
>>>>>>                   "%s: racy dirty state clearing: 0x%016llx -> 0x%016llx",
>>>>>>                   __func__, pte_val(old_pte), pte_val(pte));
>>>>>> 
>>>>>> Also, isn't it true that the pte is being remapped, as read-only, to a
>>>>>> different page than the one it had originally (which will be freed up),
>>>>>> i.e. the PFN in 'old_pte' and 'pte' will be different? Hence, is there still
>>>>> 
>>>>> Right.
>>>>> 
>>>>>> a possibility of a race condition even when the PFN changes?
>>>>> 
>>>>> Sorry, I didn't get this question. Did you mean the PTE is changed from
>>>>> the new one (pte) back to the old one (old_pte) by the hardware updating
>>>>> the dirty bit when there is a concurrent write to the tail vmemmap page?
>>>> 
>>>> No, but doesn't vmemmap_remap_pte() reuse walk->reuse_page for all remaining
>>>> tail pages? Isn't there a PFN change, along with an access permission change,
>>>> involved in this remapping process?
>>> 
>>> For the record, as we discussed offline, changing the output address
>>> (pfn) of a pte is not safe without break-before-make if at least one of
>>> the mappings was writeable. The caller (vmemmap_remap_pte()) would need
>>> to be fixed to first invalidate the pte and then write the new pte. I
>> 
>> Could you share more details about what issue this would cause? I am
>> not familiar with arm64.
> 
> Well, it's not allowed by the architecture, so some CPU implementations
> may do weird things, like accessing incorrect memory or triggering TLB
> conflict aborts, if for some reason they end up with two entries in
> the TLB for the same VA but pointing to different pfns. The hardware
> expects an invalid PTE and a TLB invalidation between such changes. In
> practice most likely nothing happens and this works fine, but we need to
> stick to the architecture requirements in case some CPU implementations
> take advantage of them.

Got it. Thanks for your nice explanation.
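
As a sketch of that break-before-make sequence for this path (assuming the
generic kernel helpers; the exact TLB flush primitive for the vmemmap range
may differ):

	/* 1. Break: invalidate the live entry first. */
	pte_clear(&init_mm, addr, pte);

	/* 2. Flush any stale TLB entry for this VA on all CPUs. */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/* 3. Make: install the new read-only mapping. */
	set_pte(pte, mk_pte(walk->reuse_page, PAGE_KERNEL_RO));

That way the hardware never observes two valid TLB entries for the same VA
with different pfns.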

> 
>>> assume no other CPU accesses this part of the vmemmap while the pte is
>>> being remapped.
>> 
>> However, there is no guarantee that no other CPU accesses this pte.
>> E.g. memory failure and memory compaction can both obtain the head page
>> from any tail struct page (read-only access) at any time.
> 
> Oh, so we cannot safely go through a break-before-make sequence here
> (zero the PTE, flush the TLB, write the new PTE) as some CPU may access
> this pte.

Right.
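
E.g. a concurrent reader only needs something like the following (the generic
compound_head() pattern, not code from this patch) to dereference a tail
struct page through the vmemmap:

	/*
	 * Memory failure / compaction style access: compound_head() reads
	 * the tail page's compound_head field through the vmemmap, so it
	 * would fault if the vmemmap pte were transiently invalid.
	 */
	struct page *head = compound_head(tail);

so there is no window in which the pte could safely be invalid.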

Muchun

> 
> -- 
> Catalin

