Message-ID: <5df787a0-8e69-2472-cdd6-f96a3f7dfaaf@arm.com>
Date:   Mon, 17 Jul 2023 14:36:03 +0100
From:   Ryan Roberts <ryan.roberts@....com>
To:     Yu Zhao <yuzhao@...gle.com>, Hugh Dickins <hughd@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Yin Fengwei <fengwei.yin@...el.com>,
        David Hildenbrand <david@...hat.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Yang Shi <shy828301@...il.com>,
        "Huang, Ying" <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
        Luis Chamberlain <mcgrof@...nel.org>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH v3 3/4] mm: FLEXIBLE_THP for improved performance

>>>> +static int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
>>>> +{
>>>> +       int i;
>>>> +       gfp_t gfp;
>>>> +       pte_t *pte;
>>>> +       unsigned long addr;
>>>> +       struct vm_area_struct *vma = vmf->vma;
>>>> +       int prefer = anon_folio_order(vma);
>>>> +       int orders[] = {
>>>> +               prefer,
>>>> +               prefer > PAGE_ALLOC_COSTLY_ORDER ? PAGE_ALLOC_COSTLY_ORDER : 0,
>>>> +               0,
>>>> +       };
>>>> +
>>>> +       *folio = NULL;
>>>> +
>>>> +       if (vmf_orig_pte_uffd_wp(vmf))
>>>> +               goto fallback;
>>>> +
>>>> +       for (i = 0; orders[i]; i++) {
>>>> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> +               if (addr >= vma->vm_start &&
>>>> +                   addr + (PAGE_SIZE << orders[i]) <= vma->vm_end)
>>>> +                       break;
>>>> +       }
>>>> +
>>>> +       if (!orders[i])
>>>> +               goto fallback;
>>>> +
>>>> +       pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
>>>> +       if (!pte)
>>>> +               return -EAGAIN;
>>>
>>> It would be a bug if this happens. So probably -EINVAL?
>>
>> Not sure what you mean? Hugh Dickins' series that went into v6.5-rc1 makes it
>> possible for pte_offset_map() to fail (if I understood correctly) and we have to
>> handle this. The intent is that we will return from the fault without making any
>> change, then we will refault and try again.
> 
> Thanks for checking that -- it's very relevant. One detail is that
> that series doesn't affect anon. IOW, collapsing PTEs into a PMD can't
> happen while we are holding mmap_lock for read here, and therefore,
> the race that could cause pte_offset_map() on shmem/file PTEs to fail
> doesn't apply here.

But Hugh's patches changed do_anonymous_page() to handle failure from
pte_offset_map_lock(), so I was just following that pattern. If this really
can't happen, then I'd rather WARN/BUG on it and simplify alloc_anon_folio()'s
prototype to just return a `struct folio *` (where NULL means ENOMEM).
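
Something like this, just to show the shape I mean (rough, untested sketch;
locals and the order-selection loops from the patch are elided):

static struct folio *alloc_anon_folio(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	pte_t *pte;

	/* ... order selection and VMA range checks as in the patch ... */

	pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
	if (!pte) {
		/* "can't happen" for anon, per Yu; warn instead of -EAGAIN */
		VM_WARN_ON_ONCE(1);
		goto fallback;
	}

	/* ... pte range check, large folio allocation, pte_unmap() as in the patch ... */

fallback:
	/* NULL tells the caller ENOMEM */
	return vma_alloc_zeroed_movable_folio(vma, vmf->address);
}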

Hugh, perhaps you can comment?

As an aside, it was my understanding from LWN that we are now using a per-VMA
lock, so presumably we don't hold mmap_lock for read here? Or perhaps that only
applies to file-backed memory?

> 
> +Hugh Dickins for further consultation if you need it.
> 
>>>> +
>>>> +       for (; orders[i]; i++) {
>>>> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> +               vmf->pte = pte + pte_index(addr);
>>>> +               if (!vmf_pte_range_changed(vmf, 1 << orders[i]))
>>>> +                       break;
>>>> +       }
>>>> +
>>>> +       vmf->pte = NULL;
>>>> +       pte_unmap(pte);
>>>> +
>>>> +       gfp = vma_thp_gfp_mask(vma);
>>>> +
>>>> +       for (; orders[i]; i++) {
>>>> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> +               *folio = vma_alloc_folio(gfp, orders[i], vma, addr, true);
>>>> +               if (*folio) {
>>>> +                       clear_huge_page(&(*folio)->page, addr, 1 << orders[i]);
>>>> +                       return 0;
>>>> +               }
>>>> +       }
>>>> +
>>>> +fallback:
>>>> +       *folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
>>>> +       return *folio ? 0 : -ENOMEM;
>>>> +}
>>>> +#else
>>>> +static inline int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
>>>
>>> Drop "inline" (it doesn't do anything in .c).
>>
>> There are 38 instances of inline in memory.c alone, so it looks like a well-used
>> convention, even if the compiler may choose to ignore it. Perhaps you can educate
>> me: what's the benefit of dropping it?
> 
> I'll let Willy and Andrew educate both of us :)
> 
> +Matthew Wilcox +Andrew Morton please. Thank you.
> 
>>> The rest looks good to me.
>>
>> Great - just in case it wasn't obvious, I decided not to overwrite vmf->address
>> with the aligned version, as you suggested,
> 
> Yes, I've noticed. Not overwriting has its own merits for sure.
> 
>> for 2 reasons: 1) address is const
>> in the struct, so I would have had to change that. 2) there is a uffd path that
>> can be taken after the vmf->address fixup would have occurred, and that path
>> consumes the member, so it would have had to be un-fixed-up, making it messier
>> than the way I opted for.
>>
>> Thanks for the quick review as always!
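
(For reference, a rough illustration of the point above, not the actual patch
code: keeping the aligned address in a local means the later uffd path still
sees the original faulting address through vmf. Names follow the existing
do_anonymous_page() flow.)

	unsigned long addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);

	folio = vma_alloc_folio(gfp, order, vma, addr, true);
	...
	if (userfaultfd_missing(vma)) {
		folio_put(folio);
		/* consumes the original, unmodified vmf->address */
		return handle_userfault(vmf, VM_UFFD_MISSING);
	}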
