Date:   Fri, 24 Jun 2022 12:01:08 -0700
From:   Mina Almasry <almasrymina@...gle.com>
To:     James Houghton <jthoughton@...gle.com>
Cc:     Mike Kravetz <mike.kravetz@...cle.com>,
        Muchun Song <songmuchun@...edance.com>,
        Peter Xu <peterx@...hat.com>,
        David Hildenbrand <david@...hat.com>,
        David Rientjes <rientjes@...gle.com>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Jue Wang <juew@...gle.com>,
        Manish Mishra <manish.mishra@...anix.com>,
        "Dr . David Alan Gilbert" <dgilbert@...hat.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 03/26] hugetlb: add make_huge_pte_with_shift

On Fri, Jun 24, 2022 at 10:37 AM James Houghton <jthoughton@...gle.com> wrote:
>
> This allows us to make huge PTEs at shifts other than the hstate shift,
> which will be necessary for high-granularity mappings.
>

Can you elaborate on why? I.e., why do high-granularity mappings need
huge PTEs at shifts other than the hstate shift?

> Signed-off-by: James Houghton <jthoughton@...gle.com>
> ---
>  mm/hugetlb.c | 33 ++++++++++++++++++++-------------
>  1 file changed, 20 insertions(+), 13 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 5df838d86f32..0eec34edf3b2 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4686,23 +4686,30 @@ const struct vm_operations_struct hugetlb_vm_ops = {
>         .pagesize = hugetlb_vm_op_pagesize,
>  };
>
> +static pte_t make_huge_pte_with_shift(struct vm_area_struct *vma,
> +                                     struct page *page, int writable,
> +                                     int shift)
> +{
> +       bool huge = shift > PAGE_SHIFT;
> +       pte_t entry = huge ? mk_huge_pte(page, vma->vm_page_prot)
> +                          : mk_pte(page, vma->vm_page_prot);
> +
> +       if (writable)
> +               entry = huge ? huge_pte_mkwrite(entry) : pte_mkwrite(entry);
> +       else
> +               entry = huge ? huge_pte_wrprotect(entry) : pte_wrprotect(entry);
> +       entry = pte_mkyoung(entry);
> +       if (huge)
> +               entry = arch_make_huge_pte(entry, shift, vma->vm_flags);
> +       return entry;
> +}
> +
>  static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
> -                               int writable)
> +                          int writable)

This hunk only re-indents the writable parameter - looks like an
unnecessary diff?

>  {
> -       pte_t entry;
>         unsigned int shift = huge_page_shift(hstate_vma(vma));
>
> -       if (writable) {
> -               entry = huge_pte_mkwrite(huge_pte_mkdirty(mk_huge_pte(page,
> -                                        vma->vm_page_prot)));

Here the original code makes an intermediate call to huge_pte_mkdirty()
that make_huge_pte_with_shift() does not. Why was that dropped?
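
If the dirty handling is supposed to stay, something along these lines in
the new helper would keep it (just a sketch, untested):

  /* keep the huge_pte_mkdirty() step from the old make_huge_pte() */
  if (writable)
          entry = huge ? huge_pte_mkwrite(huge_pte_mkdirty(entry))
                       : pte_mkwrite(pte_mkdirty(entry));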

> -       } else {
> -               entry = huge_pte_wrprotect(mk_huge_pte(page,
> -                                          vma->vm_page_prot));
> -       }
> -       entry = pte_mkyoung(entry);
> -       entry = arch_make_huge_pte(entry, shift, vma->vm_flags);
> -
> -       return entry;
> +       return make_huge_pte_with_shift(vma, page, writable, shift);

I think it is marginally cleaner to calculate the shift inline:

  return make_huge_pte_with_shift(vma, page, writable,
                                  huge_page_shift(hstate_vma(vma)));
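
That would also let the local shift variable in make_huge_pte() go away
entirely.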

>  }
>
>  static void set_huge_ptep_writable(struct vm_area_struct *vma,
> --
> 2.37.0.rc0.161.g10f37bed90-goog
>
