Date:   Fri, 12 Mar 2021 16:26:36 -0800
From:   Sean Christopherson <seanjc@...gle.com>
To:     "wangyanan (Y)" <wangyanan55@...wei.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Ben Gardon <bgardon@...gle.com>,
        Andrew Jones <drjones@...hat.com>,
        Peter Xu <peterx@...hat.com>,
        Aaron Lewis <aaronlewis@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH 04/15] KVM: selftests: Force stronger HVA alignment (1gb)
 for hugepages

On Thu, Feb 25, 2021, wangyanan (Y) wrote:
> 
> On 2021/2/11 7:06, Sean Christopherson wrote:
> > Align the HVA for hugepage memslots to 1gb, as opposed to incorrectly
> > assuming all architectures' hugepages are 512*page_size.
> > 
> > For x86, multiplying by 512 is correct, but only for 2mb pages; systems
> > that support 1gb pages will never be able to use them for mapping guest
> > memory, and thus those flows will not be exercised.
> > 
> > For arm64, powerpc, and s390 (and mips?), hardcoding the multiplier to
> > 512 is either flat out wrong, or at best correct only in certain
> > configurations.
> > 
> > Hardcoding the _alignment_ to 1gb is a compromise between correctness and
> > simplicity.  Due to the myriad flavors of hugepages across architectures,
> > attempting to enumerate the exact hugepage size is difficult, and likely
> > requires probing the kernel.
> > 
> > But, there is no need for precision since a stronger alignment will not
> > prevent creating a smaller hugepage.  For all but the most extreme cases,
> > e.g. arm64's 16gb contiguous PMDs, aligning to 1gb is sufficient to allow
> > KVM to back the guest with hugepages.
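
(For reference, the "stronger alignment" boils down to something like the
sketch below.  align_hva_1g() is just an illustrative name for this reply,
not an existing selftests helper; the point is only that rounding up to 1gb
never prevents backing the range with a smaller hugepage.)

#include <stdint.h>

/* Illustrative only: round a host virtual address up to a 1gb boundary. */
#define SZ_1G   (1ULL << 30)

static inline uint64_t align_hva_1g(uint64_t hva)
{
        return (hva + SZ_1G - 1) & ~(SZ_1G - 1);
}
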
> I have implemented a helper, get_backing_src_pagesz(), to get the
> granularity of the different backing src types (anonymous/thp/hugetlb),
> which is suitable for different architectures.
> See:
> https://lore.kernel.org/lkml/20210225055940.18748-6-wangyanan55@huawei.com/
> If it looks fine to you, maybe we can use the accurate page sizes for
> GPA/HVA alignment. :)
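
(Something along these lines, I assume -- a rough sketch for this reply only,
not the implementation from your series; the enum and the sysfs/procfs probing
below are just placeholders for the idea of deriving the granularity from the
backing src type instead of hardcoding 512 * page_size.)

#include <stdio.h>
#include <unistd.h>

enum backing_src {
        SRC_ANONYMOUS,
        SRC_ANONYMOUS_THP,
        SRC_ANONYMOUS_HUGETLB,
};

static unsigned long backing_src_pagesz(enum backing_src src)
{
        unsigned long sz = getpagesize();
        FILE *f;

        switch (src) {
        case SRC_ANONYMOUS:
                break;
        case SRC_ANONYMOUS_THP:
                /* THP granularity, in bytes, as reported by the kernel. */
                f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");
                if (f) {
                        if (fscanf(f, "%lu", &sz) != 1)
                                sz = getpagesize();
                        fclose(f);
                }
                break;
        case SRC_ANONYMOUS_HUGETLB: {
                /* Default hugetlb size; "Hugepagesize:" in /proc/meminfo is in kB. */
                char line[128];
                unsigned long kb;

                f = fopen("/proc/meminfo", "r");
                if (f) {
                        while (fgets(line, sizeof(line), f)) {
                                if (sscanf(line, "Hugepagesize: %lu kB", &kb) == 1) {
                                        sz = kb << 10;
                                        break;
                                }
                        }
                        fclose(f);
                }
                break;
        }
        }
        return sz;
}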

Works for me.  I'll probably just wait until your series is queued to send v2.

Thanks again!
