Date: Mon, 24 Jun 2024 09:28:36 +0800
From: maobibo <maobibo@...ngson.cn>
To: Huacai Chen <chenhuacai@...nel.org>
Cc: Tianrui Zhao <zhaotianrui@...ngson.cn>, WANG Xuerui <kernel@...0n.name>,
 Sean Christopherson <seanjc@...gle.com>, kvm@...r.kernel.org,
 loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/6] LoongArch: KVM: Select huge page only if secondary
 mmu supports it



On 2024/6/23 3:55 PM, Huacai Chen wrote:
> Hi, Bibo,
> 
> On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@...ngson.cn> wrote:
>>
>> Currently, page level selection for the secondary mmu depends only on
>> the memory slot and the page level of the host mmu. This is a problem
>> if the page level of the secondary mmu is already zero. So page level
>> selection should depend on the following three conditions:
>>   1. The memslot is aligned for huge pages and the VM is not migrating.
>>   2. The page level of the host mmu is a huge page as well.
>>   3. The page level of the secondary mmu is suitable for a huge page;
>> it cannot be a normal page, since merging normal pages back into a
>> huge page is not supported yet.
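
For context, the selection logic described above can be modeled by a small
standalone sketch. The parameters stand in for the checks done by
fault_supports_huge_mapping(), host_pfn_mapping_level() and the
kvm_pte_huge() lookup in the patch below; this is an illustration only,
not the actual kernel code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-in for the page level decision, illustration only */
    static int select_level(bool slot_supports_huge, int host_level,
                            bool secondary_is_huge)
    {
            if (!slot_supports_huge)     /* 1. memslot aligned, VM not migrating */
                    return 0;

            if (host_level != 1)         /* 2. host mmu maps it as a huge page */
                    return 0;

            if (!secondary_is_huge)      /* 3. secondary mmu must not already hold */
                    return 0;            /*    a normal-page mapping (no merging)  */

            return 1;                    /* use huge page */
    }

    int main(void)
    {
            printf("%d\n", select_level(true, 1, true));   /* 1: all conditions met */
            printf("%d\n", select_level(true, 1, false));  /* 0: secondary mmu already 4K */
            return 0;
    }
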
>>
>> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
>> ---
>>   arch/loongarch/include/asm/kvm_mmu.h |  2 +-
>>   arch/loongarch/kvm/mmu.c             | 16 +++++++++++++---
>>   2 files changed, 14 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/loongarch/include/asm/kvm_mmu.h b/arch/loongarch/include/asm/kvm_mmu.h
>> index 099bafc6f797..d06ae0e0dde5 100644
>> --- a/arch/loongarch/include/asm/kvm_mmu.h
>> +++ b/arch/loongarch/include/asm/kvm_mmu.h
>> @@ -55,7 +55,7 @@ static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val)
>>   static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; }
>>   static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; }
>>   static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; }
>> -static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }
>> +static inline int kvm_pte_huge(kvm_pte_t pte)  { return !!(pte & _PAGE_HUGE); }
> Why do we need this change?
Later in the patch there is a usage like !kvm_pte_huge(*ptep):
       if (ptep && !kvm_pte_huge(*ptep))

I had thought the return value should be normalized to 0/1 since
!kvm_pte_huge() is used. However, testing shows the original form is
fine.
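
To illustrate, here is a minimal standalone snippet with made-up values
(not the kernel definitions; the real bit position of _PAGE_HUGE may
differ) showing that the logical negation already normalizes the result:

    #include <stdio.h>

    typedef unsigned long kvm_pte_t;
    #define _PAGE_HUGE (1UL << 6)        /* hypothetical bit, illustration only */

    /* Original form: returns the raw masked bit (0 or 64 here) */
    static int huge_raw(kvm_pte_t pte)  { return pte & _PAGE_HUGE; }
    /* Proposed form: normalized to 0/1 */
    static int huge_bool(kvm_pte_t pte) { return !!(pte & _PAGE_HUGE); }

    int main(void)
    {
            kvm_pte_t pte = _PAGE_HUGE;

            /* The raw form is non-zero but not 1 ... */
            printf("raw=%d bool=%d\n", huge_raw(pte), huge_bool(pte));
            /* ... yet after logical negation both collapse to the same 0/1,
             * so !kvm_pte_huge(*ptep) behaves identically either way. */
            printf("!raw=%d !bool=%d\n", !huge_raw(pte), !huge_bool(pte));
            return 0;
    }

(The !! normalization would only become essential if the flag bit sat
above bit 31, where the implicit conversion to int could truncate it to
zero; that does not appear to be the case here.)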

I will remove this modification.

Regards
Bibo Mao


> 
> Huacai
> 
>>
>>   static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte)
>>   {
>> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
>> index 9e39d28fec35..c6351d13ca1b 100644
>> --- a/arch/loongarch/kvm/mmu.c
>> +++ b/arch/loongarch/kvm/mmu.c
>> @@ -858,10 +858,20 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>>
>>          /* Disable dirty logging on HugePages */
>>          level = 0;
>> -       if (!fault_supports_huge_mapping(memslot, hva, write)) {
>> -               level = 0;
>> -       } else {
>> +       if (fault_supports_huge_mapping(memslot, hva, write)) {
>> +               /* Check page level of host mmu */
>>                  level = host_pfn_mapping_level(kvm, gfn, memslot);
>> +               if (level == 1) {
>> +                       /*
>> +                        * Check page level of secondary mmu.
>> +                        * Disable hugepage if it is already mapped
>> +                        * as a normal page on the secondary mmu.
>> +                        */
>> +                       ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
>> +                       if (ptep && !kvm_pte_huge(*ptep))
>> +                               level = 0;
>> +               }
>> +
>>                  if (level == 1) {
>>                          gfn = gfn & ~(PTRS_PER_PTE - 1);
>>                          pfn = pfn & ~(PTRS_PER_PTE - 1);
>> --
>> 2.39.3
>>

