Date:   Mon, 4 Mar 2019 17:34:10 +0000
From:   Marc Zyngier <marc.zyngier@....com>
To:     Suzuki K Poulose <Suzuki.Poulose@....com>,
        Zenghui Yu <zenghuiyu96@...il.com>
Cc:     christoffer.dall@....com, Punit Agrawal <punit.agrawal@....com>,
        julien.thierry@....com, LKML <linux-kernel@...r.kernel.org>,
        james.morse@....com, Zenghui Yu <yuzenghui@...wei.com>,
        wanghaibin.wang@...wei.com, kvmarm@...ts.cs.columbia.edu,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [RFC PATCH] KVM: arm64: Force a PTE mapping when logging is
 enabled

Hi Zenghui, Suzuki,

On 04/03/2019 17:13, Suzuki K Poulose wrote:
> Hi Zenghui,
> 
> On Sun, Mar 03, 2019 at 11:14:38PM +0800, Zenghui Yu wrote:
>> I think there are still some problems in this patch... Details below.
>>
>> On Sat, Mar 2, 2019 at 11:39 AM Zenghui Yu <yuzenghui@...wei.com> wrote:
>>>
>>> The idea behind this is: we don't want to keep track of huge pages when
>>> logging_active is true, as that would result in performance degradation.  We
>>> still need to set vma_pagesize to PAGE_SIZE, so that we can make use of it
>>> to force a PTE mapping.
> 
> Yes, you're right. We are indeed ignoring the force_pte flag.
> 
>>>
>>> Cc: Suzuki K Poulose <suzuki.poulose@....com>
>>> Cc: Punit Agrawal <punit.agrawal@....com>
>>> Signed-off-by: Zenghui Yu <yuzenghui@...wei.com>
>>>
>>> ---
>>> After looking into https://patchwork.codeaurora.org/patch/647985/ , the
>>> "vma_pagesize = PAGE_SIZE" logic was not intended to be deleted. As far
>>> as I can tell, we used to have "hugetlb" to force the PTE mapping, but
>>> we now have "vma_pagesize" instead. We should set it properly for
>>> performance reasons (e.g., in VM migration). Did I miss something important?
>>>
>>> ---
>>>  virt/kvm/arm/mmu.c | 7 +++++++
>>>  1 file changed, 7 insertions(+)
>>>
>>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>>> index 30251e2..7d41b16 100644
>>> --- a/virt/kvm/arm/mmu.c
>>> +++ b/virt/kvm/arm/mmu.c
>>> @@ -1705,6 +1705,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>              (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm))) &&
>>>             !force_pte) {
>>>                 gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
>>> +       } else {
>>> +               /*
>>> +                * Fallback to PTE if it's not one of the stage2
>>> +                * supported hugepage sizes or the corresponding level
>>> +                * doesn't exist, or logging is enabled.
>>
>> First, instead of "logging is enabled", it should be "force_pte is true",
>> since "force_pte" will be true when:
>>
>>         1) fault_supports_stage2_pmd_mappings() returns false; or
>>         2) "logging is enabled" (e.g., in VM migration).
>>
>> Second, falling back some unsupported hugepage sizes (e.g., a 64K hugepage
>> with 4K pages) to PTE is somewhat strange. It will then _unexpectedly_
>> reach transparent_hugepage_adjust(), though no real adjustment will happen
>> since commit fd2ef358282c ("KVM: arm/arm64: Ensure only THP is candidate
>> for adjustment"). Keeping "vma_pagesize" as it is would be better,
>> right?
>>
>> So I'd just simplify the logic like this:
> 
> We could fix this right in the beginning. See patch below:
> 
>>
>>         } else if (force_pte) {
>>                 vma_pagesize = PAGE_SIZE;
>>         }
>>
>>
>> I'll send a v2 later and wait for your comments :)
> 
> 
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 30251e2..529331e 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1693,7 +1693,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		return -EFAULT;
>  	}
>  
> -	vma_pagesize = vma_kernel_pagesize(vma);
> +	/* If we are forced to map at page granularity, force the pagesize here */
> +	vma_pagesize = force_pte ? PAGE_SIZE : vma_kernel_pagesize(vma);
> +
>  	/*
>  	 * The stage2 has a minimum of 2 level table (For arm64 see
>  	 * kvm_arm_setup_stage2()). Hence, we are guaranteed that we can
> @@ -1701,11 +1703,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 * As for PUD huge maps, we must make sure that we have at least
>  	 * 3 levels, i.e, PMD is not folded.
>  	 */
> -	if ((vma_pagesize == PMD_SIZE ||
> -	     (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm))) &&
> -	    !force_pte) {
> +	if (vma_pagesize == PMD_SIZE ||
> +	    (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm)))
>  		gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
> -	}
> +
>  	up_read(&current->mm->mmap_sem);
>  
>  	/* We need minimum second+third level pages */

That's pretty interesting, because this is almost what we already have
in the NV code:

https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/tree/virt/kvm/arm/mmu.c?h=kvm-arm64/nv-wip-v5.0-rc7#n1752

(note that force_pte is gone in that branch).
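
To spell out the combined effect of the two fixes discussed above, the
fault path would end up looking roughly like this (a sketch only, not the
exact code in any tree; the variable and helper names follow the thread,
and the argument list of fault_supports_stage2_pmd_mappings() is assumed):

	bool force_pte = false;

	/*
	 * PTE granularity is needed either when dirty logging is active
	 * (so writes can be tracked page by page) or when the memslot
	 * layout cannot support a stage2 block mapping.
	 */
	if (logging_active)
		force_pte = true;

	if (!fault_supports_stage2_pmd_mappings(memslot, hva))
		force_pte = true;

	/* If we are forced to map at page granularity, force the pagesize here */
	vma_pagesize = force_pte ? PAGE_SIZE : vma_kernel_pagesize(vma);

	/*
	 * With vma_pagesize clamped up front, the hugepage path no longer
	 * needs to check force_pte itself.
	 */
	if (vma_pagesize == PMD_SIZE ||
	    (vma_pagesize == PUD_SIZE && kvm_stage2_has_pmd(kvm)))
		gfn = (fault_ipa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;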

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
