Message-ID: <fbb269c0-e915-a9f2-da3b-5ae3a2b31396@gmail.com>
Date: Thu, 17 May 2018 20:46:50 +0800
From: Jia He <hejianet@...il.com>
To: Suzuki K Poulose <Suzuki.Poulose@....com>,
Christoffer Dall <christoffer.dall@....com>,
Marc Zyngier <marc.zyngier@....com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Claudio Imbrenda <imbrenda@...ux.vnet.ibm.com>,
Arvind Yadav <arvind.yadav.cs@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Minchan Kim <minchan@...nel.org>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Hugh Dickins <hughd@...gle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
jia.he@...-semitech.com
Subject: Re: [PATCH] KVM: arm/arm64: add WARN_ON if size is not PAGE_SIZE
aligned in unmap_stage2_range
Hi Suzuki,
On 5/17/2018 4:17 PM, Suzuki K Poulose wrote:
>
> Hi Jia,
>
> On 17/05/18 07:11, Jia He wrote:
>> I once hit a panic under memory pressure tests (starting 20 guests and
>> running memhog in the host).
>
> Please avoid using "I" in the commit description and preferably stick to
> an objective description.
Thanks for pointing that out.
>
>>
>> The root cause might be what I fixed in [1]. But from the arm KVM point of
>> view, it would be better to catch the exception earlier and more clearly.
>>
>> If the size is not PAGE_SIZE aligned, unmap_stage2_range might unmap the
>> wrong (larger or smaller) page range. Hence it caused the "BUG: Bad page
>> state" error.
>
> I don't see why we should ever panic with a "positive" size value. Anyway,
> the unmap requests must be in units of pages, so this check might be useful.
>
>
Good question.
After further digging, maybe we need to harden the loop's exit condition as below?
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 7f6a944..dac9b2e 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -217,7 +217,7 @@ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
 			put_page(virt_to_page(pte));
 		}
-	} while (pte++, addr += PAGE_SIZE, addr != end);
+	} while (pte++, addr += PAGE_SIZE, addr < end);
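
To illustrate the difference, a minimal user-space sketch (made-up
addresses, not kernel code): when end is not PAGE_SIZE aligned, addr
steps right over it, so "addr != end" never becomes true, while
"addr < end" stops at the first page boundary past end.

#include <stdio.h>

#define PAGE_SIZE 0x1000UL

int main(void)
{
	unsigned long start = 0x10000UL;
	unsigned long end = start + 0x2800UL;	/* 2.5 pages: NOT aligned */
	unsigned long addr;
	int pages = 0;

	/* "addr < end" terminates after the page containing end - 1. */
	for (addr = start; addr < end; addr += PAGE_SIZE)
		pages++;
	printf("addr < end  -> stopped after %d pages\n", pages);

	/* "addr != end" jumps from 0x12000 to 0x13000, skipping 0x12800,
	 * so the test never fires; cap the loop to make the overrun visible. */
	pages = 0;
	for (addr = start; addr != end && pages < 8; addr += PAGE_SIZE)
		pages++;
	printf("addr != end -> still running after %d pages\n", pages);
	return 0;
}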
Basically verified on my ARMv8-A server.
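
For reference, the WARN_ON(size & ~PAGE_MASK) in the patch below reduces
to a mask test: with PAGE_MASK = ~(PAGE_SIZE - 1), the expression is
non-zero exactly when size is not a whole number of pages. A small
user-space sketch with sample sizes (again not kernel code, 4K pages
assumed):

#include <stdio.h>

#define PAGE_SIZE 0x1000UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
	/* Two page-aligned sizes, two unaligned ones. */
	unsigned long sizes[] = { 0x1000UL, 0x2000UL, 0x2800UL, 0x10UL };
	int i;

	for (i = 0; i < 4; i++)
		printf("size=0x%05lx remainder=0x%lx -> %s\n",
		       sizes[i], sizes[i] & ~PAGE_MASK,
		       (sizes[i] & ~PAGE_MASK) ? "would WARN" : "ok");
	return 0;
}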
--
Cheers,
Jia
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@....com>
>
>>
>> [1] https://lkml.org/lkml/2018/5/3/1042
>>
>> Signed-off-by: jia.he@...-semitech.com
>> ---
>> virt/kvm/arm/mmu.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index 7f6a944..8dac311 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -297,6 +297,8 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
>>  	phys_addr_t next;
>>  
>>  	assert_spin_locked(&kvm->mmu_lock);
>> +	WARN_ON(size & ~PAGE_MASK);
>> +
>>  	pgd = kvm->arch.pgd + stage2_pgd_index(addr);
>>  	do {
>>  		/*
>>
>
>