Message-ID: <0e448ae0-af4c-3f0a-2dd5-6ab86c0d60c0@arm.com>
Date: Thu, 7 May 2020 12:01:28 +0100
From: Suzuki K Poulose <suzuki.poulose@....com>
To: giangyi@...zon.com, maz@...nel.org
Cc: james.morse@....com, julien.thierry.kdev@...il.com,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: arm/arm64: release kvm->mmu_lock in loop to prevent
starvation
On 04/15/2020 09:42 AM, Jiang Yi wrote:
> Do cond_resched_lock() in stage2_flush_memslot() like what is done in
> unmap_stage2_range() and other places holding mmu_lock while processing
> a possibly large range of memory.
>
> Signed-off-by: Jiang Yi <giangyi@...zon.com>
> ---
> virt/kvm/arm/mmu.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index e3b9ee268823..7315af2c52f8 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -417,16 +417,19 @@ static void stage2_flush_memslot(struct kvm *kvm,
> phys_addr_t next;
> pgd_t *pgd;
>
> pgd = kvm->arch.pgd + stage2_pgd_index(kvm, addr);
> do {
> next = stage2_pgd_addr_end(kvm, addr, end);
> if (!stage2_pgd_none(kvm, *pgd))
> stage2_flush_puds(kvm, pgd, addr, next);
> +
> + if (next != end)
> + cond_resched_lock(&kvm->mmu_lock);
> } while (pgd++, addr = next, addr != end);
> }
Given that this is called with the srcu lock held, this looks
good to me:

Reviewed-by: Suzuki K Poulose <suzuki.poulose@....com>
>
> /**
> * stage2_flush_vm - Invalidate cache for pages mapped in stage 2
> * @kvm: The struct kvm pointer
> *
> * Go through the stage 2 page tables and invalidate any cache lines
>