Message-ID: <CAJhGHyB4RwNekpKNQu_KsGTZCyz2EoZMt0V9+PF=p43EksD_6Q@mail.gmail.com>
Date: Thu, 28 Apr 2022 11:39:35 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>,
David Woodhouse <dwmw@...zon.co.uk>,
Mingwei Zhang <mizhang@...gle.com>,
Maxim Levitsky <mlevitsk@...hat.com>
Subject: Re: [PATCH v2 6/8] KVM: Fix multiple races in gfn=>pfn cache refresh
On Wed, Apr 27, 2022 at 7:16 PM Sean Christopherson <seanjc@...gle.com> wrote:
> @@ -159,10 +249,23 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
>
The refresh_in_progress code below behaves somewhat like a mutex, i.e. it
is roughly equivalent to taking:

+	mutex_lock(&gpc->refresh_in_progress); // before write_lock_irq(&gpc->lock);

Would using a real mutex fit the intention?
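
For illustration, a minimal sketch (mine, not from the patch) of what the
mutex-based serialization might look like, assuming refresh_in_progress
were changed from a bool to a struct mutex and initialized with
mutex_init() wherever the cache is first set up:

	/*
	 * Sketch only: gpc->refresh_in_progress is assumed to be a
	 * struct mutex here, not the bool used in the patch.
	 */
	int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm,
					 struct gfn_to_pfn_cache *gpc,
					 gpa_t gpa, unsigned long len)
	{
		int ret = 0;	/* set by the elided refresh logic */

		/*
		 * Fully serialize concurrent refreshes up front.  The
		 * mutex may sleep, so it must be taken before, not
		 * inside, the irq-disabled rwlock critical section.
		 */
		mutex_lock(&gpc->refresh_in_progress);

		write_lock_irq(&gpc->lock);

		/* ... the rest of the refresh logic, unchanged ... */

		write_unlock_irq(&gpc->lock);

		mutex_unlock(&gpc->refresh_in_progress);

		return ret;
	}

Taking the mutex before write_lock_irq() would keep the sleepable wait out
of the irq-disabled section, which is also why the while/cond_resched()
loop in the patch has to drop and retake the rwlock on every iteration.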
Thanks
Lai
> write_lock_irq(&gpc->lock);
>
> + /*
> + * If another task is refreshing the cache, wait for it to complete.
> + * There is no guarantee that concurrent refreshes will see the same
> + * gpa, memslots generation, etc..., so they must be fully serialized.
> + */
> + while (gpc->refresh_in_progress) {
> + write_unlock_irq(&gpc->lock);
> +
> + cond_resched();
> +
> + write_lock_irq(&gpc->lock);
> + }
> + gpc->refresh_in_progress = true;
> +
> old_pfn = gpc->pfn;
> old_khva = gpc->khva - offset_in_page(gpc->khva);
> old_uhva = gpc->uhva;
> - old_valid = gpc->valid;
>