Message-Id: <20121220233545.95378d066b2ce61a76106a88@gmail.com>
Date: Thu, 20 Dec 2012 23:35:45 +0900
From: Takuya Yoshikawa <takuya.yoshikawa@...il.com>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: Gleb Natapov <gleb@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Takuya Yoshikawa <yoshikawa_takuya_b1@....ntt.co.jp>,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start
dirty logging
On Thu, 20 Dec 2012 06:41:27 -0700
Alex Williamson <alex.williamson@...hat.com> wrote:
> Hmm, isn't the fix as simple as:
>
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -847,7 +847,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  						GFP_KERNEL);
>  		if (!slots)
>  			goto out_free;
> -	}
> +	} else
> +		slots->generation = kvm->memslots->generation;
>  
>  	/* map new memory slot into the iommu */
>  	if (npages) {
>
> Or even just slots->generation++ since we're holding the lock across all
> of this.
Yes, the fix should work, but I do not want to update the
generation from outside of update_memslots().
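
For illustration, something like the following shape is what I have
in mind (just a sketch, not a real patch; the last_generation
parameter is only for illustration):

	/*
	 * Sketch only: every path that installs a new memslot array
	 * goes through update_memslots(), so no caller can forget the
	 * generation bump (as the slot-reuse path above did).
	 */
	static void update_memslots(struct kvm_memslots *slots,
				    struct kvm_memory_slot *new,
				    u64 last_generation)
	{
		if (new) {
			/* insert or replace the slot in the array (elided) */
		}

		/* Owned here: callers never touch slots->generation. */
		slots->generation = last_generation + 1;
	}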
> The original patch can be reverted, there are no following dependencies,
> but the idea was that we're making the memslot array larger, so there
> could be more pressure in allocating it, so let's not trivially do extra
> frees and allocs. Thanks,
I agree that the current set_memory_region() is not well suited to frequent
updates. But the alloc/free is not the dominant cost at the moment: flushing
shadow pages should be the bigger problem.
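
To put the comparison roughly (simplified sketch, not the actual
code paths):

	/*
	 * Cost of the alloc/free: one bounded allocation plus a
	 * memcpy(), independent of guest size.
	 */
	slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
			GFP_KERNEL);

	/*
	 * Cost of flushing shadows: on x86 this zaps every shadow
	 * page under mmu_lock, so the work and the lock hold time
	 * grow with the amount of mapped guest memory.
	 */
	kvm_arch_flush_shadow_all(kvm);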
Thanks,
Takuya