Message-Id: <20121220140232.67733085.yoshikawa_takuya_b1@lab.ntt.co.jp>
Date: Thu, 20 Dec 2012 14:02:32 +0900
From: Takuya Yoshikawa <yoshikawa_takuya_b1@....ntt.co.jp>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: Takuya Yoshikawa <takuya.yoshikawa@...il.com>, mtosatti@...hat.com,
gleb@...hat.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/7] KVM: Alleviate mmu_lock hold time when we start
dirty logging
On Wed, 19 Dec 2012 08:42:57 -0700
Alex Williamson <alex.williamson@...hat.com> wrote:
> Please let me know if you can identify one of these as the culprit.
> They're all very simple, but there's always a chance I've missed a hard
> coding of slot numbers somewhere. Thanks,
I identified the culprit:
commit b7f69c555ca430129b6cde81e9f0927531420c5c
KVM: Minor memory slot optimization
IIUC, the problem was that the patch did not take into account the
generation of the slots, which is updated by update_memslots():

Your patch reused the old memory slots, the ones which were there before
the update that invalidated the slot, and worse, we flushed the shadow
pages after that, before doing the second update that finally installs
the new slot.  As a result, the generation did not change from that of
the invalidated one, although the ghc (gfn-to-hva cache) might be stale.
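
Just to illustrate the generation arithmetic, here is a minimal userspace
sketch; it is not the real kvm_main.c code, install() merely stands in for
update_memslots() bumping slots->generation, and the numbers are made up:

    /*
     * Minimal sketch of the generation arithmetic, not the real code:
     * install() stands in for update_memslots(), which bumps
     * slots->generation on every install.
     */
    #include <stdio.h>

    struct memslots { unsigned long long generation; };

    static void install(struct memslots *s) { s->generation++; }

    int main(void)
    {
        struct memslots active  = { .generation = 10 };  /* currently installed   */
        struct memslots interim = active;                /* copy used to invalidate */
        struct memslots final;

        install(&interim);   /* invalidate phase: interim.generation == 11      */
                             /* ... shadow pages flushed; a ghc can be          */
                             /* initialized against the interim slots here ...  */

        final = active;      /* the optimization: reuse the old slots (gen 10)  */
        install(&final);     /* final install: final.generation == 11 again     */

        printf("interim=%llu final=%llu -> the ghc's generation still matches\n",
               interim.generation, final.generation);
        return 0;
    }

With a second copy of the then-current slots, as before the patch, the final
install would have bumped the generation past that of the invalidated interim
slots, and the stale ghc would have been re-initialized.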
After that, kvm_write_guest_cached() checked whether the ghc needed to
be re-initialized by comparing the ghc's generation against that stale
one, with the result that mark_page_dirty_in_slot() was called with the
invalid cache contents.
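
For reference, the check in question looks roughly like this (simplified
from the kvm_main.c of that time, so take the details with a grain of salt):

    int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                               void *data, unsigned long len)
    {
        struct kvm_memslots *slots = kvm_memslots(kvm);

        /* Only re-initialize the cache when the generation changed;
         * with the reused slots the generations collide, so a stale
         * ghc->memslot/ghc->hva is trusted here. */
        if (slots->generation != ghc->generation)
            kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa);

        if (kvm_is_error_hva(ghc->hva))
            return -EFAULT;

        if (__copy_to_user((void __user *)ghc->hva, data, len))
            return -EFAULT;
        mark_page_dirty_in_slot(kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);

        return 0;
    }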
Although we can do something to correct the generation alone, I do not
think such a trick is worth it because this is not a hot path. Let's
just revert the patch.
Thanks,
Takuya