Message-Id: <20120421103839.7e17a3e46ccc50629691c997@gmail.com>
Date: Sat, 21 Apr 2012 10:38:39 +0900
From: Takuya Yoshikawa <takuya.yoshikawa@...il.com>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
Avi Kivity <avi@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH v3 5/9] KVM: MMU: introduce SPTE_WRITE_PROTECT bit
On Fri, 20 Apr 2012 21:55:55 -0300
Marcelo Tosatti <mtosatti@...hat.com> wrote:
> More importantly than the particular flush TLB case, the point is
> every piece of code that reads and writes sptes must now be aware that
> mmu_lock alone does not guarantee stability. Everything must be audited.
In addition, please give me some stress-test cases to verify these in
real environments - live migration with KSM, with notifier calls, and
so on?  Although the current logic is verified by the dirty-log api
test, the new logic may need another api test program.

Note: the problem is that live migration can fail silently.  We cannot
tell whether the data loss comes from a guest side problem or from the
get_dirty side.
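
For reference, the kind of check another api test program would do
might look roughly like the sketch below.  This is only a sketch: the
vm_fd, slot_id and slot_pages values, and the memslot set up with
KVM_MEM_LOG_DIRTY_PAGES, are assumed to exist elsewhere; only the
KVM_GET_DIRTY_LOG ioctl and struct kvm_dirty_log are real interfaces.

/*
 * Sketch: fetch the dirty bitmap for one memslot and count the pages
 * reported dirty since the last call.  vm_fd/slot_id/slot_pages are
 * assumed to be set up by the rest of the test program.
 */
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static int check_dirty_log(int vm_fd, uint32_t slot_id, size_t slot_pages)
{
	/* The kernel fills the bitmap in long-sized chunks, so round up. */
	size_t longs = (slot_pages + 8 * sizeof(unsigned long) - 1) /
		       (8 * sizeof(unsigned long));
	unsigned long *bitmap = calloc(longs, sizeof(*bitmap));
	struct kvm_dirty_log log = {
		.slot = slot_id,
		.dirty_bitmap = bitmap,
	};
	size_t i, dirty = 0;

	if (!bitmap)
		return -1;

	/* Fetch and clear the dirty bitmap for this memslot. */
	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
		perror("KVM_GET_DIRTY_LOG");
		free(bitmap);
		return -1;
	}

	/* Count the pages reported dirty since the last call. */
	for (i = 0; i < slot_pages; i++)
		if (bitmap[i / (8 * sizeof(unsigned long))] &
		    (1UL << (i % (8 * sizeof(unsigned long)))))
			dirty++;

	printf("slot %u: %zu/%zu pages dirty\n", slot_id, dirty, slot_pages);
	free(bitmap);
	return 0;
}

A real test would of course compare the reported bitmap against the
pages the test itself dirtied, so that a silent loss on the get_dirty
side becomes visible instead of just looking like a guest problem.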
> Where does the bulk of the improvement come from again? If there is
> little or no mmu_lock contention (for which, to be honest, we have no
> consistent data in your testcase), is it the bouncing of mmu_lock's
> cacheline that hurts?
This week, I was doing some simplified "worst-latency-tests" for my
work.  It was more difficult than I thought.

But Xiao's "lock-less" work should show the reduction of mmu_lock
contention more easily, if there really is some.

To make things simple, we can, for example, do the same kind of write
loop in the guest as the XBZRLE people are doing - with more VCPUs if
possible.
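
Something along these lines, I mean (just a sketch: page size, buffer
size and thread count are arbitrary; the idea is one thread per VCPU):

#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE	4096UL
#define BUF_PAGES	(64UL * 1024)	/* 256MB per thread */
#define NR_THREADS	4		/* ideally one per VCPU */

/* Keep dirtying one byte in every page so get_dirty always has work. */
static void *dirty_loop(void *arg)
{
	volatile uint8_t *buf = arg;
	unsigned long i;
	uint8_t v = 0;

	for (;;)
		for (i = 0; i < BUF_PAGES; i++)
			buf[i * PAGE_SIZE] = v++;

	return NULL;
}

int main(void)
{
	pthread_t th[NR_THREADS];
	int i;

	for (i = 0; i < NR_THREADS; i++) {
		void *buf = malloc(BUF_PAGES * PAGE_SIZE);

		if (!buf)
			return 1;
		pthread_create(&th[i], NULL, dirty_loop, buf);
	}
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(th[i], NULL);

	return 0;
}

Running this in the guest while get_dirty is called repeatedly (or
during an actual migration) should keep the dirty bitmap full and the
write protection path busy.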
Thanks,
Takuya