Message-Id: <1431499348-25188-1-git-send-email-guangrong.xiao@linux.intel.com>
Date: Wed, 13 May 2015 14:42:18 +0800
From: Xiao Guangrong <guangrong.xiao@...ux.intel.com>
To: pbonzini@...hat.com
Cc: gleb@...nel.org, mtosatti@...hat.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Xiao Guangrong <guangrong.xiao@...ux.intel.com>
Subject: [PATCH v3 00/10] KVM: MTRR fixes and some cleanups
Changelog in v3:
thanks to Paolo's comments:
- do not apply for_each_rmap_spte to kvm_zap_rmapp and kvm_mmu_unlink_parents
- fix a cosmetic issue in slot_handle_level_range
- introduce PT_MAX_HUGEPAGE_LEVEL to clean up the code
- improve code indentation
Changelog in v2:
- fix the bit description in the changelog of the first patch, thanks to
  David Matlack for pointing it out
all the following changes are from Paolo's comments, which I really appreciate:
- reorder the whole patchset to make it more readable
- redesign the iterator APIs
- flush the TLB if @lock_flush_tlb is true in slot_handle_level()
- make the MTRR update generic
There are some MTRR bugs when a legacy IOMMU device is used on Intel's CPUs:
- In the current code, whenever guest MTRR registers are changed,
  kvm_mmu_reset_context is called to switch to a new root shadow page
  table; however, this is useless since:
  1) the cache type is not cached in the shadow page's attributes, so the
     original root shadow page will be reused
  2) the cache type is set on the last spte, which means we should sync the
     last sptes when the MTRR is changed
  We can fix it by dropping all the sptes in the gfn range which is
  being updated by the MTRR
- there are some bugs in get_mtrr_type() (a minimal sketch of the intended
  decoding order follows this list):
  1: bit 1 of mtrr_state->enabled corresponds to bit 11 of the
     IA32_MTRR_DEF_TYPE MSR, which completely controls MTRR enablement; the
     other bits are ignored if it is cleared
  2: the fixed MTRR ranges are controlled by bit 0 of mtrr_state->enabled
     (bit 10 of IA32_MTRR_DEF_TYPE)
  3: if MTRRs are disabled, UC is applied to all of physical memory rather
     than mtrr_state->def_type
- we need not reset the MMU when the cache policy is changed, since the
  shadow page table does not virtualize any cache policy
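
As a concrete illustration of the decoding order described in the
get_mtrr_type() item above, here is a minimal, self-contained sketch. It is
not the kernel code: the struct, the fixed_range_type() stub and main() are
made up for illustration, and variable-range matching is omitted.

#include <stdint.h>
#include <stdio.h>

/*
 * Simplified, hypothetical view of the guest MTRR state; the field names
 * echo mtrr_state->enabled / ->def_type above, but this is not the kernel
 * structure.
 */
struct mtrr_state_sketch {
	uint8_t enabled;  /* bit 0 = IA32_MTRR_DEF_TYPE[10] (fixed-range enable) */
			  /* bit 1 = IA32_MTRR_DEF_TYPE[11] (MTRR enable)        */
	uint8_t def_type; /* default memory type */
};

#define MTRR_TYPE_UNCACHABLE 0

/* Stub for the fixed-range lookup (64K/16K/4K fixed MTRRs below 1MB). */
static uint8_t fixed_range_type(const struct mtrr_state_sketch *s, uint64_t gpa)
{
	(void)s; (void)gpa;
	return MTRR_TYPE_UNCACHABLE;	/* placeholder only */
}

static uint8_t mtrr_type_for(const struct mtrr_state_sketch *s, uint64_t gpa)
{
	/* (3) MTRRs disabled: all of physical memory is UC, not def_type. */
	if (!(s->enabled & 0x2))
		return MTRR_TYPE_UNCACHABLE;

	/* (2) fixed ranges apply only below 1MB and only when bit 0 is set */
	if ((s->enabled & 0x1) && gpa < 0x100000)
		return fixed_range_type(s, gpa);

	/* (1) otherwise fall back to the variable ranges / default type
	       (variable-range matching omitted here for brevity) */
	return s->def_type;
}

int main(void)
{
	struct mtrr_state_sketch s = { .enabled = 0x0, .def_type = 6 /* WB */ };

	/* with bit 11 clear the result is UC regardless of def_type */
	printf("type = %u\n", (unsigned)mtrr_type_for(&s, 0x200000));
	return 0;
}
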
Also, there are some cleanups to make the current MMU code cleaner and the
bugs easier to fix.
Xiao Guangrong (10):
KVM: MMU: fix decoding cache type from MTRR
KVM: MMU: introduce for_each_rmap_spte()
KVM: MMU: introduce PT_MAX_HUGEPAGE_LEVEL
KVM: MMU: introduce for_each_slot_rmap_range
KVM: MMU: introduce slot_handle_level_range() and its helpers
KVM: MMU: use slot_handle_level and its helper to clean up the code
KVM: MMU: introduce kvm_zap_rmapp
KVM: MMU: introduce kvm_zap_gfn_range
KVM: MMU: fix MTRR update
KVM: x86: do not reset mmu if CR0.CD and CR0.NW are changed
arch/x86/kvm/mmu.c | 409 +++++++++++++++++++++++++----------------------
arch/x86/kvm/mmu.h | 2 +
arch/x86/kvm/mmu_audit.c | 4 +-
arch/x86/kvm/x86.c | 62 ++++++-
4 files changed, 281 insertions(+), 196 deletions(-)
--
2.1.0