Message-Id: <20201113110952.68086-2-tsbogend@alpha.franken.de>
Date: Fri, 13 Nov 2020 12:09:50 +0100
From: Thomas Bogendoerfer <tsbogend@...ha.franken.de>
To: Huacai Chen <chenhc@...ote.com>,
Aleksandar Markovic <aleksandar.qemu.devel@...il.com>,
linux-mips@...r.kernel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 2/4] MIPS: kvm: Use vm_get_page_prot to get protection bits
MIPS protection bits are set up at runtime, so using compile-time defines
like PAGE_SHARED ignores these runtime changes. Using vm_get_page_prot
to look up the correct page protection fixes this.
Signed-off-by: Thomas Bogendoerfer <tsbogend@...ha.franken.de>
---
arch/mips/kvm/mmu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
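Note for reviewers (not part of the commit message): vm_get_page_prot()
returns the entry of the protection_map[] table that the architecture can
populate at boot, so the pgprot it hands back reflects the runtime setup
rather than a value fixed at compile time. A simplified sketch of the
generic helper, not the exact mm/mmap.c implementation:

/*
 * Simplified sketch: the generic helper indexes protection_map[],
 * which the architecture fills in (on MIPS, at runtime), instead of
 * baking protection bits in via a compile-time define.
 */
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	return __pgprot(pgprot_val(protection_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)]));
}

With that, vm_get_page_prot(VM_READ|VM_WRITE|VM_SHARED) in the hunk below
picks up the runtime-initialized shared read/write entry where PAGE_SHARED
would not.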
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index 28c366d307e7..3dabeda82458 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -1074,6 +1074,7 @@ int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
{
kvm_pfn_t pfn;
pte_t *ptep;
+ pgprot_t prot;
ptep = kvm_trap_emul_pte_for_gva(vcpu, badvaddr);
if (!ptep) {
@@ -1083,7 +1084,8 @@ int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
pfn = PFN_DOWN(virt_to_phys(vcpu->arch.kseg0_commpage));
/* Also set valid and dirty, so refill handler doesn't have to */
- *ptep = pte_mkyoung(pte_mkdirty(pfn_pte(pfn, PAGE_SHARED)));
+ prot = vm_get_page_prot(VM_READ|VM_WRITE|VM_SHARED);
+ *ptep = pte_mkyoung(pte_mkdirty(pfn_pte(pfn, prot)));
/* Invalidate this entry in the TLB, guest kernel ASID only */
kvm_mips_host_tlb_inv(vcpu, badvaddr, false, true);
--
2.16.4