Message-Id: <20220605064342.309219-6-jiangshanlai@gmail.com>
Date:   Sun,  5 Jun 2022 14:43:35 +0800
From:   Lai Jiangshan <jiangshanlai@...il.com>
To:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Maxim Levitsky <mlevitsk@...hat.com>,
        Lai Jiangshan <jiangshan.ljs@...group.com>
Subject: [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk()

From: Lai Jiangshan <jiangshan.ljs@...group.com>

mmu_unsync_walk() and __mmu_unsync_walk() require the caller to clear
the unsync bit for the shadow pages in the resulting pvec by syncing
them or zapping them.

All callers do so.

Otherwise mmu_unsync_walk() and __mmu_unsync_walk() could not make
progress, because they always restart the walk from the beginning.
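
To make the contract concrete, here is a condensed sketch of the
consumer side, loosely modeled on mmu_sync_children() in
arch/x86/kvm/mmu/mmu.c (write protection, TLB flushing and yielding
elided).  Each pass syncs every shadow page in the pvec and clears its
unsync state, so the next pass of mmu_unsync_walk() makes progress
even though it restarts from @parent:

	struct kvm_mmu_pages pages;
	struct mmu_page_path parents;
	struct kvm_mmu_page *sp;
	int i;

	while (mmu_unsync_walk(parent, &pages)) {
		for_each_sp(pages, sp, parents, i) {
			/* clears sp->unsync */
			kvm_unlink_unsync_page(vcpu->kvm, sp);
			/* fixes up the sptes of the unsync page */
			kvm_sync_page(vcpu, sp, &invalid_list);
			/* clears unsync_child_bitmap bits in the parents */
			mmu_pages_clear_parents(&parents);
		}
	}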

It would be possible to make mmu_unsync_walk() and __mmu_unsync_walk()
list the unsync shadow pages in the resulting pvec without requiring
them to be synced or zapped afterwards.  It would require changing
mmu_unsync_walk() and __mmu_unsync_walk() to walk from the last
visited position, derived from the resulting pvec of the previous call
to mmu_unsync_walk().

That would complicate the walk, and no caller needs the new behavior.

It is better to keep the original behavior.

The shadow pages in the resulting pvec will be synced or zapped, and
clear_unsync_child_bit() will be called for their parents later anyway.

Call clear_unsync_child_bit() earlier, directly in __mmu_unsync_walk(),
to make the code more efficient: the memory of the shadow pages is
still hot in the CPU cache, and there is no need to visit the shadow
pages again later.
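
For reference, the "later" call referred to above happens in
mmu_pages_clear_parents(), which walks back up the parent chain
recorded during the walk (roughly as in the base tree; the
INVALID_INDEX sanity check is elided here):

	static void mmu_pages_clear_parents(struct mmu_page_path *parents)
	{
		struct kvm_mmu_page *sp;
		unsigned int level = 0;

		do {
			unsigned int idx = parents->idx[level];

			sp = parents->parent[level];
			if (!sp)
				return;

			clear_unsync_child_bit(sp, idx);
			level++;
		} while (!sp->unsync_children);
	}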

Signed-off-by: Lai Jiangshan <jiangshan.ljs@...group.com>
---
 arch/x86/kvm/mmu/mmu.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f35fd5c59c38..2446ede0b7b9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1794,19 +1794,23 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 				return -ENOSPC;
 
 			ret = __mmu_unsync_walk(child, pvec);
-			if (!ret) {
-				clear_unsync_child_bit(sp, i);
-				continue;
-			} else if (ret > 0) {
-				nr_unsync_leaf += ret;
-			} else
+			if (ret < 0)
 				return ret;
-		} else if (child->unsync) {
+			nr_unsync_leaf += ret;
+		}
+
+		/*
+		 * Clear the unsync bit for @child directly: @child has been
+		 * fully walked and all the unsync shadow pages descended
+		 * from it (including @child itself) end up in @pvec, and the
+		 * caller must sync or zap every unsync shadow page in @pvec.
+		 */
+		clear_unsync_child_bit(sp, i);
+		if (child->unsync) {
 			nr_unsync_leaf++;
 			if (mmu_pages_add(pvec, child, i))
 				return -ENOSPC;
-		} else
-			clear_unsync_child_bit(sp, i);
+		}
 	}
 
 	return nr_unsync_leaf;
-- 
2.19.1.6.gb485710b
