Message-ID: <20230602160914.4011728-4-vipinsh@google.com>
Date: Fri, 2 Jun 2023 09:09:01 -0700
From: Vipin Sharma <vipinsh@...gle.com>
To: maz@...nel.org, oliver.upton@...ux.dev, james.morse@....com,
suzuki.poulose@....com, yuzenghui@...wei.com,
catalin.marinas@....com, will@...nel.org, chenhuacai@...nel.org,
aleksandar.qemu.devel@...il.com, tsbogend@...ha.franken.de,
anup@...infault.org, atishp@...shpatra.org,
paul.walmsley@...ive.com, palmer@...belt.com,
aou@...s.berkeley.edu, seanjc@...gle.com, pbonzini@...hat.com,
dmatlack@...gle.com, ricarkol@...gle.com
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-mips@...r.kernel.org, kvm-riscv@...ts.infradead.org,
linux-riscv@...ts.infradead.org, linux-kselftest@...r.kernel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Vipin Sharma <vipinsh@...gle.com>
Subject: [PATCH v2 03/16] KVM: selftests: Pass the count of read and write
accesses from guest to host
Pass the number of read and write accesses performed by the memstress guest
code to userspace.

These counts provide a way to measure vCPU performance during memstress
and dirty-logging related tests. For example, in dirty_log_perf_test they
can be used to measure how much progress the vCPUs make while the VMM is
getting and clearing dirty logs.

In dirty_log_perf_test, each vCPU currently runs once and then waits until
the iteration value is incremented by the main thread; therefore, these
access counts will not provide much useful information beyond observing
the read vs. write split.

However, future commits will change dirty_log_perf_test to allow vCPUs to
execute independently of userspace iterations. This mimics real-world
workloads where the guest keeps executing while the VMM collects and
clears dirty logs separately. With the read and write accesses known for
each vCPU, the impact of the get and clear dirty log APIs can be
quantified.

Note that access counts are not a fully reliable measure of vCPU
performance. A few things can still affect vCPU progress:
1. vCPUs being scheduled less by the host.
2. Userspace operations running longer, which ends up giving vCPUs more
   time to execute.
Signed-off-by: Vipin Sharma <vipinsh@...gle.com>
---
tools/testing/selftests/kvm/lib/memstress.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 5f1d3173c238..ac53cc6e36d7 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -49,6 +49,8 @@ void memstress_guest_code(uint32_t vcpu_idx)
 	struct memstress_args *args = &memstress_args;
 	struct memstress_vcpu_args *vcpu_args = &args->vcpu_args[vcpu_idx];
 	struct guest_random_state rand_state;
+	uint64_t write_access;
+	uint64_t read_access;
 	uint64_t gva;
 	uint64_t pages;
 	uint64_t addr;
@@ -64,6 +66,8 @@ void memstress_guest_code(uint32_t vcpu_idx)
 	GUEST_ASSERT(vcpu_args->vcpu_idx == vcpu_idx);
 
 	while (true) {
+		write_access = 0;
+		read_access = 0;
 		for (i = 0; i < pages; i++) {
 			if (args->random_access)
 				page = guest_random_u32(&rand_state) % pages;
@@ -72,13 +76,16 @@ void memstress_guest_code(uint32_t vcpu_idx)
 
 			addr = gva + (page * args->guest_page_size);
 
-			if (guest_random_u32(&rand_state) % 100 < args->write_percent)
+			if (guest_random_u32(&rand_state) % 100 < args->write_percent) {
 				*(uint64_t *)addr = 0x0123456789ABCDEF;
-			else
+				write_access++;
+			} else {
 				READ_ONCE(*(uint64_t *)addr);
+				read_access++;
+			}
 		}
 
-		GUEST_SYNC(1);
+		GUEST_SYNC_ARGS(1, read_access, write_access, 0, 0);
 	}
 }
--
2.41.0.rc0.172.g3f132b7071-goog