Message-ID: <Y1rH2uSEa3tMNhCG@google.com>
Date: Thu, 27 Oct 2022 18:03:06 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: "Wang, Wei W" <wei.w.wang@...el.com>
Cc: "pbonzini@...hat.com" <pbonzini@...hat.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>,
"vipinsh@...gle.com" <vipinsh@...gle.com>,
"ajones@...tanamicro.com" <ajones@...tanamicro.com>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"ikalvarado@...gle.com" <ikalvarado@...gle.com>
Subject: Re: [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code
consolidation and cleanup
On Thu, Oct 27, 2022, Wang, Wei W wrote:
> On Thursday, October 27, 2022 8:16 AM, Sean Christopherson wrote:
> > > diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c
> > > static void run_test(uint32_t run)
> > > {
> > > struct kvm_vcpu *vcpu;
> > > struct kvm_vm *vm;
> > > cpu_set_t cpu_set;
> > > - pthread_t threads[VCPU_NUM];
> > > pthread_t throw_away;
> > > - void *b;
> > > + pthread_attr_t attr;
> > > uint32_t i, j;
> > > + int r;
> > >
> > > CPU_ZERO(&cpu_set);
> > > for (i = 0; i < VCPU_NUM; i++)
> > > CPU_SET(i, &cpu_set);
> >
> > Uh, what is this test doing? I assume the intent is to avoid spamming all
> > pCPUs in the system, but I don't get the benefit of doing so.
>
> IIUC, it is to test whether the race condition between the 2 paths:
> #1 kvm_arch_hardware_disable->drop_user_return_notifiers() and
> #2 fire_user_return_notifiers->kvm_on_user_return
> has been solved by disabling interrupts in kvm_on_user_return().
>
> To stress the test, it creates a bunch of threads (continuously making
> syscalls to trigger #2 above) that are scheduled on the same pCPU that runs
> a vCPU, and then the VM is killed, which triggers #1 above.
> The test forks 512 times, hoping there is a chance that #1 and #2 above
> happen at the same time without hitting an issue.
But why does it matter what pCPU a vCPU is running on? Wouldn't the probability
of triggering a race between kvm_on_user_return() and hardware_disable() be
_higher_ if there are more pCPUs returning to userspace?