Message-ID: <ZbvVuciX3HENjxQi@google.com>
Date: Thu, 1 Feb 2024 09:32:41 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Shaoqin Huang <shahuang@...hat.com>
Cc: kvm@...r.kernel.org, kvmarm@...ts.linux.dev,
Paolo Bonzini <pbonzini@...hat.com>, Shuah Khan <shuah@...nel.org>, linux-kselftest@...r.kernel.org,
linux-kernel@...r.kernel.org, Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH v2] KVM: selftests: Fix the dirty_log_test semaphore imbalance
On Thu, Feb 01, 2024, Shaoqin Huang wrote:
> > > /*
> > > * We reserve page table for 2 times of extra dirty mem which
> > > * will definitely cover the original (1G+) test range. Here
> > > @@ -825,6 +832,13 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> > > sync_global_to_guest(vm, iteration);
> > > }
> > > +	/*
> > > +	 * Before we set host_quit, give the vCPU time to run, to make
> > > +	 * sure we consume sem_vcpu_stop and the vCPU consumes
> > > +	 * sem_vcpu_cont, keeping the semaphores balanced.
> > > +	 */
> > > + usleep(p->interval * 1000);
> >
> > Please no. "Wait for a while" is never a complete solution for fixing races.
> > In rare cases, adding a delay might be the only sane workaround, but I doubt that's
> > the case here.
>
> If that's the case, I guess I should keep the current solution. Unless you
> have a better solution, please let me know.
Unfortunately I don't have a better solution, and I don't have cycles to stare
at this deeply to figure out how to make the synchronization rock solid.
Sorry :-/
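
For reference only, here is a minimal standalone model of the kind of balanced
stop/cont handshake being discussed, built on POSIX threads and semaphores.
The names (sem_vcpu_stop, sem_vcpu_cont, host_quit) mirror dirty_log_test, but
the structure is heavily simplified and is NOT the actual test code, so treat
it as an illustration of pairing every sem_post() with exactly one sem_wait()
rather than as a fix for the real race: the host decides to quit while it
still holds the vCPU stopped, so no sleep is needed.

/*
 * Simplified model of a balanced sem_vcpu_stop/sem_vcpu_cont handshake.
 * The "guest work" is faked; the real dirty_log_test's vCPU thread and
 * dirty-log collection are far more involved.
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>
#include <stdio.h>

static sem_t sem_vcpu_stop;
static sem_t sem_vcpu_cont;
static atomic_bool host_quit;

static void *vcpu_worker(void *arg)
{
	for (;;) {
		/* Fake a chunk of guest work (dirtying memory) here. */

		/* Tell the host we stopped, then wait to be resumed. */
		sem_post(&sem_vcpu_stop);
		sem_wait(&sem_vcpu_cont);

		if (atomic_load(&host_quit))
			break;
	}
	return NULL;
}

int main(void)
{
	const int iterations = 32;
	pthread_t vcpu;
	int i;

	sem_init(&sem_vcpu_stop, 0, 0);
	sem_init(&sem_vcpu_cont, 0, 0);
	pthread_create(&vcpu, NULL, vcpu_worker, NULL);

	for (i = 1; i <= iterations; i++) {
		/* Wait for the vCPU to stop, then "collect" the dirty log. */
		sem_wait(&sem_vcpu_stop);
		printf("iteration %d collected\n", i);

		/*
		 * On the final iteration, set host_quit *before* resuming
		 * the vCPU.  The vCPU is guaranteed to observe it on this
		 * wakeup, so every post is consumed by exactly one wait and
		 * neither semaphore is left with a stale count.
		 */
		if (i == iterations)
			atomic_store(&host_quit, true);
		sem_post(&sem_vcpu_cont);
	}

	pthread_join(vcpu, NULL);
	sem_destroy(&sem_vcpu_stop);
	sem_destroy(&sem_vcpu_cont);
	return 0;
}

Build with "gcc -pthread" to play with it.  The point is only that the quit
decision is made inside the window where the host owns the handshake, which
is the property a sleep-based workaround cannot guarantee.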