Message-ID: <1540548616.19739.1629409956315.JavaMail.zimbra@efficios.com>
Date: Thu, 19 Aug 2021 17:52:36 -0400 (EDT)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: "Russell King, ARM Linux" <linux@...linux.org.uk>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, Guo Ren <guoren@...nel.org>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Michael Ellerman <mpe@...erman.id.au>,
Heiko Carstens <hca@...ux.ibm.com>, gor <gor@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Oleg Nesterov <oleg@...hat.com>, rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
paulmck <paulmck@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
Paolo Bonzini <pbonzini@...hat.com>, shuah <shuah@...nel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-csky <linux-csky@...r.kernel.org>,
linux-mips@...r.kernel.org,
linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>,
linux-s390@...r.kernel.org, KVM list <kvm@...r.kernel.org>,
linux-kselftest <linux-kselftest@...r.kernel.org>,
Peter Foley <pefoley@...gle.com>,
Shakeel Butt <shakeelb@...gle.com>,
Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH 4/5] KVM: selftests: Add a test for KVM_RUN+rseq to
detect task migration bugs
----- On Aug 17, 2021, at 8:12 PM, Sean Christopherson seanjc@...gle.com wrote:
> Add a test to verify an rseq's CPU ID is updated correctly if the task is
> migrated while the kernel is handling KVM_RUN. This is a regression test
> for a bug introduced by commit 72c3c0fe54a3 ("x86/kvm: Use generic xfer
> to guest work function"), where TIF_NOTIFY_RESUME would be cleared by KVM
> without updating rseq, leading to a stale CPU ID and other badness.
>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
[...]
> +
> +static void *migration_worker(void *ign)
> +{
> +	cpu_set_t allowed_mask;
> +	int r, i, nr_cpus, cpu;
> +
> +	CPU_ZERO(&allowed_mask);
> +
> +	nr_cpus = CPU_COUNT(&possible_mask);
> +
> +	for (i = 0; i < 20000; i++) {
> +		cpu = i % nr_cpus;
> +		if (!CPU_ISSET(cpu, &possible_mask))
> +			continue;
> +
> +		CPU_SET(cpu, &allowed_mask);
> +
> +		r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
> +		TEST_ASSERT(!r, "sched_setaffinity failed, errno = %d (%s)", errno,
> +			    strerror(errno));
> +
> +		CPU_CLR(cpu, &allowed_mask);
> +
> +		usleep(10);
> +	}
> +	done = true;
> +	return NULL;
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +	struct kvm_vm *vm;
> +	u32 cpu, rseq_cpu;
> +	int r;
> +
> +	/* Tell stdout not to buffer its content */
> +	setbuf(stdout, NULL);
> +
> +	r = sched_getaffinity(0, sizeof(possible_mask), &possible_mask);
> +	TEST_ASSERT(!r, "sched_getaffinity failed, errno = %d (%s)", errno,
> +		    strerror(errno));
> +
> +	if (CPU_COUNT(&possible_mask) < 2) {
> +		print_skip("Only one CPU, task migration not possible\n");
> +		exit(KSFT_SKIP);
> +	}
> +
> +	sys_rseq(0);
> +
> +	/*
> +	 * Create and run a dummy VM that immediately exits to userspace via
> +	 * GUEST_SYNC, while concurrently migrating the process by setting its
> +	 * CPU affinity.
> +	 */
> +	vm = vm_create_default(VCPU_ID, 0, guest_code);
> +
> +	pthread_create(&migration_thread, NULL, migration_worker, 0);
> +
> +	while (!done) {
> +		vcpu_run(vm, VCPU_ID);
> +		TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
> +			    "Guest failed?");
> +
> +		cpu = sched_getcpu();
> +		rseq_cpu = READ_ONCE(__rseq.cpu_id);
> +
> +		/*
> +		 * Verify rseq's CPU matches sched's CPU, and that sched's CPU
> +		 * is stable. This doesn't handle the case where the task is
> +		 * migrated between sched_getcpu() and reading rseq, and again
> +		 * between reading rseq and sched_getcpu(), but in practice no
> +		 * false positives have been observed, while on the other hand
> +		 * blocking migration while this thread reads CPUs messes with
> +		 * the timing and prevents hitting failures on a buggy kernel.
> +		 */
I think you could get a stable cpu id between the sched_getcpu() and
__rseq_abi.cpu_id reads if you add a pthread mutex to protect the
sched_getcpu() and __rseq_abi.cpu_id reads in the main loop against the
sched_setaffinity() calls within the migration thread (rough sketch below).

Thoughts?
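
For concreteness, something like this untested sketch (the "migration_lock"
name is just made up for illustration):

	static pthread_mutex_t migration_lock = PTHREAD_MUTEX_INITIALIZER;

	/* In migration_worker(), take the lock around the affinity change: */
	pthread_mutex_lock(&migration_lock);
	r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
	pthread_mutex_unlock(&migration_lock);

	/* In the main loop, sample both CPU ids under the same lock: */
	pthread_mutex_lock(&migration_lock);
	cpu = sched_getcpu();
	rseq_cpu = READ_ONCE(__rseq.cpu_id);
	pthread_mutex_unlock(&migration_lock);

Since the migration thread is the only thing changing the task's affinity,
holding the lock across both reads should keep the cpu id stable between
them.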
Thanks,
Mathieu
> +		TEST_ASSERT(rseq_cpu == cpu || cpu != sched_getcpu(),
> +			    "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);
> +	}
> +
> +	pthread_join(migration_thread, NULL);
> +
> +	kvm_vm_free(vm);
> +
> +	sys_rseq(RSEQ_FLAG_UNREGISTER);
> +
> +	return 0;
> +}
> --
> 2.33.0.rc1.237.g0d66db33f3-goog
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com