Message-ID: <20200518163700.4nn4csjlbpcixmsv@linux-p48b>
Date: Mon, 18 May 2020 09:37:00 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: tglx@...utronix.de, peterz@...radead.org, maz@...nel.org,
bigeasy@...utronix.de, rostedt@...dmis.org,
torvalds@...ux-foundation.org, will@...nel.org,
joel@...lfernandes.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, Davidlohr Bueso <dbueso@...e.de>
Subject: Re: [PATCH 4/5] rcuwait: Introduce rcuwait_active()
On Mon, 18 May 2020, Paolo Bonzini wrote:
>On 24/04/20 07:48, Davidlohr Bueso wrote:
>> +/*
>> + * Note: this provides no serialization and, just as with waitqueues,
>> + * requires care to estimate as to whether or not the wait is active.
>> + */
>> +static inline int rcuwait_active(struct rcuwait *w)
>> +{
>> + return !!rcu_dereference(w->task);
>> +}
>
>This needs to be changed to rcu_access_pointer:
>
>
>--------------- 8< -----------------
>From: Paolo Bonzini <pbonzini@...hat.com>
>Subject: [PATCH] rcuwait: avoid lockdep splats from rcuwait_active()
>
>rcuwait_active() only returns whether w->task is not NULL. This is
>exactly one of the use cases mentioned in the documentation for
>rcu_access_pointer() where it is correct to bypass lockdep checks.
>
>This avoids a splat from kvm_vcpu_on_spin().
>
>Reported-by: Wanpeng Li <kernellwp@...il.com>
>Cc: Peter Zijlstra <peterz@...radead.org>
>Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
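The resulting helper would then read roughly as follows (a sketch only,
the actual diff is in Paolo's patch):

static inline int rcuwait_active(struct rcuwait *w)
{
	return !!rcu_access_pointer(w->task);
}

Unlike rcu_dereference(), rcu_access_pointer() only fetches the pointer
value for a NULL check and does not require the caller to hold an RCU
read-side lock, so lockdep stays quiet here.
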
Acked-by: Davidlohr Bueso <dbueso@...e.de>