Message-ID: <52348289-5d4d-f4a4-6fe3-f0c24cc6d9f9@redhat.com>
Date: Thu, 15 Jul 2021 07:58:28 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Jason Wang <jasowang@...hat.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
"Michael S. Tsirkin" <mst@...hat.com>,
Juri Lelli <jlelli@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Al Viro <viro@...iv.linux.org.uk>,
He Zhe <zhe.he@...driver.com>
Subject: Re: 5.13-rt1 + KVM = WARNING: at fs/eventfd.c:74 eventfd_signal()
On 15/07/21 06:14, Jason Wang wrote:
>> This obviously does not fly with PREEMPT_RT. If eventfd_signal is
>> preempted and an unrelated thread calls eventfd_signal, the result is
>> a spurious WARN. To avoid this, protect the percpu variable with a
>> local_lock.
>
> But local_lock only disables migration, not preemption.
On mainline PREEMPT_RT, a local_lock is backed by per-CPU spinlocks.
When two eventfd_signal() calls run on the same CPU and one is preempted,
the spinlock prevents the second from seeing eventfd_wake_count > 0.
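
For reference, here is a simplified sketch of what local_lock() and
local_unlock() map to on the two configurations (roughly following
include/linux/local_lock_internal.h; this is a sketch, not the literal
mainline definitions):

#ifndef CONFIG_PREEMPT_RT
/* !RT: taking a local_lock just disables preemption on this CPU. */
#define local_lock(lock)	preempt_disable()
#define local_unlock(lock)	preempt_enable()
#else
/* RT: local_lock_t is a per-CPU spinlock_t, i.e. a sleeping lock. */
#define local_lock(lock)				\
	do {						\
		migrate_disable();			\
		spin_lock(this_cpu_ptr(lock));		\
	} while (0)
#define local_unlock(lock)				\
	do {						\
		spin_unlock(this_cpu_ptr(lock));	\
		migrate_enable();			\
	} while (0)
#endif

So even if the task holding eventfd_wake_lock is preempted, a second
eventfd_signal() on the same CPU blocks on the per-CPU spinlock rather
than reading a non-zero eventfd_wake_count and tripping the WARN.
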
Thanks,
Paolo
> Or anything I missed here?
>
> Thanks
>
>
>>
>> Reported-by: Daniel Bristot de Oliveira <bristot@...hat.com>
>> Fixes: b5e683d5cab8 ("eventfd: track eventfd_signal() recursion depth")
>> Cc: stable@...r.kernel.org
>> Cc: He Zhe <zhe.he@...driver.com>
>> Cc: Jens Axboe <axboe@...nel.dk>
>> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
>>
>> diff --git a/fs/eventfd.c b/fs/eventfd.c
>> index e265b6dd4f34..7d27b6e080ea 100644
>> --- a/fs/eventfd.c
>> +++ b/fs/eventfd.c
>> @@ -12,6 +12,7 @@
>> #include <linux/fs.h>
>> #include <linux/sched/signal.h>
>> #include <linux/kernel.h>
>> +#include <linux/local_lock.h>
>> #include <linux/slab.h>
>> #include <linux/list.h>
>> #include <linux/spinlock.h>
>> @@ -25,6 +26,7 @@
>> #include <linux/idr.h>
>> #include <linux/uio.h>
>>
>> +static local_lock_t eventfd_wake_lock = INIT_LOCAL_LOCK(eventfd_wake_lock);
>> DEFINE_PER_CPU(int, eventfd_wake_count);
>>
>> static DEFINE_IDA(eventfd_ida);
>> @@ -71,8 +73,11 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
>> * it returns true, the eventfd_signal() call should be deferred to a
>> * safe context.
>> */
>> - if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
>> + local_lock(&eventfd_wake_lock);
>> + if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count))) {
>> + local_unlock(&eventfd_wake_lock);
>> return 0;
>> + }
>>
>> spin_lock_irqsave(&ctx->wqh.lock, flags);
>> this_cpu_inc(eventfd_wake_count);
>> @@ -83,6 +88,7 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
>> wake_up_locked_poll(&ctx->wqh, EPOLLIN);
>> this_cpu_dec(eventfd_wake_count);
>> spin_unlock_irqrestore(&ctx->wqh.lock, flags);
>> + local_unlock(&eventfd_wake_lock);
>>
>> return n;
>> }
>>
>