Message-Id: <adcbac67-82b5-98a4-efb4-61c9ed870c15@linux.ibm.com>
Date: Mon, 8 Apr 2019 09:12:28 +0200
From: Thomas-Mich Richter <tmricht@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Kees Cook <keescook@...omium.org>, acme@...hat.com,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Hendrik Brueckner <brueckner@...ux.ibm.com>,
Martin Schwidefsky <schwidefsky@...ibm.com>
Subject: Re: WARN_ON_ONCE() hit at kernel/events/core.c:330
On 4/4/19 3:03 PM, Peter Zijlstra wrote:
> On Thu, Apr 04, 2019 at 01:09:09PM +0200, Peter Zijlstra wrote:
>
>> That is not entirely the scenario I talked about, but *groan*.
>>
>> So what I meant was:
>>
>> CPU-0                                        CPU-n
>>
>> __schedule()
>>   local_irq_disable()
>>
>>   ...
>>   deactivate_task(prev);
>>
>>                                               try_to_wake_up(@p)
>>                                                 ...
>>                                                 smp_cond_load_acquire(&p->on_cpu, !VAL);
>>
>>   <PMI>
>>     ..
>>     perf_event_disable_inatomic()
>>       event->pending_disable = 1;
>>       irq_work_queue() /* self-IPI */
>>   </PMI>
>>
>>   context_switch()
>>     prepare_task_switch()
>>       perf_event_task_sched_out()
>>         // the above chain that clears pending_disable
>>
>>     finish_task_switch()
>>       finish_task()
>>         smp_store_release(prev->on_cpu, 0);
>>                                                 /* finally.... */
>>                                                 // take woken
>>                                                 // context_switch to @p
>>       finish_lock_switch()
>>         raw_spin_unlock_irq()
>>   /* w00t, IRQs enabled, self-IPI time */
>>   <self-IPI>
>>     perf_pending_event()
>>       // event->pending_disable == 0
>>   </self-IPI>
>>
>>
>> What you're suggesting, is that the time between:
>>
>> smp_store_release(prev->on_cpu, 0);
>>
>> and
>>
>> <self-IPI>
>>
>> on CPU-0 is sufficient for CPU-n to context switch to the task, enable
>> the event there, trigger a PMI that calls perf_event_disable_inatomic()
>> _again_ (this would mean irq_work_queue() failing, which we don't check)
>> (and schedule out again, although that's not required).
>>
>> This being virt, that might actually be possible if (v)CPU-0 takes a nap,
>> I suppose.
>>
>> Let me think about this a little more...
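
(Side note: the "irq_work_queue() failing" above refers to the fact that the
irq_work is only queued when it is not already pending, and
perf_event_disable_inatomic() does not check the return value, so a second
request can be silently dropped. A rough userspace model of that behaviour,
with made-up names rather than the kernel's actual irq_work code:

#include <stdatomic.h>
#include <stdbool.h>

struct fake_irq_work {
        atomic_flag pending;            /* stands in for IRQ_WORK_PENDING */
};

static bool fake_irq_work_queue(struct fake_irq_work *work)
{
        /* Already claimed: nothing is re-queued and the caller is not told. */
        if (atomic_flag_test_and_set(&work->pending))
                return false;

        /* ...the real code would raise the self-IPI here... */
        return true;
}

End of side note.)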
>
> Does the below cure things? It's not exactly pretty, but it could just
> do the trick.
>
> ---
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index dfc4bab0b02b..d496e6911442 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2009,8 +2009,8 @@ event_sched_out(struct perf_event *event,
>          event->pmu->del(event, 0);
>          event->oncpu = -1;
> 
> -        if (event->pending_disable) {
> -                event->pending_disable = 0;
> +        if (event->pending_disable == smp_processor_id()) {
> +                event->pending_disable = -1;
>                  state = PERF_EVENT_STATE_OFF;
>          }
>          perf_event_set_state(event, state);
> @@ -2198,7 +2198,7 @@ EXPORT_SYMBOL_GPL(perf_event_disable);
> 
>  void perf_event_disable_inatomic(struct perf_event *event)
>  {
> -        event->pending_disable = 1;
> +        event->pending_disable = smp_processor_id();
>          irq_work_queue(&event->pending);
>  }
> 
> @@ -5822,8 +5822,8 @@ static void perf_pending_event(struct irq_work *entry)
>           * and we won't recurse 'further'.
>           */
> 
> -        if (event->pending_disable) {
> -                event->pending_disable = 0;
> +        if (event->pending_disable == smp_processor_id()) {
> +                event->pending_disable = -1;
>                  perf_event_disable_local(event);
>          }
> 
> @@ -10236,6 +10236,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
> 
> 
>          init_waitqueue_head(&event->waitq);
> +        event->pending_disable = -1;
>          init_irq_work(&event->pending, perf_pending_event);
> 
>          mutex_init(&event->mmap_mutex);
>
Peter,
very good news: your fix ran over the weekend without any hit of the warning!
Thanks very much for your help. Will you submit this patch to the kernel mailing list?
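
If I read the patch right, the idea is that event->pending_disable now records
which CPU queued the self-IPI instead of being a plain flag, so a request that
is observed on some other CPU (after the event has been scheduled out there)
is simply left alone. A tiny userspace model of that check, with invented
names rather than the real perf code:

#include <stdio.h>

struct fake_event {
        int pending_disable;            /* -1: nothing pending, >= 0: CPU id */
};

static void fake_pending_handler(struct fake_event *event, int this_cpu)
{
        if (event->pending_disable == this_cpu) {
                event->pending_disable = -1;
                printf("CPU %d: disabling event\n", this_cpu);
        } else {
                /* Request belongs to another CPU (or none): leave it alone. */
                printf("CPU %d: ignoring, pending_disable=%d\n",
                       this_cpu, event->pending_disable);
        }
}

int main(void)
{
        struct fake_event ev = { .pending_disable = -1 };

        ev.pending_disable = 0;         /* PMI on CPU 0 queued a self-IPI */
        fake_pending_handler(&ev, 1);   /* handler runs on CPU 1: ignored */
        fake_pending_handler(&ev, 0);   /* handler runs on CPU 0: disables */
        return 0;
}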
--
Thomas Richter, Dept 3252, IBM s390 Linux Development, Boeblingen, Germany
--
Chairman of the Supervisory Board: Matthias Hartmann
Managing Director: Dirk Wittkopp
Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294