Message-ID: <20200504183832.GL8135@suse.de>
Date: Mon, 4 May 2020 20:38:32 +0200
From: Joerg Roedel <jroedel@...e.de>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Borislav Petkov <bp@...en8.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Shile Zhang <shile.zhang@...ux.alibaba.com>,
Andy Lutomirski <luto@...capital.net>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Tzvetomir Stoyanov <tz.stoyanov@...il.com>
Subject: Re: [PATCH] percpu: Sync vmalloc mappings in pcpu_alloc() and
free_percpu()
On Mon, May 04, 2020 at 01:40:42PM -0400, Steven Rostedt wrote:
> Seems that your patch caused a lockdep splat on my box:
>
> ========================================================
> WARNING: possible irq lock inversion dependency detected
> 5.7.0-rc3-test+ #249 Not tainted
> --------------------------------------------------------
> swapper/4/0 just changed the state of lock:
> ffff9a580fdd75a0 (&ndev->lock){++.-}-{2:2}, at: mld_ifc_timer_expire+0x3c/0x350
> but this lock took another, SOFTIRQ-unsafe lock in the past:
> (pgd_lock){+.+.}-{2:2}
>
>
> and interrupts could create inverse lock ordering between them.
>
>
> other info that might help us debug this:
> Possible interrupt unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> lock(pgd_lock);
> local_irq_disable();
> lock(&ndev->lock);
> lock(pgd_lock);
> <Interrupt>
> lock(&ndev->lock);
>
> *** DEADLOCK ***
Fair point, but this just shows how problematic it is to call something
like vmalloc_sync_mappings() from a lower-level kernel API function.
The obvious fix for this would be to make pgd_lock irq-safe, but this is
getting more and more ridiculous.
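To illustrate, making it irq-safe would mean converting every pgd_lock
locking site to the irqsave variants, roughly like this (untested, just
a sketch):

	unsigned long flags;

	/*
	 * Disabling interrupts while pgd_lock is held prevents taking
	 * an interrupt (and its locks) on top of it, which closes the
	 * inversion lockdep complains about.
	 */
	spin_lock_irqsave(&pgd_lock, flags);
	/* ... walk pgd_list and sync the shared kernel page-tables ... */
	spin_unlock_irqrestore(&pgd_lock, flags);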
I know you don't like to have a vmalloc_sync_mappings() call in the
tracing code, but can you live with it until we get rid of this broken
interface?
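That is, keeping the explicit sync right after the per-cpu buffer
allocation, roughly like this (simplified, not the exact tracing code):

	buf->data = alloc_percpu(struct trace_array_cpu);
	if (!buf->data)
		return -ENOMEM;

	/*
	 * Make sure the mappings behind the new per-cpu allocation are
	 * present in all page-tables before the buffer is touched from
	 * tracing/NMI context, where a vmalloc fault must not happen.
	 */
	vmalloc_sync_mappings();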
My plan for this is to use a small bitmap in the vmalloc and the
(x86-)ioremap code to track at which page-table levels changes were
made, and to combine that with an architecture-dependent mask to decide
whether anything needs to be synced.
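Roughly along these lines (untested sketch, all names invented):

	/* Bits for the page-table levels at which entries were installed */
	#define PGTBL_PMD_MODIFIED	BIT(0)
	#define PGTBL_PUD_MODIFIED	BIT(1)
	#define PGTBL_P4D_MODIFIED	BIT(2)
	#define PGTBL_PGD_MODIFIED	BIT(3)

	/*
	 * The vmalloc/ioremap page-table population code collects the
	 * modified levels in @mask and only syncs when the architecture
	 * actually cares about one of them, e.g. on x86-64 only changes
	 * at the top level would need a sync.
	 */
	static void maybe_sync_mappings(unsigned long mask)
	{
		if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
			vmalloc_sync_mappings();
	}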
On x86-64 the sync would be necessary at most 64 times after boot (with
4-level paging the vmalloc area spans 64 PGD entries, and each needs to
be synced only once), so I think this will only have a very small
performance impact, even with VMAP_STACK. And as a bonus it would also
get rid of vmalloc faulting on x86, fixing the issue with tracing too.
Regards,
Joerg