Message-Id: <20131214003208.f99bc37c.akpm@linux-foundation.org>
Date: Sat, 14 Dec 2013 00:32:08 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: David Vrabel <david.vrabel@...rix.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Len Brown <lenb@...nel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
<linux-acpi@...r.kernel.org>,
xen-devel <xen-devel@...ts.xenproject.org>,
Dietmar Hahn <dietmar.hahn@...fujitsu.com>
Subject: Re: vunmap() on large regions may trigger soft lockup warnings

On Thu, 12 Dec 2013 12:50:47 +0000 David Vrabel <david.vrabel@...rix.com> wrote:
> > each time. But that would require difficult tuning of N.
> >
> > I suppose we could just do
> >
> > 	if (!in_interrupt())
> > 		cond_resched();
> >
> > in vunmap_pmd_range(), but that's pretty specific to ghes.c and doesn't
> > permit unmap-inside-spinlock.
> >
> > So I can't immediately think of a suitable fix apart from adding a new
> > unmap_kernel_range_atomic(). Then add a `bool atomic' arg to
> > vunmap_page_range() and pass that all the way down.
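
Something along these lines, perhaps (an untested sketch against a
3.13-era mm/vmalloc.c; unmap_kernel_range_atomic() and the `atomic'
argument are hypothetical - they're the thing being proposed):

	void unmap_kernel_range_atomic(unsigned long addr, unsigned long size)
	{
		unsigned long end = addr + size;

		flush_cache_vunmap(addr, end);
		vunmap_page_range(addr, end, true);	/* true: never resched */
		flush_tlb_kernel_range(addr, end);
	}

with the flag passed down so vunmap_pmd_range() can yield in the
non-atomic case:

	static void vunmap_pmd_range(pud_t *pud, unsigned long addr,
				     unsigned long end, bool atomic)
	{
		pmd_t *pmd;
		unsigned long next;

		pmd = pmd_offset(pud, addr);
		do {
			next = pmd_addr_end(addr, end);
			if (pmd_none_or_clear_bad(pmd))
				continue;
			vunmap_pte_range(pmd, addr, next);
			if (!atomic)
				cond_resched();	/* avoid the soft lockup */
		} while (pmd++, addr = next, addr != end);
	}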
>
> That would work for the unmap, but looking at the GHES driver some more,
> it looks like its call to ioremap_page_range() is already unsafe --
> it may need to allocate a new PTE page with a non-atomic alloc in
> pte_alloc_one_kernel().
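
Yes - on x86 that path bottoms out in roughly this (quoting a 3.13-era
arch/x86/mm/pgtable.c from memory; the exact gfp flags may differ):

	pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
	{
		/* GFP_KERNEL: may sleep - fatal in NMI/atomic context */
		return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
	}

and pte_alloc_kernel() additionally takes init_mm.page_table_lock on
the way there, so the caller can both sleep and self-deadlock.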
>
> Perhaps what's needed here is a pair of ioremap_page_atomic() and
> iounmap_page_atomic() calls? With some prep function to ensure the PTE
> pages (etc.) are preallocated.
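
The prealloc half more or less exists already: alloc_vm_area() reserves
a chunk of vmalloc space, forces its page tables into existence and
hands back the pte pointers. A hypothetical sketch (names invented,
error handling and the TLB-flush details glossed over):

	static struct vm_struct *ghes_area;	/* one-page mapping window */
	static pte_t *ghes_pte;

	static int __init ghes_map_init(void)
	{
		/* process context: the PTE page is allocated here, once */
		ghes_area = alloc_vm_area(PAGE_SIZE, &ghes_pte);
		return ghes_area ? 0 : -ENOMEM;
	}

	/* NMI/atomic context: install a PTE - no allocations, no locks */
	static void __iomem *ghes_map_pfn_atomic(unsigned long pfn)
	{
		unsigned long vaddr = (unsigned long)ghes_area->addr;

		set_pte_at(&init_mm, vaddr, ghes_pte,
			   pfn_pte(pfn, PAGE_KERNEL));
		return (void __iomem *)vaddr;
	}

	static void ghes_unmap_atomic(void)
	{
		unsigned long vaddr = (unsigned long)ghes_area->addr;

		pte_clear(&init_mm, vaddr, ghes_pte);
		/* needs a *local-only* TLB flush here - an IPI-based
		   flush_tlb_kernel_range() is itself NMI-unsafe */
	}
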
Is ghes.c the only problem source here? If so, then a suitable solution
would be to declare that driver hopelessly busted and proceed as if it
didn't exist :(
Just from a quick look, the thing is doing ioremap() from NMI context!
ioremap() has to do a bunch of memory allocations, take spinlocks, etc.
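
For reference, the path I'm looking at (call chain summarized by hand
from a 3.13-era drivers/acpi/apei/ghes.c - double-check the details
before relying on this):

	ghes_notify_nmi()			/* NMI context */
	  ghes_read_estatus()
	    ghes_copy_tofrom_phys()
	      ghes_ioremap_pfn_nmi()
	        ioremap_page_range()		/* may need a new PTE page */
	          pte_alloc_kernel()		/* spinlock + GFP_KERNEL alloc */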