Message-Id: <DEF43337-68A2-4FDF-9B8C-795E017831DE@lca.pw>
Date: Tue, 14 Jan 2020 20:02:31 -0500
From: Qian Cai <cai@....pw>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...nel.org>,
David Hildenbrand <david@...hat.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
pmladek@...e.com, rostedt@...dmis.org, peterz@...radead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -next] mm/hotplug: silence a lockdep splat with printk()
> On Jan 14, 2020, at 6:53 PM, Andrew Morton <akpm@...ux-foundation.org> wrote:
>
>> On Jan 14, 2020, at 4:02 PM, Michal Hocko <mhocko@...nel.org> wrote:
>>>
>>> Yeah, that was a long discussion with a lot of lockdep false positives.
>>> I believe I have made it clear that the console code shouldn't depend on
>>> memory allocation because that is just too fragile. If that is not
>>> possible for some reason then it has to be mentioned in the changelog.
>>> I really do not want us to add kludges to the MM code just because of
>>> printk deficiencies unless that is absolutely inevitable.
>>
>> I don't know how to convince you, but both the random number generator and
>> printk() maintainers agreed to get rid of printk() with zone->lock
>> held, as you can see in the approved commit mentioned in this patch
>> description, because fixing the other places is a game of whack-a-mole. In
>> other words, the patch alone fixes quite a few false positives and potential
>> real deadlocks. Maybe Andrew could please have a look at this directly?
>>
>
> Well, a few things.
>
> The changelog is quite poor. It doesn't describe the problem (console
> drivers allocating memory), nor does it describe the solution
> (deferring the dump_page() until after release of zone->lock).
>
> So I changed it to this:
>
> : Some console drivers can perform memory allocation at inappropriate times,
> : which can result in lockdep warnings (and presumably deadlocks) if printk
> : is called with zone->lock held.
> :
> : By far the best fix is to reeducate those console drivers to not perform
> : these allocations, but this is proving difficult.
… but this is proving difficult because even if we fixed that directly, lockdep
is still able to find an indirect dependency chain, for example [1]:

CPU1: console_owner -> port_lock_key
CPU2: port_lock_key -> (&port->lock)->rlock
CPU3: (&port->lock)->rlock -> zone->lock

which will trigger a splat once the reverse dependency is recorded:

zone->lock -> console_owner

[1] https://lore.kernel.org/linux-mm/1570460350.5576.290.camel@lca.pw/
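To show the problematic pattern concretely (an illustration only, not code from
this patch; "page" and "reason" here are just placeholders):

	/*
	 * Illustration: dump_page()/printk() issued while zone->lock is held.
	 * The console path may then take the console_owner/port locks, and once
	 * lockdep also records a port->lock -> zone->lock dependency (e.g. a
	 * console driver allocating memory), it reports a possible deadlock.
	 */
	spin_lock_irqsave(&zone->lock, flags);
	/* ... scan the pageblock ... */
	dump_page(page, reason);	/* printk() under zone->lock */
	spin_unlock_irqrestore(&zone->lock, flags);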
> :
> : Another, but poorer, approach is to call printk_deferred() while holding
> : zone->lock.  But memory offline also calls dump_page(), whose output has to
> : be deferred until after the lock is released anyway.
> :
> : So change has_unmovable_pages() so that it no longer calls dump_page()
> : itself - instead it passes the page's description (as a string) back to the
> : caller so that in the case of a has_unmovable_pages() failure, the caller
> : can call dump_page() after releasing zone->lock.
> :
> : While at it, remove a similar but unnecessary debug printk() as well.
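In other words, the caller ends up with a pattern roughly like this (a
simplified, untested sketch with placeholder names, not the literal hunks):

	char dump[64];	/* description filled in by has_unmovable_pages() */
	bool unmovable;

	spin_lock_irqsave(&zone->lock, flags);
	unmovable = has_unmovable_pages(zone, page, migratetype, isol_flags, dump);
	if (!unmovable)
		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
	spin_unlock_irqrestore(&zone->lock, flags);

	if (unmovable && (isol_flags & REPORT_FAILURE))
		dump_page(page, dump);	/* dump_page() no longer under zone->lock */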
>
> But I see a couple of other issues.
>
>> @@ -8290,8 +8290,10 @@ bool has_unmovable_pages(struct zone *zo
>> return false;
>> unmovable:
>> WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
>> - if (flags & REPORT_FAILURE)
>> - dump_page(pfn_to_page(pfn + iter), reason);
>> + if (flags & REPORT_FAILURE) {
>> + page = pfn_to_page(pfn + iter);
>
> This statement appears to be unnecessary.
dump_page() in set_migratetype_isolate() needs that “page”.
>
>> + strscpy(dump, reason, 64);
>> + }
>
>
> Also, that whole `reason' thing in has_unmovable_pages() is just there
> to tell us whether it was an "unmovable page" or a "CMA page". This
> doesn't seem terribly useful to me. Also, I expect that the
> dump_page() output will permit the user to determine that it was a CMA
> page anyway. If not, we can change dump_page() to add that info.
>
> So how about we remove that whole `reason' thing and possibly enhance
> dump_page()? The patch then becomes much simpler.
Sounds like a good idea. I’ll send a v2.
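Something along these lines in __dump_page(), presumably (an untested sketch
only; the final form may well differ):

	/*
	 * Untested idea: let dump_page() itself report when the page sits in a
	 * CMA pageblock, so has_unmovable_pages() no longer needs to hand a
	 * reason string back to the caller.
	 */
	pr_warn("%spage dumped because: %s\n",
		is_migrate_cma_page(page) ? "cma " : "", reason);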