Message-ID: <CAPcyv4hcDNeQO3CfnqTRou+6R5gZwyi4pVUMxp1DadAOp7kJGQ@mail.gmail.com>
Date: Fri, 10 Jan 2020 09:39:10 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: David Hildenbrand <david@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
stable <stable@...r.kernel.org>,
Vishal Verma <vishal.l.verma@...el.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Michal Hocko <mhocko@...e.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/memory_hotplug: Fix remove_memory() lockdep splat
On Fri, Jan 10, 2020 at 9:36 AM David Hildenbrand <david@...hat.com> wrote:
>
> On 10.01.20 18:33, Dan Williams wrote:
> > On Fri, Jan 10, 2020 at 9:29 AM David Hildenbrand <david@...hat.com> wrote:
> > [..]
> >>> So then the comment is actively misleading for that case. I would
> >>> expect an explicit _unlocked path for that case with a comment about
> >>> why it's special. Is there already a comment to that effect somewhere?
> >>>
> >>
> >> __add_memory() - the locked variant - is called from the same ACPI location
> >> either locked or unlocked. I added a comment back then after a long
> >> discussion with Michal:
> >>
> >> drivers/acpi/scan.c:
> >> /*
> >> * Although we call __add_memory() that is documented to require the
> >> * device_hotplug_lock, it is not necessary here because this is an
> >> * early code when userspace or any other code path cannot trigger
> >> * hotplug/hotunplug operations.
> >> */
> >>
> >>
> >> It really is a special case, though.
> >
> > That's a large comment block when we could have just taken the lock.
> > There's probably many other code paths in the kernel where some locks
> > are not necessary before userspace is up, but the code takes the lock
> > anyway to minimize the code maintenance burden. Is there really a
> > compelling reason to be clever here?
>
> It was a lengthy discussion back then and I was sharing your opinion. I
> even had a patch ready to enforce that we are holding the lock (that's
> how I identified that specific case in the first place).
Ok, apologies, I missed that opportunity to back you up. Michal, is
this still worth it?