Message-Id: <6ef554a6-313d-2b17-cee0-14078ed225f6@linux.ibm.com>
Date: Thu, 26 Mar 2020 15:26:22 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, mpe@...erman.id.au,
linuxppc-dev@...ts.ozlabs.org, Baoquan He <bhe@...hat.com>,
Sachin Sant <sachinp@...ux.vnet.ibm.com>
Subject: Re: [PATCH] mm/sparse: Fix kernel crash with pfn_section_valid check
On 3/26/20 3:10 PM, Michal Hocko wrote:
> On Wed 25-03-20 08:49:14, Aneesh Kumar K.V wrote:
>> Fixes the below crash
>>
>> BUG: Kernel NULL pointer dereference on read at 0x00000000
>> Faulting instruction address: 0xc000000000c3447c
>> Oops: Kernel access of bad area, sig: 11 [#1]
>> LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
>> CPU: 11 PID: 7519 Comm: lt-ndctl Not tainted 5.6.0-rc7-autotest #1
>> ...
>> NIP [c000000000c3447c] vmemmap_populated+0x98/0xc0
>> LR [c000000000088354] vmemmap_free+0x144/0x320
>> Call Trace:
>> section_deactivate+0x220/0x240
>
> It would be great to match this to the specific source code.
The crash is due to a NULL pointer dereference at
test_bit(idx, ms->usage->subsection_map), reached because ms->usage == NULL;
this is explained in the later part of the commit message.
>
>> __remove_pages+0x118/0x170
>> arch_remove_memory+0x3c/0x150
>> memunmap_pages+0x1cc/0x2f0
>> devm_action_release+0x30/0x50
>> release_nodes+0x2f8/0x3e0
>> device_release_driver_internal+0x168/0x270
>> unbind_store+0x130/0x170
>> drv_attr_store+0x44/0x60
>> sysfs_kf_write+0x68/0x80
>> kernfs_fop_write+0x100/0x290
>> __vfs_write+0x3c/0x70
>> vfs_write+0xcc/0x240
>> ksys_write+0x7c/0x140
>> system_call+0x5c/0x68
>>
>> With commit d41e2f3bd546 ("mm/hotplug: fix hot remove failure in SPARSEMEM|!VMEMMAP case")
>> section_mem_map is set to NULL after depopulate_section_memmap(). This
>> was done so that pfn_to_page() can work correctly with kernel configs that disable
>> SPARSEMEM_VMEMMAP. With that config pfn_to_page() does
>>
>> __section_mem_map_addr(__sec) + __pfn;
>> where
>>
>> static inline struct page *__section_mem_map_addr(struct mem_section *section)
>> {
>> unsigned long map = section->section_mem_map;
>> map &= SECTION_MAP_MASK;
>> return (struct page *)map;
>> }
>>
>> Now with SPARSEMEM_VMEMMAP enabled, mem_section->usage->subsection_map is used to
>> check pfn validity (pfn_valid()). Since section_deactivate() releases
>> mem_section->usage when a section is fully deactivated, a pfn_valid() check after
>> a subsection deactivation causes a kernel crash.
>>
>> static inline int pfn_valid(unsigned long pfn)
>> {
>> ...
>> return early_section(ms) || pfn_section_valid(ms, pfn);
>> }
>>
>> where
>>
>> static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
>> {
>
>> int idx = subsection_map_index(pfn);
>>
>> return test_bit(idx, ms->usage->subsection_map);
>> }
>>
>> Avoid this by clearing SECTION_HAS_MEM_MAP when mem_section->usage is freed.
>
> I am sorry, I haven't noticed that during the review of the commit
> mentioned above. This is all subtle as hell, I have to say.
>
> Why do we have to free usage before deactivating the section memmap? Now that
> we have a late section_mem_map reset shouldn't we tear down the usage in
> the same branch?
>
We still need to make the section invalid before we call into
depopulate_section_memmap(), because architectures like powerpc can share
a vmemmap area across sections (a 16MB mapping of the vmemmap area) and we
use vmemmap_populated() to make that decision.
>> Fixes: d41e2f3bd546 ("mm/hotplug: fix hot remove failure in SPARSEMEM|!VMEMMAP case")
>> Cc: Baoquan He <bhe@...hat.com>
>> Reported-by: Sachin Sant <sachinp@...ux.vnet.ibm.com>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@...ux.ibm.com>
>> ---
>> mm/sparse.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/sparse.c b/mm/sparse.c
>> index aadb7298dcef..3012d1f3771a 100644
>> --- a/mm/sparse.c
>> +++ b/mm/sparse.c
>> @@ -781,6 +781,8 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
>> ms->usage = NULL;
>> }
>> memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
>> + /* Mark the section invalid */
>> + ms->section_mem_map &= ~SECTION_HAS_MEM_MAP;
>
> Btw. this comment is not really helping at all.
That clears SECTION_HAS_MEM_MAP, marking the section invalid so that
static inline int valid_section(struct mem_section *section)
{
return (section && (section->section_mem_map & SECTION_HAS_MEM_MAP));
}
returns false.
> /*
> * section->usage is gone and VMEMMAP's pfn_valid depends
> * on it (see pfn_section_valid)
> */
>> }
>>
>> if (section_is_early && memmap)
>> --
>> 2.25.1
>>
>