Message-ID: <4E4CD9A9.7090606@hitachi.com>
Date: Thu, 18 Aug 2011 18:21:45 +0900
From: HAYASAKA Mitsuo <mitsuo.hayasaka.hu@...achi.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Namhyung Kim <namhyung@...il.com>,
David Rientjes <rientjes@...gle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
yrl.pp-manager.tt@...achi.com
Subject: Re: [PATCH] avoid null pointer access in vm_struct
Andrew Morton wrote:
> On Wed, 17 Aug 2011 22:28:48 +0900
> Mitsuo Hayasaka <mitsuo.hayasaka.hu@...achi.com> wrote:
>
>> /proc/vmallocinfo shows information about the vmalloc allocations in
>> vmlist, which is a linked list of vm_struct. It may, however, access the
>> pages field of a vm_struct whose pages have not been allocated yet, which
>> results in a null pointer dereference and a kernel panic.
>>
>> Why this happens:
>> In __vmalloc_area_node(), for example, the nr_pages field of vm_struct is
>> set to the expected number of pages before the pages themselves are
>> allocated. If /proc/vmallocinfo is read during that window, show_numa_info()
>> walks the pages field according to nr_pages, and a null pointer dereference
>> occurs.
>>
>> Patch:
>> This patch makes show_numa_info() skip the pages field when the pages have
>> not been allocated yet, which avoids the null pointer dereference.
>
> Do we have a similar race when running __vunmap() in parallel with
> show_numa_info()?
>
No. This race does not occur with __vunmap(), because the vm_struct is
released only after it has been removed from vmlist.
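To make the window on the allocation side concrete, here is a simplified
sketch of the two paths involved (based on mm/vmalloc.c, but abridged;
error handling and locking are omitted):

	/* Writer: __vmalloc_area_node(). The vm_struct is already visible
	 * on vmlist at this point, via __get_vm_area_node(). */
	area->nr_pages = nr_pages;	/* count becomes visible first      */
	area->pages = pages;		/* entries of pages[] still NULL    */
	for (i = 0; i < area->nr_pages; i++)
		area->pages[i] = alloc_page(gfp_mask);	/* filled one by one */

	/* Reader: show_numa_info(), reached from a concurrent read of
	 * /proc/vmallocinfo walking vmlist. */
	for (nr = 0; nr < v->nr_pages; nr++)
		counters[page_to_nid(v->pages[nr])]++;	/* can hit a NULL page */

A reader can therefore observe a non-zero nr_pages while pages is still NULL
or only partially filled, which is the window the patch tries to close.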
>> index 7ef0903..e2ec5b0 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -2472,13 +2472,16 @@ static void show_numa_info(struct seq_file *m, struct vm_struct *v)
>> if (NUMA_BUILD) {
>> unsigned int nr, *counters = m->private;
>>
>> - if (!counters)
>> + if (!counters || !v->nr_pages || !v->pages)
>> return;
>>
>> memset(counters, 0, nr_node_ids * sizeof(unsigned int));
>>
>> - for (nr = 0; nr < v->nr_pages; nr++)
>> + for (nr = 0; nr < v->nr_pages; nr++) {
>> + if (!v->pages[nr])
>> + break;
>> counters[page_to_nid(v->pages[nr])]++;
>> + }
>>
>> for_each_node_state(nr, N_HIGH_MEMORY)
>> if (counters[nr])
>
> I think this has memory ordering issues: it requires that this CPU see
> the modification to ->nr_pages and ->pages in the same order as the CPU
> which is writing ->nr_pages, ->pages and ->pages[x]. Perhaps fixable
> by taking vmlist_lock appropriately.
>
> I suspect that the real bug is that __vmalloc_area_node() and its
> caller made the new vmap_area globally visible before it was fully
> initialised. If we were to fix that, the /proc/vmallocinfo read would
> not encounter this vm_struct at all.
>
Agreed.
I will revise __vmalloc_area_node() and resubmit the patch.
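For the revised patch, one possible direction (only a rough, untested sketch
of the idea, not the final patch) is to fill pages[] first and publish
nr_pages last, or, more completely, to delay adding the vm_struct to vmlist
until it is fully initialised as you suggest:

	/* Sketch only: in __vmalloc_area_node(), fill pages[] first ... */
	for (i = 0; i < nr_pages; i++) {
		struct page *page = alloc_page(gfp_mask);
		if (!page)
			goto fail;
		area->pages[i] = page;
	}
	smp_wmb();			/* order the pages[] stores ...        */
	area->nr_pages = nr_pages;	/* ... before the count becomes visible */

A matching read barrier (or taking vmlist_lock) would still be needed on the
show_numa_info() side, as you point out, so delaying the vmlist insertion is
probably the cleaner fix.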
Thanks.