Message-ID: <20150928093551.GB2115@linux-rxt1.site>
Date: Mon, 28 Sep 2015 17:35:51 +0800
From: joeyli <jlee@...e.com>
To: Baoquan He <bhe@...hat.com>
Cc: "Lee, Chun-Yi" <joeyli.kernel@...il.com>,
Vivek Goyal <vgoyal@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
Stephen Rothwell <sfr@...b.auug.org.au>,
Viresh Kumar <viresh.kumar@...aro.org>,
Takashi Iwai <tiwai@...e.de>,
Jiang Liu <jiang.liu@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] kexec: fix out of the ELF headers buffer issue in
syscall kexec_file_load()
Hi,
On Mon, Sep 28, 2015 at 03:16:41PM +0800, Baoquan He wrote:
> Hi Chun-Yi,
>
> On 09/28/15 at 02:41pm, Lee, Chun-Yi wrote:
> > On big machines, the number of CPUs can consume nearly the whole ELF
> > headers buffer, which is page aligned (4096, 8192, ...). Then a page fault
> > error happens randomly.
> >
> > This patch modifies fill_up_crash_elf_data() to use walk_system_ram_res()
> > instead of walk_system_ram_range() to count the maximum number of crash
> > memory ranges. The reason is that walk_system_ram_range() filters out small
> > memory regions that reside in the same page, but walk_system_ram_res() does
> > not.
> >
> > The original page fault issue sometimes happened on big machines when
> > preparing the ELF headers:
> >
> > [ 305.291522] BUG: unable to handle kernel paging request at ffffc90613fc9000
> > [ 305.299621] IP: [<ffffffff8103d645>] prepare_elf64_ram_headers_callback+0x165/0x260
> > [ 305.308300] PGD e000032067 PUD 6dcbec54067 PMD 9dc9bdeb067 PTE 0
> > [ 305.315393] Oops: 0002 [#1] SMP
> > [...snip]
> > [ 305.420953] task: ffff8e1c01ced600 ti: ffff8e1c03ec2000 task.ti: ffff8e1c03ec2000
> > [ 305.429292] RIP: 0010:[<ffffffff8103d645>] [<ffffffff8103d645>] prepare_elf64_ram_headers_callback+0x165/0x260
> > [...snip]
> >
> > After tracing prepare_elf64_headers() and prepare_elf64_ram_headers_callback():
> > that code uses walk_system_ram_res() to fill crash memory region information
> > into the program headers, so it includes the small memory regions that fit
> > within a single page. But the kernel was using walk_system_ram_range() in
> > fill_up_crash_elf_data() to count the number of crash memory regions, and
> > that function filters out the small regions, so the buffer is sized for fewer
> > regions than are later written; a simplified sketch of the two callbacks
> > follows.
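> >
> > A minimal sketch of the mismatch (abbreviated from arch/x86/kernel/crash.c
> > as of this patch; treat it as an illustration rather than the exact code):
> >
> >   /* Counting side (before this patch): walk_system_ram_range() calls this
> >    * once per whole-page range, so a region that rounds down to zero pages
> >    * is never counted. */
> >   static int get_nr_ram_ranges_callback(unsigned long start_pfn,
> >                                         unsigned long nr_pfn, void *arg)
> >   {
> >           int *nr_ranges = arg;
> >
> >           (*nr_ranges)++;
> >           return 0;
> >   }
> >
> >   /* Filling side: walk_system_ram_res() calls this once per resource,
> >    * including sub-page regions, and the callback writes program headers
> >    * into the buffer that was sized from the count above. */
> >   static int prepare_elf64_ram_headers_callback(u64 start, u64 end, void *arg)
> >   {
> >           /* ... fills in an Elf64_Phdr for [start, end] ... */
> >           return 0;
> >   }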
> >
> > I printed those small memory regions, for example:
> >
> > kexec: Get nr_ram ranges. vaddr=0xffff880077592258 paddr=0x77592258, sz=0xdc0
> >
> > Based on the logic of walk_system_ram_range(), this memory region will be
> > filtered out:
> >
> > pfn = (0x77592258 + 0x1000 - 1) >> 12 = 0x77593
> > end_pfn = (0x77592258 + 0xdc0 - 1 + 1) >> 12 = 0x77593
> > end_pfn - pfn = 0x77593 - 0x77593 = 0 <=== if (end_pfn > pfn) [FAIL]
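> >
> > Roughly, the filtering check in walk_system_ram_range() (kernel/resource.c)
> > looks like the sketch below, assuming the x86 PAGE_SHIFT of 12; the exact
> > kernel code differs slightly:
> >
> >   /* round the resource start up and its end down to page boundaries */
> >   pfn = (res.start + PAGE_SIZE - 1) >> PAGE_SHIFT;
> >   end_pfn = (res.end + 1) >> PAGE_SHIFT;
> >   /* a region smaller than a page collapses to zero pages, so the
> >    * counting callback is never invoked for it */
> >   if (end_pfn > pfn)
> >           ret = (*func)(pfn, end_pfn - pfn, arg);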
> >
> > So the max_nr_ranges counted by the kernel doesn't include small memory
> > regions, which causes the page fault in the later code path that prepares
> > the ELF headers.
> >
> > This issue was hidden on small machines that don't have many CPUs, because
> > the free space in the ELF headers buffer can absorb the extra small memory
> > regions. But when the machine has more CPUs, or the number of memory regions
> > nearly consumes the whole page-aligned buffer (e.g. 4096, 8192, ...), then
> > the issue happens randomly.
>
> It's a good finding and the fix sounds reasonable. I didn't get why too many
> CPUs would cause this bug. On your big machine, can you check which regions
> they are and what they are used for? I guess you mean the crash_notes region,
> but I'm not very sure.
>
In prepare_elf64_headers(), the logic that allocates the ELF header buffer is:
/* extra phdr for vmcoreinfo elf note */
nr_phdr = nr_cpus + 1;
nr_phdr += ced->max_nr_ranges;
/*
* kexec-tools creates an extra PT_LOAD phdr for kernel text mapping
* area on x86_64 (ffffffff80000000 - ffffffffa0000000).
* I think this is required by tools like gdb. So same physical
* memory will be mapped in two elf headers. One will contain kernel
* text virtual addresses and other will have __va(physical) addresses.
*/
nr_phdr++;
elf_sz = sizeof(Elf64_Ehdr) + nr_phdr * sizeof(Elf64_Phdr);
elf_sz = ALIGN(elf_sz, ELF_CORE_HEADER_ALIGN);
So the whole buffer is consumed as follows (the per-cpu and vmcoreinfo entries
are PT_NOTE phdrs; the kernel text and memory region entries are PT_LOAD phdrs):
0                                                                                                                      4096
+------------+--------------------+--------------------+---------------------------+---------------------------+---------+
| ELF header | each cpu PT_NOTE...| vmcoreinfo PT_NOTE | kernel text region PT_LOAD| PT_LOAD for memory regions| free    |
| (64 bytes) | (n * 56 bytes)     | (56 bytes)         | (56 bytes)                | (n * 56 bytes)            |         |
+------------+--------------------+--------------------+---------------------------+---------------------------+---------+
When the free space can cover the number of small memory regions, i.e. the
difference between what walk_system_ram_range() and walk_system_ram_res()
count, this issue does not trigger.
But when the CPU number grows so that the counted phdrs consume nearly the
whole 4096-byte buffer, the issue happens; a worked example follows.
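
As a worked example, assuming the usual sizes sizeof(Elf64_Ehdr) = 64,
sizeof(Elf64_Phdr) = 56 and ELF_CORE_HEADER_ALIGN = 4096:

  nr_phdr = 71: elf_sz = ALIGN(64 + 71 * 56, 4096) = ALIGN(4040, 4096) = 4096, 56 bytes free
  nr_phdr = 72: elf_sz = ALIGN(64 + 72 * 56, 4096) = ALIGN(4096, 4096) = 4096,  0 bytes free

So with 72 counted phdrs the buffer is filled exactly, and any small region
that walk_system_ram_range() dropped from the count but walk_system_ram_res()
still hands to prepare_elf64_ram_headers_callback() is written past the end of
the allocation, hence the random page faults.
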
Thanks a lot!
Joey Lee
> >
> > Signed-off-by: Lee, Chun-Yi <jlee@...e.com>
> > ---
> > arch/x86/kernel/crash.c | 5 ++---
> > 1 file changed, 2 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
> > index e068d66..ad273b3d 100644
> > --- a/arch/x86/kernel/crash.c
> > +++ b/arch/x86/kernel/crash.c
> > @@ -185,8 +185,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
> > }
> >
> > #ifdef CONFIG_KEXEC_FILE
> > -static int get_nr_ram_ranges_callback(unsigned long start_pfn,
> > - unsigned long nr_pfn, void *arg)
> > +static int get_nr_ram_ranges_callback(u64 start, u64 end, void *arg)
> > {
> > int *nr_ranges = arg;
> >
> > @@ -214,7 +213,7 @@ static void fill_up_crash_elf_data(struct crash_elf_data *ced,
> >
> > ced->image = image;
> >
> > - walk_system_ram_range(0, -1, &nr_ranges,
> > + walk_system_ram_res(0, -1, &nr_ranges,
> > get_nr_ram_ranges_callback);
> >
> > ced->max_nr_ranges = nr_ranges;
> > --
> > 2.1.4
> >