Message-ID: <m1ocdshjfh.fsf@fess.ebiederm.org>
Date: Tue, 27 Jul 2010 11:24:34 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Milton Miller <miltonm@....com>
Cc: WANG Cong <amwang@...hat.com>, Neil Horman <nhorman@...driver.com>,
Neil Horman <nhorman@...hat.com>,
huang ying <huang.ying.caritas@...il.com>,
<linux-kernel@...r.kernel.org>, <linuxppc-dev@...ts.ozlabs.org>,
	<kexec@...ts.infradead.org>
Subject: Re: [Patch v2] kexec: increase max of kexec segments and use dynamic allocation

Milton Miller <miltonm@....com> writes:
> [ Added kexec at lists.infradead.org and linuxppc-dev@...ts.ozlabs.org ]
>
>>
>> Currently KEXEC_SEGMENT_MAX is only 16, which is too small for machines
>> with many memory ranges. When hibernating on a machine with disjoint
>> memory, we need one segment for each memory region. Increase this hard
>> limit to 16K, which is reasonably large.
>>
>> And change ->segment from a static array to dynamically allocated memory.
>>
>> Cc: Neil Horman <nhorman@...hat.com>
>> Cc: huang ying <huang.ying.caritas@...il.com>
>> Cc: Eric W. Biederman <ebiederm@...ssion.com>
>> Signed-off-by: WANG Cong <amwang@...hat.com>
>>
>> ---
>> diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
>> index ed31a29..f115585 100644
>> --- a/arch/powerpc/kernel/machine_kexec_64.c
>> +++ b/arch/powerpc/kernel/machine_kexec_64.c
>> @@ -131,10 +131,7 @@ static void copy_segments(unsigned long ind)
>> void kexec_copy_flush(struct kimage *image)
>> {
>> long i, nr_segments = image->nr_segments;
>> - struct kexec_segment ranges[KEXEC_SEGMENT_MAX];
>> -
>> - /* save the ranges on the stack to efficiently flush the icache */
>> - memcpy(ranges, image->segment, sizeof(ranges));
>> + struct kexec_segment range;
>
> I'm glad you found our copy on the stack and removed the stack overflow
> that comes with this bump, but ...
>
>>
>> /*
>> * After this call we may not use anything allocated in dynamic
>> @@ -148,9 +145,11 @@ void kexec_copy_flush(struct kimage *image)
>> * we need to clear the icache for all dest pages sometime,
>> * including ones that were in place on the original copy
>> */
>> - for (i = 0; i < nr_segments; i++)
>> - flush_icache_range((unsigned long)__va(ranges[i].mem),
>> - (unsigned long)__va(ranges[i].mem + ranges[i].memsz));
>> + for (i = 0; i < nr_segments; i++) {
>> + memcpy(&range, &image->segment[i], sizeof(range));
>> + flush_icache_range((unsigned long)__va(range.mem),
>> + (unsigned long)__va(range.mem + range.memsz));
>> + }
>> }
>
> This is executed after the copy, so as it says,
> "we may not use anything allocated in dynamic memory".
>
> We could allocate control pages to copy the segment list into.
> Actually ppc64 doesn't use the existing control page, but that
> is only 4kB today.
>
> We need the list to icache-flush all the pages in all the segments,
> as the indirect list doesn't include pages that were allocated at
> their destination.
An interesting point.
> Or maybe the icache flush should be done in the generic code
> like it does for crash load segments?
Please. I don't quite understand the icache flush requirement,
but we really should not be looking at the segments in the
architecture-specific code.
Ideally we would only keep the segment information around for
the duration of the kexec_load syscall and not have it when
it comes time to start the second kernel.
I am puzzled. We should be completely replacing the page tables, so
can't we just do a global flush? Perhaps I am being naive about what
is required for a ppc flush.
Eric