Message-ID: <4C5A22B0.7050207@redhat.com>
Date:	Thu, 05 Aug 2010 10:32:16 +0800
From:	Cong Wang <amwang@...hat.com>
To:	Milton Miller <miltonm@....com>
CC:	Neil Horman <nhorman@...driver.com>,
	Neil Horman <nhorman@...hat.com>,
	huang ying <huang.ying.caritas@...il.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
	kexec@...ts.infradead.org
Subject: Re: [Patch v2] kexec: increase max of kexec segments and use dynamic
 allocation

(Ping Milton...)

On 07/29/10 14:42, Cong Wang wrote:
> On 07/27/10 18:00, Milton Miller wrote:
>> [ Added kexec at lists.infradead.org and linuxppc-dev@...ts.ozlabs.org ]
>>
>>>
>>> Currently KEXEC_SEGMENT_MAX is only 16, which is too small for machines
>>> with many memory ranges. When hibernating on a machine with disjoint
>>> memory we need one segment for each memory region. Increase this hard
>>> limit to 16K, which is reasonably large.
>>>
>>> And change ->segment from a static array to dynamically allocated
>>> memory (see the sketch after the first quoted hunk below).
>>>
>>> Cc: Neil Horman <nhorman@...hat.com>
>>> Cc: huang ying <huang.ying.caritas@...il.com>
>>> Cc: Eric W. Biederman <ebiederm@...ssion.com>
>>> Signed-off-by: WANG Cong <amwang@...hat.com>
>>>
>>> ---
>>> diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
>>> index ed31a29..f115585 100644
>>> --- a/arch/powerpc/kernel/machine_kexec_64.c
>>> +++ b/arch/powerpc/kernel/machine_kexec_64.c
>>> @@ -131,10 +131,7 @@ static void copy_segments(unsigned long ind)
>>> void kexec_copy_flush(struct kimage *image)
>>> {
>>> long i, nr_segments = image->nr_segments;
>>> - struct kexec_segment ranges[KEXEC_SEGMENT_MAX];
>>> -
>>> - /* save the ranges on the stack to efficiently flush the icache */
>>> - memcpy(ranges, image->segment, sizeof(ranges));
>>> + struct kexec_segment range;
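
For reference, a minimal sketch of the ->segment conversion the changelog
describes; the corresponding hunks are not quoted in this thread, so the
allocation site, the choice of vmalloc(), and the error handling below are
assumptions:

	/* include/linux/kexec.h, struct kimage (sketch of the unquoted hunk) */
	-	struct kexec_segment segment[KEXEC_SEGMENT_MAX];
	+	struct kexec_segment *segment;

	/* kernel/kexec.c, at image allocation time (sketch).  vmalloc() is
	 * assumed here because at the new 16K limit the list can reach
	 * 16384 * 32 bytes = 512 KiB, more than a single kmalloc() should
	 * be asked to provide. */
	image->segment = vmalloc(nr_segments * sizeof(*image->segment));
	if (!image->segment)
		return -ENOMEM;
	/* ...with a matching vfree(image->segment) in kimage_free() (assumed) */
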
>>
>> I'm glad you found our copy on the stack and removed the stack overflow
>> that comes with this bump, but ...
>>
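
For scale: struct kexec_segment has four word-sized fields, i.e. 32 bytes
on 64-bit, so the old on-stack copy was 16 * 32 = 512 bytes, while the new
16K limit would have pushed it to 16384 * 32 = 512 KiB, far beyond the
16 KiB kernel stack on ppc64. A quick userspace check (the struct layout
below approximates include/linux/kexec.h and is an assumption):

	#include <stdio.h>

	/* approximation of the kernel's struct kexec_segment on 64-bit */
	struct kexec_segment {
		void *buf;              /* void __user * in the kernel */
		unsigned long bufsz;    /* size_t in the kernel */
		unsigned long mem;
		unsigned long memsz;    /* size_t in the kernel */
	};

	int main(void)
	{
		/* old limit: what the removed on-stack array cost */
		printf("16 segments:    %zu bytes\n",
		       16 * sizeof(struct kexec_segment));
		/* new limit: what it would have cost after the bump */
		printf("16384 segments: %zu bytes\n",
		       16384 * sizeof(struct kexec_segment));
		return 0;
	}
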
>>>
>>> /*
>>> * After this call we may not use anything allocated in dynamic
>>> @@ -148,9 +145,11 @@ void kexec_copy_flush(struct kimage *image)
>>> * we need to clear the icache for all dest pages sometime,
>>> * including ones that were in place on the original copy
>>> */
>>> - for (i = 0; i < nr_segments; i++)
>>> - flush_icache_range((unsigned long)__va(ranges[i].mem),
>>> - (unsigned long)__va(ranges[i].mem + ranges[i].memsz));
>>> + for (i = 0; i < nr_segments; i++) {
>>> + memcpy(&range, &image->segment[i], sizeof(range));
>>> + flush_icache_range((unsigned long)__va(range.mem),
>>> + (unsigned long)__va(range.mem + range.memsz));
>>> + }
>>> }
>>
>> This is executed after the copy, so as it says,
>> "we may not use anything allocated in dynamic memory".
>>
>> We could allocate control pages to copy the segment list into.
>> Actually ppc64 doesn't use the existing control page, but that
>> is only 4kB today.
>>
>> We need the list to icache-flush all the pages in all the segments,
>> as the indirect list doesn't include pages that were allocated at
>> their destination.
>>
>> Or maybe the icache flush should be done in the generic code
>> like it does for crash load segments?
>>
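
A minimal sketch of the control-pages idea above. kimage_alloc_control_pages()
and page_address() are existing kernel interfaces, but using them to stash
the segment list this way, and where the call would live (at load time,
before dynamic memory becomes off-limits), are assumptions:

	#include <linux/kexec.h>
	#include <linux/mm.h>
	#include <linux/string.h>

	/* Reserve control pages for the whole segment list and copy it
	 * there while dynamic memory is still usable; kexec_copy_flush()
	 * could then walk this copy instead of image->segment. */
	static struct kexec_segment *save_segment_list(struct kimage *image)
	{
		struct kexec_segment *copy;
		struct page *pages;

		pages = kimage_alloc_control_pages(image,
				get_order(image->nr_segments * sizeof(*copy)));
		if (!pages)
			return NULL;

		copy = page_address(pages);
		memcpy(copy, image->segment,
		       image->nr_segments * sizeof(*copy));
		return copy;
	}
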
>
> I don't get the point here; according to the comments,
> the list is copied onto the stack for efficiency.
>


-- 
The opposite of love is not hate, it's indifference.
  - Elie Wiesel