Message-ID: <CAOJsxLG+o7NwuAfamPzsPJC6K_TRAO_J_W=Nn8yj9bJiqxp=Xg@mail.gmail.com>
Date:	Mon, 3 Sep 2012 09:26:46 +0300
From:	Pekka Enberg <penberg@...nel.org>
To:	Yinghai Lu <yinghai@...nel.org>
Cc:	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
	"H. Peter Anvin" <hpa@...or.com>, Jacob Shin <jacob.shin@....com>,
	Tejun Heo <tj@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill()

On Mon, Sep 3, 2012 at 9:17 AM, Yinghai Lu <yinghai@...nel.org> wrote:
> On Sun, Sep 2, 2012 at 10:50 PM, Pekka Enberg <penberg@...nel.org> wrote:
>> On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@...nel.org> wrote:
>>> This one intends to fix a bug: when an EFI boot has too many memmap
>>> entries, the memblock memory array or reserved array needs to be
>>> doubled.
>>
>> Okay, why do we need to do that?
>
> memblock's initial memory array only has 128 entries, and some EFI
> systems have more memmap entries than that.
>
> So memblock_x86_fill() needs to double that array.
>
> And efi_reserve_boot_services() can make things worse, i.e. it needs
> even more entries in memblock.memory.regions.

Aah. Care to put that information in the changelog?
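
For anyone following along: memblock starts out with fixed-size static
arrays (INIT_MEMBLOCK_REGIONS, 128 entries in mm/memblock.c) and grows
them on demand via memblock_double_array() once that is possible. Below
is a rough userspace model of that behaviour, just to illustrate the
shape of the problem; it is not the kernel code and the names are
simplified.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for struct memblock_region: just a base and a size. */
struct region {
	unsigned long long base;
	unsigned long long size;
};

#define INIT_REGIONS 128	/* plays the role of INIT_MEMBLOCK_REGIONS */

static struct region init_regions[INIT_REGIONS];
static struct region *regions = init_regions;
static unsigned long capacity = INIT_REGIONS;
static unsigned long count;

/* Rough analogue of memblock_double_array(): grow the array when full. */
static int double_array(void)
{
	struct region *bigger = calloc(capacity * 2, sizeof(*bigger));

	if (!bigger)
		return -1;
	memcpy(bigger, regions, count * sizeof(*regions));
	if (regions != init_regions)
		free(regions);
	regions = bigger;
	capacity *= 2;
	return 0;
}

/* Rough analogue of memblock_add(): append one region, growing if needed. */
static int add_region(unsigned long long base, unsigned long long size)
{
	if (count == capacity && double_array() < 0)
		return -1;
	regions[count].base = base;
	regions[count].size = size;
	count++;
	return 0;
}

int main(void)
{
	/* A memory map with more than 128 entries forces a doubling. */
	for (int i = 0; i < 200; i++)
		add_region(i * 0x100000ULL, 0x100000ULL);

	printf("regions=%lu capacity=%lu\n", count, capacity);
	return 0;
}

The point being that a large EFI memory map, plus what
efi_reserve_boot_services() adds on top, can overflow the initial 128
slots and force the doubling path.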

>>> +void  __init early_init_mem_mapping(void)
>>> +{
>>> +       unsigned long tables;
>>> +       phys_addr_t base;
>>> +       unsigned long start = 0, end = ISA_END_ADDRESS;
>>> +
>>> +       probe_page_size_mask();
>>> +
>>> +       if (max_pfn_mapped)
>>> +               return;
>>
>> I find this confusing - what is this protecting against? Why would
>> 'max_pfn_mapped' already be set when someone calls early_init_mem_mapping()?
>
> For 32-bit, max_pfn_mapped is already non-zero here; it is set in head_32.S.

OK, that's why my grep missed it. A comment would be nice.
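
Something along these lines would already help (the wording is only a
suggestion):

	/*
	 * On 32-bit, head_32.S has already set up an initial mapping and
	 * max_pfn_mapped is non-zero by the time we get here, so the
	 * early ISA-range mapping below is not needed.
	 */
	if (max_pfn_mapped)
		return;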

>> Side note: we have multiple "pfn_mapped" globals and it's not at all
>> obvious to me what the semantics for them are. Maybe adding a comment
>> or two in arch/x86/include/asm/page_types.h would help.
>
> Move the comments from arch/x86/kernel/setup.c to that header file?

Yup, or move the globals together with the comment to arch/x86/mm/init.c.

That said, max_pfn_high_mapped really ought to be kept together with
the other "pfn_mapped" globals and the comment should be updated.
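
For reference, the block in arch/x86/kernel/setup.c currently looks
roughly like this (quoting from memory, so please double-check):

/*
 * max_low_pfn_mapped: highest direct mapped pfn under 4GB
 * max_pfn_mapped:     highest direct mapped pfn over 4GB
 */
unsigned long max_low_pfn_mapped;
unsigned long max_pfn_mapped;

If max_pfn_high_mapped ends up next to these, a matching line in that
comment describing what it covers would keep the semantics in one place.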