Date:	Mon, 22 Oct 2012 11:56:39 -0700
From:	Yinghai Lu <yinghai@...nel.org>
To:	Konrad Rzeszutek Wilk <konrad@...nel.org>
Cc:	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
	"H. Peter Anvin" <hpa@...or.com>, Jacob Shin <jacob.shin@....com>,
	Tejun Heo <tj@...nel.org>,
	Stefano Stabellini <stefano.stabellini@...citrix.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 06/19] x86, mm: setup page table in top-down

On Mon, Oct 22, 2012 at 8:06 AM, Konrad Rzeszutek Wilk
<konrad@...nel.org> wrote:
> On Thu, Oct 18, 2012 at 01:50:17PM -0700, Yinghai Lu wrote:
>
> You might want to mention how 'memblock' searches for regions.
> Presumably it is from top to bottom.

That is not related. Will add an explanation about min_pfn_mapped.

>
>
> And also explain the granularity of the size you are mapping
> _after_ you are done with the PMD_SIZE.

Will add the reason for step_size.
>> +     /* step_size need to be small so pgt_buf from BRK could cover it */
>> +     step_size = PMD_SIZE;
>> +     max_pfn_mapped = 0; /* will get exact value next */
>
> Next in init_range_memory_mapping? Might want to spell that out.

In the loop, via:
   init_range_memory_mapping ==> init_memory_mapping ==> add_pfn_range_mapped
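
Roughly, the last step is where max_pfn_mapped gets its exact value back
(sketch only; the real function does a bit more bookkeeping than this):

	/* called from init_memory_mapping() for every range that gets mapped */
	static void add_pfn_range_mapped(unsigned long start_pfn,
					 unsigned long end_pfn)
	{
		/* raise the high-water mark of mapped pfns */
		max_pfn_mapped = max(max_pfn_mapped, end_pfn);
	}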

>
>> +     min_pfn_mapped = real_end >> PAGE_SHIFT;
>> +     last_start = start = real_end;
>
> Might want to add a comment here saying:
>
> "We are looping from the top to the bottom."
>
>> +     while (last_start > ISA_END_ADDRESS) {
>> +             if (last_start > step_size) {
>> +                     start = round_down(last_start - 1, step_size);
>> +                     if (start < ISA_END_ADDRESS)
>> +                             start = ISA_END_ADDRESS;
>> +             } else
>> +                     start = ISA_END_ADDRESS;
>> +             new_mapped_ram_size = init_range_memory_mapping(start,
>> +                                                     last_start);
>> +             last_start = start;
>> +             min_pfn_mapped = last_start >> PAGE_SHIFT;
>> +             if (new_mapped_ram_size > mapped_ram_size)
>> +                     step_size <<= 5;
>
> Should '5' have a #define value?

Yes.
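
Something like this (the name is only a suggestion):

	/* factor by which step_size grows each round: 1 << 5 == 32x */
	#define STEP_SIZE_SHIFT 5

	...
		if (new_mapped_ram_size > mapped_ram_size)
			step_size <<= STEP_SIZE_SHIFT;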

>
>> +             mapped_ram_size += new_mapped_ram_size;
>> +     }
>
> It looks like the step_size would keep on increasing on every loop.
> First it would be 2MB, then 64MB, then 2GB, and so on - until the amount
> of memory that has been mapped is greater than the unmapped amount.
> Is that right?
>
> I am basing that assumption on that the "new_mapped_ram_size"
> would return the size of the newly mapped region (start, last_start)
> in bytes. And the 'mapped_ram_size' is the size of the previously
> mapped region plus all the other ones.
>
> The logic being that at the start of execution you start with a 2MB chunk,
> compare it to 0, and increase step_size up to 64MB. Then you start
> at real_end-2MB-step_size -> real_end-2MB-1. That gets you a 64MB chunk.
>
> Since new_mapped_ram_size (64MB) > mapped_ram_size (2MB)
> you increase step_size once more.
>
> If so, you should also explain that in the git commit description and
> in the loop logic.

Is that logic hard to understand from the code?

updated changelog:

---
Get pgt_buf early from BRK, and use it to map PMD_SIZE from the top at
first.  Then use the already-mapped pages to map more ranges below, and
keep looping until all pages get mapped.

alloc_low_page will use pages from BRK at first; after that buffer is
used up, it will use memblock to find and reserve pages for page table
usage.

Introduce min_pfn_mapped to make sure new pages are only taken from
already-mapped ranges; it gets updated as lower pages get mapped.

Also add step_size to make sure we don't try to map too big a range with
the limited mapped pages available initially, and increase step_size when
we have more mapped pages on hand.

At last we can get rid of the code that calculates and finds the early
page-table space up front.

---
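
And the loop itself, with the explanation you asked for folded into
comments (a simplified sketch; the real patch may differ in details):

	/*
	 * Map from real_end downward.  Steps start at PMD_SIZE so the
	 * page tables for the first step fit in the pgt_buf from BRK;
	 * later steps allocate page tables from the memory mapped in
	 * earlier steps, so step_size is allowed to grow.
	 */
	step_size = PMD_SIZE;
	last_start = start = real_end;
	while (last_start > ISA_END_ADDRESS) {
		if (last_start > step_size) {
			start = round_down(last_start - 1, step_size);
			if (start < ISA_END_ADDRESS)
				start = ISA_END_ADDRESS;
		} else
			start = ISA_END_ADDRESS;

		/* map [start, last_start) and see how much RAM that covered */
		new_mapped_ram_size = init_range_memory_mapping(start,
								last_start);
		last_start = start;
		/* new page tables must come from mapped memory at or above this */
		min_pfn_mapped = last_start >> PAGE_SHIFT;

		/* grow 32x once the new chunk exceeds everything mapped so far */
		if (new_mapped_ram_size > mapped_ram_size)
			step_size <<= 5;	/* 2MB -> 64MB -> 2GB -> ... */
		mapped_ram_size += new_mapped_ram_size;
	}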