Date:   Wed, 8 Nov 2017 22:33:51 +0900
From:   Jaewon Kim <>
To:     Joonsoo Kim <>
Cc:     Jaewon Kim <>,
        Andrew Morton <>
Subject: Re: [PATCH] mm: page_ext: allocate page extension though first PFN is invalid

2017-11-08 16:52 GMT+09:00 Joonsoo Kim <>:
> On Tue, Nov 07, 2017 at 06:44:47PM +0900, Jaewon Kim wrote:
>> online_page_ext() and page_ext_init() allocate page_ext for each section,
>> but they do not allocate it if the first PFN is !pfn_present(pfn) or
>> !pfn_valid(pfn).
>> Even though the first page is not valid, page_ext could still be useful
>> for other pages in the section. But checking all PFNs in a section may be
>> a time-consuming job. Let's check every (PAGES_PER_SECTION / 16)th PFN,
>> and prepare page_ext if any of those PFNs is present or valid.
> I guess there are not many sections like this. And, since this is for
> debugging, completeness would be important. It's better to check
> all PFNs in the section.
Thank you for your comment.

AFAIK the physical memory layout depends on the HW SoC.
Sometimes a SoC leaves an address-region hole of a few GB between one DRAM
region and another, e.g. 2GB below the 4GB boundary and 2GB above it, with
a hole in between. If the SoC design puts such a big hole in the actual
mapping, I thought too much time would be spent just checking all the PFNs.

Anyway, if we decide to check all PFNs, I can change the patch to use
t_pfn++ as below.
Please comment again.

while (t_pfn < ALIGN(pfn + 1, PAGES_PER_SECTION)) {
        if (pfn_valid(t_pfn)) {
                valid = true;
                break;
        }
-       t_pfn = ALIGN(t_pfn + 1, PAGES_PER_SECTION >> 4);
+       t_pfn++;
}

Thank you
Jaewon Kim

> Thanks.
