Date:   Fri, 9 Nov 2018 21:07:50 -0500
From:   Pavel Tatashin <pasha.tatashin@...een.com>
To:     Alexander Duyck <alexander.h.duyck@...ux.intel.com>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        sparclinux@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-nvdimm@...ts.01.org, davem@...emloft.net,
        pavel.tatashin@...rosoft.com, mhocko@...e.com, mingo@...nel.org,
        kirill.shutemov@...ux.intel.com, dan.j.williams@...el.com,
        dave.jiang@...el.com, rppt@...ux.vnet.ibm.com, willy@...radead.org,
        vbabka@...e.cz, khalid.aziz@...cle.com, ldufour@...ux.vnet.ibm.com,
        mgorman@...hsingularity.net, yi.z.zhang@...ux.intel.com
Subject: Re: [mm PATCH v5 5/7] mm: Move hot-plug specific memory init into
 separate functions and optimize

On 18-11-05 13:19:50, Alexander Duyck wrote:
> This patch is going through and combining the bits in memmap_init_zone and
> memmap_init_zone_device that are related to hotplug into a single function
> called __memmap_init_hotplug.
> 
> I also took the opportunity to integrate __init_single_page's functionality
> into this function. In doing so I can get rid of some of the redundancy
> such as the LRU pointers versus the pgmap.

Please don't do this: __init_single_page() is a hard function to optimize,
so do not copy its code. Instead, could you split __init_single_page()
into two parts, something like this:

static inline void init_single_page_nolru(struct page *page, unsigned long pfn,
                                          unsigned long zone, int nid)
{
        mm_zero_struct_page(page);
        set_page_links(page, zone, nid, pfn);
        init_page_count(page);
        page_mapcount_reset(page);
        page_cpupid_reset_last(page);
#ifdef WANT_PAGE_VIRTUAL
        /* The shift won't overflow because ZONE_NORMAL is below 4G. */
        if (!is_highmem_idx(zone))
                set_page_address(page, __va(pfn << PAGE_SHIFT));
#endif
}


static void __meminit init_single_page(struct page *page, unsigned long pfn,
                                       unsigned long zone, int nid)
{
        init_single_page_nolru(page, pfn, zone, nid);
        INIT_LIST_HEAD(&page->lru);
}

Then call init_single_page_nolru() from __init_pageblock(); a rough sketch
is below. Also, please remove the WANT_PAGE_VIRTUAL optimization; I do not
think it is worth it.
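
For illustration only (I am guessing at the __init_pageblock() signature
from your series, and assuming pgmap still overlays lru in the struct page
union):

static void __meminit __init_pageblock(unsigned long start_pfn,
                                       unsigned long nr_pages,
                                       unsigned long zone, int nid,
                                       struct dev_pagemap *pgmap)
{
        unsigned long pfn;
        struct page *page = pfn_to_page(start_pfn);

        for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++, page++) {
                /* Common init for all pages, without touching page->lru. */
                init_single_page_nolru(page, pfn, zone, nid);
                if (pgmap)
                        /* ZONE_DEVICE pages keep pgmap where lru would be. */
                        page->pgmap = pgmap;
                else
                        INIT_LIST_HEAD(&page->lru);
        }
}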

The rest looks very good; please make the above changes.

Reviewed-by: Pavel Tatashin <pasha.tatashin@...een.com>

> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
> ---
