Message-ID: <2254cfe1-5cd3-eedc-1f24-8e011dcf3575@microsoft.com>
Date: Fri, 21 Sep 2018 19:50:18 +0000
From: Pasha Tatashin <Pavel.Tatashin@...rosoft.com>
To: Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>
CC: "mhocko@...e.com" <mhocko@...e.com>,
"dave.jiang@...el.com" <dave.jiang@...el.com>,
"mingo@...nel.org" <mingo@...nel.org>,
"dave.hansen@...el.com" <dave.hansen@...el.com>,
"jglisse@...hat.com" <jglisse@...hat.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"logang@...tatee.com" <logang@...tatee.com>,
"dan.j.williams@...el.com" <dan.j.williams@...el.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH v4 3/5] mm: Defer ZONE_DEVICE page initialization to the
point where we init pgmap
On 9/20/18 6:29 PM, Alexander Duyck wrote:
> The ZONE_DEVICE pages were being initialized in two locations. One was with
> the memory_hotplug lock held and another was outside of that lock. The
> problem with this is that it was nearly doubling the memory initialization
> time. Instead of doing this twice, once while holding a global lock and
> once without, I am opting to defer the initialization to the one outside of
> the lock. This allows us to avoid serializing the overhead for memory init
> and we can instead focus on per-node init times.
>
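For reference, if I read the series right, the devm_memremap_pages()
call site ends up looking roughly like this after the patch (paraphrased
from memory rather than copied from the diff, so names and arguments are
approximate):

	mem_hotplug_begin();
	error = arch_add_memory(nid, align_start, align_size, altmap, false);
	if (!error)
		move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
				       align_start >> PAGE_SHIFT,
				       align_size >> PAGE_SHIFT, altmap);
	mem_hotplug_done();

	/* struct page init now happens here, outside the hotplug lock */
	memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
				align_start >> PAGE_SHIFT,
				align_size >> PAGE_SHIFT, pgmap);

So the expensive per-page loop moves out of the globally serialized
section, and only the zone/section bookkeeping stays under the lock.
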
> One issue I encountered is that devm_memremap_pages and
> hmm_devmem_pages_create were initializing only the pgmap field in the
> same way. One wasn't initializing hmm_data at all, and the other was
> initializing it to a poison value. Since hmm_data is exposed to the
> driver in the HMM case, and will be exposed to unknown third party
> drivers, I am opting for a third option and just initializing it to 0.
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
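
For context on the hmm_data point above: pgmap and hmm_data live in the
ZONE_DEVICE arm of the big union in struct page, roughly like this
(paraphrasing include/linux/mm_types.h from memory, field names
approximate):

	struct {	/* ZONE_DEVICE pages */
		/* back pointer to the hosting device page map */
		struct dev_pagemap *pgmap;
		unsigned long hmm_data;
		unsigned long _zd_pad_1;	/* uses mapping */
	};

Since these overlay ->lru and friends, a poison value left in hmm_data
would be visible to any HMM driver that looks at it, which is why it has
to end up zeroed one way or another.
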
> +void __ref memmap_init_zone_device(struct zone *zone,
> + unsigned long start_pfn,
> + unsigned long size,
> + struct dev_pagemap *pgmap)
> +{
> + unsigned long pfn, end_pfn = start_pfn + size;
> + struct pglist_data *pgdat = zone->zone_pgdat;
> + unsigned long zone_idx = zone_idx(zone);
> + unsigned long start = jiffies;
> + int nid = pgdat->node_id;
> +
> + if (WARN_ON_ONCE(!pgmap || !is_dev_zone(zone)))
> + return;
> +
> + /*
> + * The call to memmap_init_zone should have already taken care
> + * of the pages reserved for the memmap, so we can just jump to
> + * the end of that region and start processing the device pages.
> + */
> + if (pgmap->altmap_valid) {
> + struct vmem_altmap *altmap = &pgmap->altmap;
> +
> + start_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
> + size = end_pfn - start_pfn;
> + }
> +
> + for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> + struct page *page = pfn_to_page(pfn);
> +
> + __init_single_page(page, pfn, zone_idx, nid);
> +
> + /*
> + * Mark page reserved as it will need to wait for onlining
> + * phase for it to be fully associated with a zone.
> + *
> + * We can use the non-atomic __set_bit operation for setting
> + * the flag as we are still initializing the pages.
> + */
> + __SetPageReserved(page);
> +
> + /*
> + * ZONE_DEVICE pages union ->lru with a ->pgmap back
> + * pointer and hmm_data. It is a bug if a ZONE_DEVICE
> + * page is ever freed or placed on a driver-private list.
> + */
> + page->pgmap = pgmap;
> + page->hmm_data = 0;

__init_single_page() calls mm_zero_struct_page(), which already zeroes
the whole struct page, so there is no need to do another store here.
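
For anyone reading along, __init_single_page() starts out like this
(paraphrasing mm/page_alloc.c from memory, so details may be slightly
off):

	static void __meminit __init_single_page(struct page *page,
						 unsigned long pfn,
						 unsigned long zone, int nid)
	{
		mm_zero_struct_page(page);	/* zeroes the entire struct page */
		set_page_links(page, zone, nid, pfn);
		init_page_count(page);
		page_mapcount_reset(page);
		page_cpupid_reset_last(page);

		INIT_LIST_HEAD(&page->lru);
	}

so by the time the ZONE_DEVICE loop assigns page->pgmap, hmm_data is
already 0 and the extra store is redundant.
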
Looks good otherwise.
Reviewed-by: Pavel Tatashin <pavel.tatashin@...rosoft.com>