Message-Id: <87zhmigeb3.fsf@linux.ibm.com>
Date: Sun, 16 Jun 2019 13:19:36 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
To: Dan Williams <dan.j.williams@...el.com>, akpm@...ux-foundation.org
Cc: mhocko@...e.com, Pavel Tatashin <pasha.tatashin@...een.com>,
linux-nvdimm@...ts.01.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
Jérôme Glisse <jglisse@...hat.com>,
osalvador@...e.de
Subject: Re: [PATCH v9 10/12] mm/devm_memremap_pages: Enable sub-section remap

Dan Williams <dan.j.williams@...el.com> writes:
> Teach devm_memremap_pages() about the new sub-section capabilities of
> arch_{add,remove}_memory(). Effectively, just replace all usage of
> align_start, align_end, and align_size with res->start, res->end, and
> resource_size(res). The existing sanity check will still make sure that
> the two separate remap attempts do not collide within a sub-section (2MB
> on x86).
>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Toshi Kani <toshi.kani@....com>
> Cc: Jérôme Glisse <jglisse@...hat.com>
> Cc: Logan Gunthorpe <logang@...tatee.com>
> Cc: Oscar Salvador <osalvador@...e.de>
> Cc: Pavel Tatashin <pasha.tatashin@...een.com>
> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
> ---
> kernel/memremap.c | 61 +++++++++++++++++++++--------------------------------
> 1 file changed, 24 insertions(+), 37 deletions(-)
>
> diff --git a/kernel/memremap.c b/kernel/memremap.c
> index 57980ed4e571..a0e5f6b91b04 100644
> --- a/kernel/memremap.c
> +++ b/kernel/memremap.c
> @@ -58,7 +58,7 @@ static unsigned long pfn_first(struct dev_pagemap *pgmap)
>  	struct vmem_altmap *altmap = &pgmap->altmap;
>  	unsigned long pfn;
>
> -	pfn = res->start >> PAGE_SHIFT;
> +	pfn = PHYS_PFN(res->start);
>  	if (pgmap->altmap_valid)
>  		pfn += vmem_altmap_offset(altmap);
>  	return pfn;
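[ Editor's note: PHYS_PFN() is the existing helper from include/linux/pfn.h,
  so the two forms are equivalent; the macro reads: ]

        #define PHYS_PFN(x)     ((unsigned long)((x) >> PAGE_SHIFT))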
> @@ -86,7 +86,6 @@ static void devm_memremap_pages_release(void *data)
>  	struct dev_pagemap *pgmap = data;
>  	struct device *dev = pgmap->dev;
>  	struct resource *res = &pgmap->res;
> -	resource_size_t align_start, align_size;
>  	unsigned long pfn;
>  	int nid;
> @@ -96,25 +95,21 @@ static void devm_memremap_pages_release(void *data)
>  	pgmap->cleanup(pgmap->ref);
>
>  	/* pages are dead and unused, undo the arch mapping */
> -	align_start = res->start & ~(PA_SECTION_SIZE - 1);
> -	align_size = ALIGN(res->start + resource_size(res), PA_SECTION_SIZE)
> -		- align_start;
> -
> -	nid = page_to_nid(pfn_to_page(align_start >> PAGE_SHIFT));
> +	nid = page_to_nid(pfn_to_page(PHYS_PFN(res->start)));
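[ Editor's note: a worked example of the rounding the removed lines
  performed, using hypothetical values and PA_SECTION_SIZE = 1UL << 27
  (128M, the x86-64 default): ]

        /* res->start = 0x100200000 (2M into a section), size = 64M */
        align_start = 0x100200000 & ~(0x8000000 - 1);   /* 0x100000000 */
        align_size  = ALIGN(0x100200000 + 0x4000000, 0x8000000)
                        - align_start;                  /* 0x8000000 = 128M */
        /* After the patch, res->start and resource_size(res) are used
         * unrounded. */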
Why don't we need to align things to the sub-section size any more?
-aneesh