Message-ID: <b7e620efa0de6b9f7a8ae9ce51d8dd562f384cdc.camel@intel.com>
Date: Wed, 21 Jun 2023 06:42:45 +0000
From: "Verma, Vishal L" <vishal.l.verma@...el.com>
To: "Schofield, Alison" <alison.schofield@...el.com>,
"tsahu@...ux.ibm.com" <tsahu@...ux.ibm.com>
CC: "Williams, Dan J" <dan.j.williams@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
"linux-cxl@...r.kernel.org" <linux-cxl@...r.kernel.org>,
"nvdimm@...ts.linux.dev" <nvdimm@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"aneesh.kumar@...ux.ibm.com" <aneesh.kumar@...ux.ibm.com>,
"jaypatel@...ux.ibm.com" <jaypatel@...ux.ibm.com>
Subject: Re: [PATCH] dax/kmem: Pass valid argument to
memory_group_register_static
On Wed, 2023-06-21 at 11:36 +0530, Tarun Sahu wrote:
> Hi Alison,
>
> Alison Schofield <alison.schofield@...el.com> writes:
>
> > On Tue, Jun 20, 2023 at 07:33:32PM +0530, Tarun Sahu wrote:
> > > memory_group_register_static takes maximum number of pages as the argument
> > > while dev_dax_kmem_probe passes total_len (in bytes) as the argument.
> >
> > This sounds like a fix. An explanation of the impact and a fixes tag
> > may be needed. Also, wondering how you found it.
> >
> Yes, it is a fix; I found it during a dry code walk-through.
> There is no real impact as such, since
> memory_group_register_static just sets the max_pages limit, which
> is used in auto_movable_zone_for_pfn to determine the zone.
>
> Passing bytes instead of pages might make the conditions below behave
> differently.
>
> This condition will always be true, so the jump to kernel_zone will
> always happen:
>
> 	if (!auto_movable_can_online_movable(NUMA_NO_NODE, group, nr_pages))
> 		goto kernel_zone;
> ---
> kernel_zone:
> 	return default_kernel_zone_for_pfn(nid, pfn, nr_pages);
>
> ---
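>
> To put rough numbers on it (a hypothetical 16 GiB dax device with
> 4 KiB pages, purely for illustration):
>
>     total_len (bytes)           = 16 GiB                  = 17179869184
>     intended max_pages          = total_len >> PAGE_SHIFT =     4194304
>     max_pages actually recorded = total_len               = 17179869184
>
> i.e. the static group's max_pages ends up a factor of PAGE_SIZE (4096x)
> too large, which is why the condition above always takes the
> kernel_zone path.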
>
> Here, in the code below, the range that zone_intersects compares will
> be larger, because nr_pages is higher (derived from the total_len, in
> bytes, passed in dev_dax_kmem_probe).
>
> static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn,
> 						unsigned long nr_pages)
> {
> 	struct pglist_data *pgdat = NODE_DATA(nid);
> 	int zid;
>
> 	for (zid = 0; zid < ZONE_NORMAL; zid++) {
> 		struct zone *zone = &pgdat->node_zones[zid];
>
> 		if (zone_intersects(zone, start_pfn, nr_pages))
> 			return zone;
> 	}
>
> 	return &pgdat->node_zones[ZONE_NORMAL];
> }
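>
> For reference, zone_intersects is roughly the following (quoting
> include/linux/mmzone.h from memory, so the exact form may differ):
>
> static inline bool zone_intersects(struct zone *zone,
> 		unsigned long start_pfn, unsigned long nr_pages)
> {
> 	if (zone_is_empty(zone))
> 		return false;
> 	if (start_pfn >= zone_end_pfn(zone) ||
> 	    start_pfn + nr_pages <= zone->zone_start_pfn)
> 		return false;
>
> 	return true;
> }
>
> A larger nr_pages widens the [start_pfn, start_pfn + nr_pages) window
> being compared, so it is more likely to overlap one of the lower zones
> and return early.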
>
> In most cases ZONE_NORMAL will be returned. There are no crash/panic
> issues involved here; only the decision of which zone to select is
> affected.
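>
> For reference, the fix boils down to converting total_len to a page
> count before registering the static group in dev_dax_kmem_probe,
> something along these lines (modulo the exact helper and surrounding
> code):
>
> -	rc = memory_group_register_static(numa_node, total_len);
> +	rc = memory_group_register_static(numa_node, PFN_UP(total_len));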
>
Hi Tarun,
Good find! With a Fixes tag, and perhaps a bit more of this detail
included in the commit message, feel free to add:
Reviewed-by: Vishal Verma <vishal.l.verma@...el.com>