Date:   Thu, 22 Jun 2023 12:45:40 +0530
From:   Tarun Sahu <tsahu@...ux.ibm.com>
To:     "Verma, Vishal L" <vishal.l.verma@...el.com>,
        "Schofield, Alison" <alison.schofield@...el.com>
Cc:     "Williams, Dan J" <dan.j.williams@...el.com>,
        "Jiang, Dave" <dave.jiang@...el.com>,
        "linux-cxl@...r.kernel.org" <linux-cxl@...r.kernel.org>,
        "nvdimm@...ts.linux.dev" <nvdimm@...ts.linux.dev>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "aneesh.kumar@...ux.ibm.com" <aneesh.kumar@...ux.ibm.com>,
        "jaypatel@...ux.ibm.com" <jaypatel@...ux.ibm.com>
Subject: Re: [PATCH] dax/kmem: Pass valid argument to
 memory_group_register_static


Hi Vishal,

"Verma, Vishal L" <vishal.l.verma@...el.com> writes:

> On Wed, 2023-06-21 at 11:36 +0530, Tarun Sahu wrote:
>> Hi Alison,
>> 
>> Alison Schofield <alison.schofield@...el.com> writes:
>> 
>> > On Tue, Jun 20, 2023 at 07:33:32PM +0530, Tarun Sahu wrote:
>> > > memory_group_register_static takes the maximum number of pages as its
>> > > argument, while dev_dax_kmem_probe passes total_len (in bytes).
>> > 
>> > This sounds like a fix. An explanation of the impact and a Fixes tag
>> > may be needed. Also, wondering how you found it.
>> > 
>> Yes, it is a fix; I found it during a dry code walk-through.
>> There is no real impact as such, since
>> memory_group_register_static just sets the max_pages limit, which
>> is used in auto_movable_zone_for_pfn to determine the zone.
>> 
>> The inflated value might cause the conditions below to behave differently.
>> 
>> This will always be true, so the jump to kernel_zone will always be taken:
>>         if (!auto_movable_can_online_movable(NUMA_NO_NODE, group, nr_pages))
>>                 goto kernel_zone;
>> ---
>> kernel_zone:
>>         return default_kernel_zone_for_pfn(nid, pfn, nr_pages);
>> 
>> ---
>> 
>> Here, in the code below, the range that zone_intersects compares will be
>> larger, as nr_pages will be higher (derived from the total_len passed in
>> dev_dax_kmem_probe).
>> 
>> static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn,
>>                 unsigned long nr_pages)
>> {
>>         struct pglist_data *pgdat = NODE_DATA(nid);
>>         int zid;
>> 
>>         for (zid = 0; zid < ZONE_NORMAL; zid++) {
>>                 struct zone *zone = &pgdat->node_zones[zid];
>> 
>>                 if (zone_intersects(zone, start_pfn, nr_pages))
>>                         return zone;
>>         }
>> 
>>         return &pgdat->node_zones[ZONE_NORMAL];
>> }
>> 
>> In most cases, ZONE_NORMAL will be returned. There are no
>> crash/panic issues involved here; only the decision on which zone to
>> select is affected.
>> 
>
> Hi Tarun,
>
> Good find! With a Fixes tag, and perhaps a bit more of this detail
> included in the commit message, feel free to add:
>
Thanks for reviewing, I have sent the updated version.
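
For reference, the change boils down to converting the byte length into a
page count before registering the static memory group. Below is a minimal
userspace sketch of that arithmetic, not the kernel hunk itself; the 4 KiB
page size, the 16 GiB region size and the PFN_UP()-style macro are only
illustrative assumptions:

/*
 * Illustrative sketch: shows the bytes-to-pages conversion that
 * memory_group_register_static() expects as its second argument,
 * and how large the value becomes when raw bytes are passed instead.
 */
#include <stdio.h>

#define PAGE_SIZE      4096UL                       /* assume 4 KiB pages */
#define PFN_UP(x)      (((x) + PAGE_SIZE - 1) / PAGE_SIZE)

int main(void)
{
        unsigned long total_len = 16UL << 30;       /* e.g. a 16 GiB dax region */

        unsigned long max_pages_fixed = PFN_UP(total_len); /* pages: what should be passed */
        unsigned long max_pages_buggy = total_len;          /* bytes passed as pages */

        printf("fixed max_pages: %lu\n", max_pages_fixed);
        printf("buggy max_pages: %lu (%lux too large)\n",
               max_pages_buggy, max_pages_buggy / max_pages_fixed);
        return 0;
}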

> Reviewed-by: Vishal Verma <vishal.l.verma@...el.com>
