Date:   Fri, 6 Oct 2023 14:33:27 +0200
From:   David Hildenbrand <david@...hat.com>
To:     "Verma, Vishal L" <vishal.l.verma@...el.com>,
        "Williams, Dan J" <dan.j.williams@...el.com>,
        "Jiang, Dave" <dave.jiang@...el.com>,
        "osalvador@...e.de" <osalvador@...e.de>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Cc:     "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "Huang, Ying" <ying.huang@...el.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "aneesh.kumar@...ux.ibm.com" <aneesh.kumar@...ux.ibm.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-cxl@...r.kernel.org" <linux-cxl@...r.kernel.org>,
        "Hocko, Michal" <mhocko@...e.com>,
        "nvdimm@...ts.linux.dev" <nvdimm@...ts.linux.dev>,
        "jmoyer@...hat.com" <jmoyer@...hat.com>,
        "Jonathan.Cameron@...wei.com" <Jonathan.Cameron@...wei.com>
Subject: Re: [PATCH v4 1/2] mm/memory_hotplug: split memmap_on_memory requests
 across memblocks

On 03.10.23 22:03, Verma, Vishal L wrote:
> On Mon, 2023-10-02 at 11:28 +0200, David Hildenbrand wrote:
>>
>>> +
>>> +static int __ref try_remove_memory(u64 start, u64 size)
>>> +{
>>> +       int rc, nid = NUMA_NO_NODE;
>>> +
>>> +       BUG_ON(check_hotplug_memory_range(start, size));
>>> +
>>> +       /*
>>> +        * All memory blocks must be offlined before removing memory.  Check
>>> +        * whether all memory blocks in question are offline and return error
>>> +        * if this is not the case.
>>> +        *
>>> +        * While at it, determine the nid. Note that if we'd have mixed nodes,
>>> +        * we'd only try to offline the last determined one -- which is good
>>> +        * enough for the cases we care about.
>>> +        */
>>> +       rc = walk_memory_blocks(start, size, &nid, check_memblock_offlined_cb);
>>> +       if (rc)
>>> +               return rc;
>>> +
>>> +       /*
>>> +        * For memmap_on_memory, the altmaps could have been added on
>>> +        * a per-memblock basis. Loop through the entire range if so,
>>> +        * and remove each memblock and its altmap.
>>> +        */
>>> +       if (mhp_memmap_on_memory()) {
>>> +               unsigned long memblock_size = memory_block_size_bytes();
>>> +               u64 cur_start;
>>> +
>>> +               for (cur_start = start; cur_start < start + size;
>>> +                    cur_start += memblock_size)
>>> +                       __try_remove_memory(nid, cur_start, memblock_size);
>>> +       } else {
>>> +               __try_remove_memory(nid, start, size);
>>> +       }
>>> +
>>>          return 0;
>>>    }
>>
>> Why is the firmware, memblock and nid handling not kept in this outer
>> function?
>>
>> We really shouldn't be doing per memory block what needs to be done per
>> memblock: remove_memory_block_devices() and arch_remove_memory().
> 
> 
> Ah yes, that makes sense, since we only do create_memory_block_devices() and
> arch_add_memory() in the per-memory-block inner loop during addition.
> 
> How should the locking work in this case though?

Sorry, I had to process a family NMI the last couple of days.

> 
> The original code holds the mem_hotplug_begin() lock over
> arch_remove_memory() and all of the nid and memblock stuff. Should I
> just hold the lock and release it in the inner loop for each memory
> block, and then also acquire and release it separately for the memblock
> and nid stuff in the outer function?

I think we have to hold it over the whole operation.
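
For illustration, a rough sketch of the structure being discussed -- not the
actual v4/v5 code, and remove_memory_block_and_altmap() is a hypothetical
per-block helper -- with the firmware, memblock and nid handling kept in the
outer function and mem_hotplug_begin()/mem_hotplug_done() held across the
whole per-memory-block loop:

static int __ref try_remove_memory(u64 start, u64 size)
{
        int rc, nid = NUMA_NO_NODE;

        BUG_ON(check_hotplug_memory_range(start, size));

        /* All memory blocks must already be offline; also determine the nid. */
        rc = walk_memory_blocks(start, size, &nid, check_memblock_offlined_cb);
        if (rc)
                return rc;

        /* Range-wide work stays in the outer function. */
        firmware_map_remove(start, start + size, "System RAM");

        /* Hold the hotplug lock over the whole removal, not per block. */
        mem_hotplug_begin();

        if (mhp_memmap_on_memory()) {
                /*
                 * Per-block work only: each memory block may carry its own
                 * altmap, so remove block devices and altmaps one block at
                 * a time.
                 */
                unsigned long memblock_size = memory_block_size_bytes();
                u64 cur_start;

                for (cur_start = start; cur_start < start + size;
                     cur_start += memblock_size)
                        /* hypothetical helper: remove this block's device and altmap */
                        remove_memory_block_and_altmap(cur_start, memblock_size);
        } else {
                remove_memory_block_and_altmap(start, size);
        }

        /* More range-wide work: memblock and nid handling, done once. */
        if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
                memblock_phys_free(start, size);
                memblock_remove(start, size);
        }
        release_mem_region_adjustable(start, size);
        try_offline_node(nid);

        mem_hotplug_done();
        return 0;
}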

I saw that you sent a v5; I'll comment there.


-- 
Cheers,

David / dhildenb
