Message-ID: <20200910135404.GJ28354@dhcp22.suse.cz>
Date:   Thu, 10 Sep 2020 15:54:04 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Oscar Salvador <osalvador@...e.de>,
        Laurent Dufour <ldufour@...ux.ibm.com>,
        akpm@...ux-foundation.org, rafael@...nel.org,
        nathanl@...ux.ibm.com, cheloha@...ux.ibm.com,
        stable@...r.kernel.org,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: don't rely on system state to detect hot-plug
 operations

On Thu 10-09-20 14:49:28, David Hildenbrand wrote:
> On 10.09.20 14:47, Michal Hocko wrote:
> > On Thu 10-09-20 14:03:48, Oscar Salvador wrote:
> >> On Thu, Sep 10, 2020 at 01:35:32PM +0200, Laurent Dufour wrote:
> >>  
> >>> That points has been raised by David, quoting him here:
> >>>
> >>>> IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
> >>>>
> >>>> Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
> >>>
> >>> Oscar said that he needs to investigate that further.
> >>
> >> I think my reply got lost.
> >>
> >> We can see acpi hotplugs during SYSTEM_SCHEDULING:
> >>
> >> $QEMU -enable-kvm -machine pc -smp 4,sockets=4,cores=1,threads=1 -cpu host -monitor pty \
> >>         -m size=$MEM,slots=255,maxmem=4294967296k  \
> >>         -numa node,nodeid=0,cpus=0-3,mem=512 -numa node,nodeid=1,mem=512 \
> >>         -object memory-backend-ram,id=memdimm0,size=134217728 -device pc-dimm,node=0,memdev=memdimm0,id=dimm0,slot=0 \
> >>         -object memory-backend-ram,id=memdimm1,size=134217728 -device pc-dimm,node=0,memdev=memdimm1,id=dimm1,slot=1 \
> >>         -object memory-backend-ram,id=memdimm2,size=134217728 -device pc-dimm,node=0,memdev=memdimm2,id=dimm2,slot=2 \
> >>         -object memory-backend-ram,id=memdimm3,size=134217728 -device pc-dimm,node=0,memdev=memdimm3,id=dimm3,slot=3 \
> >>         -object memory-backend-ram,id=memdimm4,size=134217728 -device pc-dimm,node=1,memdev=memdimm4,id=dimm4,slot=4 \
> >>         -object memory-backend-ram,id=memdimm5,size=134217728 -device pc-dimm,node=1,memdev=memdimm5,id=dimm5,slot=5 \
> >>         -object memory-backend-ram,id=memdimm6,size=134217728 -device pc-dimm,node=1,memdev=memdimm6,id=dimm6,slot=6 \
> >>
> >> kernel: [    0.753643] __add_memory: nid: 0 start: 0100000000 - 0108000000 (size: 134217728)
> >> kernel: [    0.756950] register_mem_sect_under_node: system_state= 1
> >>
> >> kernel: [    0.760811]  register_mem_sect_under_node+0x4f/0x230
> >> kernel: [    0.760811]  walk_memory_blocks+0x80/0xc0
> >> kernel: [    0.760811]  link_mem_sections+0x32/0x40
> >> kernel: [    0.760811]  add_memory_resource+0x148/0x250
> >> kernel: [    0.760811]  __add_memory+0x5b/0x90
> >> kernel: [    0.760811]  acpi_memory_device_add+0x130/0x300
> >> kernel: [    0.760811]  acpi_bus_attach+0x13c/0x1c0
> >> kernel: [    0.760811]  acpi_bus_attach+0x60/0x1c0
> >> kernel: [    0.760811]  acpi_bus_scan+0x33/0x70
> >> kernel: [    0.760811]  acpi_scan_init+0xea/0x21b
> >> kernel: [    0.760811]  acpi_init+0x2f1/0x33c
> >> kernel: [    0.760811]  do_one_initcall+0x46/0x1f4
> > 
> > Is there any actual usecase for a configuration like this? What is the
> > point to statically define additional memory like this when the same can
> > be achieved on the same command line?
> 
> You can online it movable right away to unplug later.

You can use movable_node for that. IIRC this would mark all hotpluggable
memory as movable.
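For context, onlining a hotplugged DIMM straight into ZONE_MOVABLE goes
through the standard memory-block sysfs interface. A minimal sketch; the
start address is taken from the trace above, while the 128 MiB block size
is an assumption (a typical x86-64 value, the real one is exported in
block_size_bytes):

```shell
# Each memory block appears as /sys/devices/system/memory/memoryNN,
# where NN = physical start address / memory block size.
# The kernel exports the block size (in hex) at:
#   /sys/devices/system/memory/block_size_bytes
block_size=$((0x8000000))      # 128 MiB -- assumed, check block_size_bytes

# The first DIMM in the trace above starts at 0x100000000:
start=$((0x100000000))
blk=$((start / block_size))
echo "memory$blk"              # -> memory32

# Online the block into ZONE_MOVABLE so it can be unplugged later
# (needs root; left commented out so the sketch has no side effects):
# echo online_movable > /sys/devices/system/memory/memory$blk/state
```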

> Also, under QEMU, just do a reboot with hotplugged memory and you're in
> the very same situation.

OK, I didn't know that. I thought the memory would be presented as
normal memory after a reboot. Thanks for the clarification.
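For reference, the reboot scenario David describes can be reproduced from
the QEMU monitor: hot-add a DIMM at runtime, then reset the guest, and the
DIMM is handed to the fresh kernel via ACPI during early boot. A sketch in
HMP syntax (the ids are illustrative):

```
(qemu) object_add memory-backend-ram,id=hotmem0,size=128M
(qemu) device_add pc-dimm,id=hotdimm0,memdev=hotmem0,node=0
(qemu) system_reset
```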

-- 
Michal Hocko
SUSE Labs
