Message-ID: <ZLjnDzTdXPlM3KY6@dhcp22.suse.cz>
Date:   Thu, 20 Jul 2023 09:49:35 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Ross Zwisler <zwisler@...gle.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Mike Rapoport <rppt@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Mel Gorman <mgorman@...e.de>, Vlastimil Babka <vbabka@...e.cz>,
        David Hildenbrand <david@...hat.com>,
        Jiri Bohac <jbohac@...e.cz>
Subject: Re: collision between ZONE_MOVABLE and memblock allocations

[CC Jiri Bohac]

On Wed 19-07-23 16:48:21, Ross Zwisler wrote:
> On Wed, Jul 19, 2023 at 08:14:48AM +0200, Michal Hocko wrote:
> > On Tue 18-07-23 16:01:06, Ross Zwisler wrote:
> > [...]
> > > I do think that we need to fix this collision between ZONE_MOVABLE and memmap
> > > allocations, because this issue essentially makes the movablecore= kernel
> > > command line parameter useless in many cases, as the ZONE_MOVABLE region it
> > > creates will often actually be unmovable.
> > 
> > movablecore is kind of a hack and I would be more inclined to get rid of
> > it rather than build more into it. Could you be more specific about your
> > use case?
> 
> The problem that I'm trying to solve is that I'd like to be able to get kernel
> core dumps off machines (chromebooks) so that we can debug crashes.  Because
> the memory used by the crash kernel ("crashkernel=" kernel command line
> option) is consumed the entire time the machine is booted, there is a strong
> motivation to keep the crash kernel as small and as simple as possible.  To
> this end I'm trying to get by without SSD drivers, without having to worry
> about encryption on the SSDs, etc.

This is something Jiri is also looking into.
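
For reference, the reservation described above is requested with the
crashkernel= boot parameter; the size below is purely illustrative and
depends on what the crash kernel needs:

  crashkernel=128M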
 
> So, the rough plan right now is:
> 
> 1) During boot set aside some memory that won't contain kernel allocations.
> I'm trying to do this now with ZONE_MOVABLE, but I'm open to better ways.
> 
> We set aside memory for a crash kernel and arm it so that the ZONE_MOVABLE
> region (or whatever non-kernel region) will be presented as PMEM in the crash
> kernel.  This is done with the memmap=nn[KMG]!ss[KMG] kernel command line
> parameter passed to the crash kernel.
> 
> So, in my sample 4G VM system, I see:
> 
>   # lsmem --split ZONES --output-all
>   RANGE                                  SIZE  STATE REMOVABLE BLOCK NODE   ZONES
>   0x0000000000000000-0x0000000007ffffff  128M online       yes     0    0    None
>   0x0000000008000000-0x00000000bfffffff  2.9G online       yes  1-23    0   DMA32
>   0x0000000100000000-0x000000012fffffff  768M online       yes 32-37    0  Normal
>   0x0000000130000000-0x000000013fffffff  256M online       yes 38-39    0 Movable
>   
>   Memory block size:       128M
>   Total online memory:       4G
>   Total offline memory:      0B
> 
> so I'll pass "memmap=256M!0x130000000" to the crash kernel.
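
As a sketch, assuming bash and a util-linux lsmem that supports
--noheadings, that memmap= string can be derived from the lsmem output
above:

  # Derive memmap= from the Movable range reported by lsmem.
  # Assumes a single contiguous Movable range.
  read -r START END < <(lsmem --split ZONES --output RANGE,ZONES --noheadings |
      awk '/Movable/ { split($1, a, "-"); print a[1], a[2]; exit }')
  SIZE_MB=$(( (END - START + 1) / 1024 / 1024 ))
  echo "memmap=${SIZE_MB}M!${START}"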
> 
> 2) When we hit a kernel crash, we know (hope?) that the PMEM region we've set
> aside only contains user data, which we don't want to store anyway.  We make a
> filesystem in there, and create a kernel crash dump using 'makedumpfile':
> 
>   mkfs.ext4 /dev/pmem0
>   mount /dev/pmem0 /mnt
>   makedumpfile -c -d 31 /proc/vmcore /mnt/kdump
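
(In that makedumpfile invocation, -c compresses the dump data page by
page and -d 31 is the dump level that excludes zero, cache,
cache-private, user and free pages, which is what keeps the dump small.)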
> 
> We then set up the next full kernel boot to also have this same PMEM region,
> using the same memmap kernel parameter.  We reboot back into a full kernel.
> 
> 3) The next full kernel will be a normal boot with a full networking stack,
> SSD drivers, disk encryption, etc.  We mount up our PMEM filesystem, pull out
> the kdump and either store it somewhere persistent or upload it somewhere.  We
> can then unmount the PMEM and reconfigure it back to system ram so that the
> live system isn't missing memory.
> 
>   ndctl create-namespace --reconfig=namespace0.0 -m devdax -f
>   daxctl reconfigure-device --mode=system-ram dax0.0
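
(The ndctl call reconfigures the namespace into device-DAX mode,
dropping the block device and the filesystem that was just used; the
daxctl call then hotplugs the DAX device's memory back as ordinary
system RAM.)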
> 
> This is the flow I'm trying to support, and have mostly working in a VM,
> except up until now makedumpfile would crash because all the memblock
> structures it needed were in the PMEM area that I had just wiped out by making
> a new filesystem. :)
> 
> Do you see any blockers that would make this infeasible?
> 
> For the non-kernel memory, is the ZONE_MOVABLE path that I'm currently
> pursuing the best option, or would we be better off with your suggestion
> elsewhere in this thread:

The main problem I would see with this approach is that the small
Movable zone you set aside would be easily consumed and reclaimed. That
could generate some unexpected performance artifacts; we used to see
those in the past with small zones or with large differences in zone
sizes. But functionally this should work, or at least I do not see any
fundamental problems.

Jiri is looking at this from a slightly different angle. Very broadly,
he would like to have a dedicated CMA pool and reuse it as the crash
kernel's memory (dropping anything sitting there) when crashing.
Movable (__GFP_MOVABLE) allocations can be served from CMA pools.
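
For illustration, such a dedicated area can be reserved at boot with the
cma= kernel parameter (the size here is made up):

  cma=256M

While the system runs, movable allocations can be served from the area;
in the scheme above it would then be handed over wholesale to the crash
kernel at crash time.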

> > If the specific placement of the movable memory is not important and the
> > only things that matter are the size and NUMA locality, then an easier to
> > maintain solution would be to simply offline enough memory blocks very
> > early in userspace bring-up and online them back as movable. If offlining
> > fails, just try another memory block. This doesn't require any kernel
> > code change.
> 
> If this 2nd way is preferred, can you point me to how I can offline the memory
> blocks & then get them back later in boot?

/bin/echo offline > /sys/devices/system/memory/memory$NUM/state && \
echo online_movable > /sys/devices/system/memory/memory$NUM/state

more in Documentation/admin-guide/mm/memory-hotplug.rst
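
A rough sketch of that retry loop (the number of blocks to convert is
illustrative; offlining fails if a block still holds unmovable pages, in
which case we simply try the next one):

  want=2 got=0    # e.g. 2 x 128M blocks
  for b in /sys/devices/system/memory/memory*; do
          [ "$(cat "$b/state")" = online ] || continue
          if echo offline > "$b/state" 2>/dev/null; then
                  echo online_movable > "$b/state"
                  got=$((got + 1))
                  [ "$got" -ge "$want" ] && break
          fi
  done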

-- 
Michal Hocko
SUSE Labs
