Date:   Fri, 11 Jun 2021 11:20:18 +0200
From:   Rasmus Villemoes <>
To:     Oliver Sang <>
Cc:     Linus Torvalds <>,
        Luis Chamberlain <>,
        Jessica Yu <>, Borislav Petkov <>,
        Jonathan Corbet <>,
        Greg Kroah-Hartman <>,
        Nick Desaulniers <>,
        Takashi Iwai <>,
        Andrew Morton <>,
        LKML <>
Subject: Re: [init/initramfs.c] e7cb072eb9: invoked_oom-killer:gfp_mask=0x

On 11/06/2021 10.48, Oliver Sang wrote:
> hi Rasmus,
> On Tue, Jun 08, 2021 at 09:42:58AM +0200, Rasmus Villemoes wrote:
>> On 07/06/2021 16.44, kernel test robot wrote:

>> Also, I don't have 16G to give to a virtual machine. I tried running the
>> bzImage with that modules.cgz under qemu with some naive parameters just
>> to get some output [1], but other than failing because there's no rootfs
>> to mount (as expected), I only managed to make it fail when providing
>> too little memory (the .cgz is around 70M, decompressed about 200M -
>> giving '-m 1G' to qemu works fine). You mention the vmalloc= argument,
>> but I can't make the decompression fail when passing either vmalloc=128M
>> or vmalloc=512M or no vmalloc= at all.
> sorry about this. we also tried to follow exactly the above steps to test on
> some local machine (8G memory), but cannot reproduce. we are analyzing
> what's the difference in our automation run in the test cluster, which reproduced
> the issue consistently. will update you when we have findings.

OK. It's really odd that providing the VM with _more_ memory makes it
fail (other than the obvious failure in the other direction when there's
simply not enough memory for the unpacked initramfs itself). But
unfortunately that also sounds like I won't be able to reproduce with
the HW I have.

>> As an extra data point, what happens if you add initramfs_async=0 to the
>> command line?
> yes, we tested this before sending out the report. the issue is gone
> if initramfs_async=0 is added.

Hm. Sounds like some initcall after rootfs_initcall time must
allocate/hog a lot of memory, perhaps with some heuristic depending on
how much is available.

Can you try with initcall_debug=1? I think that should produce a lot of
output, but hopefully it will make it possible to see which initcalls
ran just prior to (or while) the initramfs unpacking hitting ENOMEM.
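Once you have the initcall_debug output captured from the serial console, something like the following could narrow things down. This is just a sketch: the log lines below are invented for illustration, and the real log would of course come from the failing boot. The idea is that initcall_debug prints a "calling" line when an initcall starts and an "initcall ... returned" line when it finishes, so any symbol with a "calling" line but no matching "returned" line was still in flight when the OOM hit.

```shell
# Hypothetical initcall_debug excerpt (these lines are made up for the sketch);
# in practice, save the serial console output of the failing boot to boot.log.
cat > boot.log <<'EOF'
[    0.500000] calling  populate_rootfs+0x0/0x110 @ 1
[    0.600000] calling  some_driver_init+0x0/0x40 @ 1
[    0.700000] initcall some_driver_init+0x0/0x40 returned 0 after 97000 usecs
EOF

# Symbols that started...
grep -oP 'calling\s+\K\S+' boot.log | sort > called
# ...and symbols that finished.
grep -oP 'initcall\s+\K\S+' boot.log | sort > returned

# "called but never returned" = initcalls still running when the OOM hit.
comm -23 called returned
```

With the sample log above this prints only populate_rootfs, i.e. the unpacking was still in progress; on the real machine the interesting part is which *other* initcalls appear shortly before or alongside it.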
