Message-ID: <CAHp75VfPBD-9_ddGMNX2jVCynWLS61GLZVva7sM5aj2Sp==vzQ@mail.gmail.com>
Date:   Mon, 11 Feb 2019 13:40:29 +0200
From:   Andy Shevchenko <andy.shevchenko@...il.com>
To:     David Engraf <david.engraf@...go.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Dominik Brodowski <linux@...inikbrodowski.net>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Philippe Ombredanne <pombredanne@...b.com>,
        Arnd Bergmann <arnd@...db.de>,
        Luc Van Oostenryck <luc.vanoostenryck@...il.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RESEND] initramfs: cleanup incomplete rootfs

On Mon, Feb 11, 2019 at 10:49 AM David Engraf <david.engraf@...go.com> wrote:
> On 11.02.19 at 08:56, David Engraf wrote:
> > On 09.02.19 at 11:35, Andy Shevchenko wrote:
> >> On Sat, Feb 9, 2019 at 12:08 AM Andrew Morton
> >> <akpm@...ux-foundation.org> wrote:
> >>> On Fri, 8 Feb 2019 21:45:21 +0200 Andy Shevchenko
> >>> <andy.shevchenko@...il.com> wrote:
> >>>> On Tue, Oct 30, 2018 at 5:22 PM David Engraf
> >>>> <david.engraf@...go.com> wrote:
> >>>>>
> >>>>> Unpacking an external initrd may fail, e.g. due to not enough
> >>>>> memory. This leads to an incomplete rootfs because some files might
> >>>>> have been extracted already. Fix this by cleaning the rootfs so the
> >>>>> kernel does not use an incomplete rootfs.
> >>>>
> >>>> This breaks my setup, where U-Boot declares a larger initramfs size
> >>>> than the actual image needs. That gives me a bit of flexibility to
> >>>> grow or shrink the compressed initramfs image without touching the
> >>>> bootloader. The proper solution is to do the cleanup only if we are
> >>>> sure we didn't get enough memory; otherwise I can't consider the
> >>>> error fatal enough to justify cleaning up the rootfs.
> >>>
> >>> OK, thanks.  Maybe David can suggest a fix - I'll queue up a revert
> >>> meanwhile.
> >>>
> >>> I don't really understand the failure.  Why does an oversized initramfs
> >>> cause unpack_to_rootfs() to fail?
> >>
> >> In my case I got "Junk in compressed archive". I don't know (I would
> >> check if needed) which exact condition triggered it, since there are
> >> three places with this message. The file itself is smaller than the
> >> size passed through bootparam. So, when decompression has finished
> >> (successfully!) we still have garbage in memory which is not related
> >> to the archive. The message per se is okay to have, but I consider
> >> this non-fatal.
> >
> > I can reproduce this special case. The unpacking code decompresses the
> > whole declared size instead of stopping at the actual archive size. I
> > will have a look at how to get the real archive size.
>
> I did some checks and manually increased the initramfs size but I always
> get the following kernel panic:

We need to be on the same page here.
There are two sizes for the compressed initramfs archive:
 1) the actual file size;
 2) the size declared by the boot loader and passed via boot parameters.

In my case 2) is bigger than the actual file size.
The kernel decompresses the initramfs, prints an error that there is
junk (which is understandable), and then continues to run init, etc.
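
To put some made-up numbers on it (illustration only, nothing
kernel-specific, the variable names are mine):

#include <stdio.h>

int main(void)
{
        /* Made-up sizes, just to illustrate the mismatch: the real image
         * on disk vs. the size the boot loader reports via bootparams. */
        unsigned long actual_size   = 7 * 1024 * 1024;  /* real initramfs file */
        unsigned long declared_size = 8 * 1024 * 1024;  /* size U-Boot passes  */

        /* The unpack code is handed declared_size bytes, so after the last
         * real archive it still sees this many bytes of leftover RAM and
         * tries to parse them as archive data: */
        printf("trailing bytes treated as archive data: %lu\n",
               declared_size - actual_size);
        return 0;
}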

> Kernel panic - not syncing: junk in compressed archive
> ---[ end Kernel panic - not syncing: junk in compressed archive
>
> The panic was not introduced by my patch. Could you please check if you
> get a panic as well or is your rootfs just empty?

With your patch applied I get the rootfs cleaned up, followed by the
kernel being unable to find a working init. So I get a panic as well,
just for a different reason.

> I also had a look at the decompression in unpack_to_rootfs(). This
> function already makes sure it unpacks only the real size of the archive.
> But it is called in a loop, so it unpacks the first archive and then
> tries to unpack the rest of the data, which is garbage in my case.

> Is it intended to allow extracting multiple archives as rootfs?

Yes. You can chain up to 64 archives IIRC.
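
To see how the chaining and the oversized declared length interact, here
is a toy model of the consumption loop (my own userspace simplification,
not the initramfs.c code; the "archive" format and all names are made up):

#include <stdio.h>
#include <string.h>

/* Toy consumer, loosely modelled on the *idea* of the unpack loop: an
 * "archive" here is just an "A<payload>!" record, NUL bytes are padding
 * between archives, anything else counts as junk. 'declared_len' is what
 * the boot loader claims, which may be larger than the real image. */
static const char *toy_unpack(const char *buf, size_t declared_len,
                              unsigned int *extracted)
{
        size_t off = 0;

        *extracted = 0;
        while (off < declared_len) {
                if (buf[off] == 'A') {
                        const char *end = memchr(buf + off, '!',
                                                 declared_len - off);
                        if (!end)
                                return "broken archive";
                        printf("archive %u: %zu bytes\n", ++*extracted,
                               (size_t)(end - (buf + off)) + 1);
                        off = (size_t)(end - buf) + 1;  /* next archive */
                } else if (buf[off] == '\0') {
                        off++;                          /* skip padding */
                } else {
                        /* An oversized declared_len lands us here after
                         * the last real archive. */
                        return "junk in compressed archive";
                }
        }
        return NULL;                                    /* clean end */
}

int main(void)
{
        unsigned int n;
        char mem[32];

        /* Two archives chained back to back (10 bytes), followed by 22
         * bytes of leftover "RAM" because the declared size is larger
         * than the real image. */
        memset(mem, 'X', sizeof(mem));
        memcpy(mem, "Aone!Atwo!", 10);

        const char *err = toy_unpack(mem, sizeof(mem), &n);
        printf("extracted %u archive(s), result: %s\n", n, err ? err : "ok");
        return 0;
}

With an exactly sized image the loop reaches the end of the buffer and
returns cleanly after the second archive; with the oversized length it
trips over the leftover bytes and reports junk even though everything
useful was already extracted.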

> If not,
> we could remove the loop and unpack only the first archive. Otherwise we
> could ignore errors as long as the first archive was extracted successfully.

Not just the first one, but all of the leading ones. Meaning: as long
as at least one archive, starting with the first(!), was decompressed
successfully, the error should not be treated as fatal.
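
A minimal sketch of that policy (my own illustration, not a patch against
initramfs.c; the names are made up): count the archives that completed
and only wipe the rootfs when the failure happened before the first one
finished:

#include <stdbool.h>
#include <stdio.h>

/* 'err' is the message from the unpack loop (NULL on success),
 * 'completed' is how many archives were fully extracted before the
 * error occurred. */
static bool should_wipe_rootfs(const char *err, unsigned int completed)
{
        if (!err)
                return false;   /* clean success, keep the rootfs */
        if (completed > 0)
                return false;   /* leading archive(s) made it; treat
                                   trailing junk as non-fatal */
        return true;            /* nothing usable was extracted */
}

int main(void)
{
        /* oversized bootparam case: one good archive, then junk -> keep */
        printf("%d\n", should_wipe_rootfs("junk in compressed archive", 1));
        /* real failure before anything was extracted -> clean up */
        printf("%d\n", should_wipe_rootfs("junk in compressed archive", 0));
        return 0;
}

In other words, the cleanup your patch adds would become conditional on
whether anything was already extracted, instead of running on any error.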

-- 
With Best Regards,
Andy Shevchenko
