Message-ID: <CAOuPNLiMnHJJNFBbOrMOLmnxU86ROMBaLaeFxviPENCkuKfUVg@mail.gmail.com>
Date:   Mon, 24 May 2021 11:42:23 +0530
From:   Pintu Agarwal <pintu.ping@...il.com>
To:     Sean Nyekjaer <sean@...nix.com>
Cc:     Phillip Lougher <phillip@...ashfs.org.uk>,
        open list <linux-kernel@...r.kernel.org>,
        linux-mtd@...ts.infradead.org, linux-fsdevel@...r.kernel.org
Subject: Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after
 flashing rootfs volume

On Sun, 23 May 2021 at 23:01, Sean Nyekjaer <sean@...nix.com> wrote:
>

> > I have also tried that, and the checksum matches exactly:
> > $ md5sum system.squash
> > d301016207cc5782d1634259a5c597f9  ./system.squash
> >
> > On the device:
> > /data/pintu # dd if=/dev/ubi0_0 of=squash_rootfs.img bs=1K count=48476
> > 48476+0 records in
> > 48476+0 records out
> > 49639424 bytes (47.3MB) copied, 26.406276 seconds, 1.8MB/s
> > [12001.375255] dd (2392) used greatest stack depth: 4208 bytes left
> >
> > /data/pintu # md5sum squash_rootfs.img
> > d301016207cc5782d1634259a5c597f9  squash_rootfs.img
> >
> > So, it seems there is no problem with either the original image
> > (per unsquashfs) or with the checksum.
> >
> > Then what else could be the suspect here?
> > If you have any further inputs, please share your thoughts.
> >
> > This is the kernel command line we are using:
> > [    0.000000] Kernel command line: ro rootwait
> > console=ttyMSM0,115200,n8 androidboot.hardware=qcom
> > msm_rtb.filter=0x237 androidboot.console=ttyMSM0
> > lpm_levels.sleep_disabled=1 firmware_class.path=/lib/firmware/updates
> > service_locator.enable=1 net.ifnames=0 rootfstype=squashfs
> > root=/dev/ubiblock0_0 ubi.mtd=30 ubi.block=0,0
> >
> > A few more points to note:
> > a) With squashfs we get the error below:
> > [    4.603156] squashfs: SQUASHFS error: unable to read xattr id index table
> > [...]
> > [    4.980519] Kernel panic - not syncing: VFS: Unable to mount root
> > fs on unknown-block(254,0)
> >
> > b) With ubifs (without squashfs) we get the error below:
> > [    4.712458] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0,
> > name "rootfs", R/O mode
> > [...]
> > UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node type (255 but expected 9)
> > UBIFS error (ubi0:0 pid 1): ubifs_read_node: bad node at LEB
> > 336:250560, LEB mapping status 1
> > Not a node, first 24 bytes:
> > 00000000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> > ff ff ff ff
> >
> > c) When flashing the "usrfs" volume (ubi0_1) there is no issue and
> > the device boots successfully.
> >
> > d) This issue happens only after flashing the rootfs volume (ubi0_0)
> > and rebooting the device.
> >
> > e) We are using "uefi" and the fastboot mechanism to flash the volumes.
> Are you writing the squashfs into the ubi block device with uefi/fastboot?
> >
> > f) Next I wanted to check the read-only UBI volume flashing mechanism
> > within the kernel itself.
> > Is there a way to flash a read-only "rootfs" (squashfs type) UBI
> > volume from the Linux command prompt?
> > Or, what other ways are there to verify UBI volume flashing in Linux?
> >
> > g) I wanted to root-cause whether the problem is in our UBI flashing
> > logic, in something missing on the Linux/kernel side (squashfs or
> > ubifs), or in the way we configure the system.

>
> Did you ever have this working? Or is this a new project?
> If it worked before, I would start bisecting...
>

No, this is still experimental.
Currently we are only able to write to the UBI volumes, but after a
rootfs volume update the device no longer boots.
However, with "userdata" it works fine.

I have a few more questions to clarify.

a) Is there a way in the kernel to update a UBI volume while the
device is running?
    I tried "ubiupdatevol" but it does not seem to work.
    Is it only for updating an empty volume?
    Or maybe I don't know how to use it to update the live "rootfs"
volume (see the sketch below).
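
For illustration, this is roughly how I understand "ubiupdatevol" is
meant to be used; just a sketch with the volume node and image name
from above, assuming mtd-utils is available and the volume is not in
use (ubiupdatevol replaces the entire volume contents):

    ubinfo /dev/ubi0_0                       # confirm volume id and size first
    ubiupdatevol /dev/ubi0_0 system.squash   # rewrite the whole volume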

b) How can we verify the volume checksum as soon as we finish writing
the content, given that the device does not boot?
     Is there a way to verify the rootfs checksum at the bootloader or
kernel level before mounting? (One userspace idea is sketched below.)
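
The sketch: read back only the image length from the volume node,
since the tail of the last LEB is padding and would otherwise change
the hash (this assumes the image size is known up front):

    IMG=system.squash
    SZ=$(stat -c %s "$IMG")              # 49639424 in our case
    head -c "$SZ" /dev/ubi0_0 | md5sum   # readback truncated to image size
    md5sum "$IMG"                        # should print the same hash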

c) We are configuring the UBI volumes in this way. Is this fine?
(A small sanity check follows the config.)
[rootfs_volume]
mode=ubi
image=.<path>/system.squash
vol_id=0
vol_type=dynamic
vol_name=rootfs
vol_size=62980096  ==> 60.0625 MiB
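
As a quick sanity check, using the leb-size of 253952 reported further
below, the vol_size is an exact multiple of the LEB size, so the
volume itself looks LEB-aligned:

    echo $(( 62980096 / 253952 ))   # 248 LEBs
    echo $(( 62980096 % 253952 ))   # 0, no remainder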

A few more details:
----------------------
Our actual squashfs image size:
$ ls -l ./system.squash
-rw-r--r-- 1 pintu users 49639424 ../system.squash

after erase_volume: page-size: 4096, block-size-bytes: 262144,
vtbl-count: 2, used-blk: 38, leb-size: 253952, leb-blk-size: 62
Thus:
49639424 / 253952 ≈ 195.47 LEBs

This rounds up to 196 LEBs, so the image does not fill a whole number
of LEBs (worked out below). Is there any issue with this?
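
Spelling out the rounding with the same numbers (plain arithmetic, no
new data): the image needs 196 LEBs, and the unused tail of the last
LEB reads back as 0xFF, which may be related to the all-0xFF "bad
node" bytes in the UBIFS error quoted above:

    IMAGE=49639424; LEB=253952
    echo $(( (IMAGE + LEB - 1) / LEB ))   # 196 LEBs needed
    echo $(( 196 * LEB - IMAGE ))         # 135168 bytes of padding in the last LEB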

If you have any suggestions for debugging this further, please share them.


Thanks,
Pintu
