Message-ID: <CAOuPNLgrwnqv_=Ux5SeY3XTDG2b0=ntRbciWVshhaVwJYFEZ3g@mail.gmail.com>
Date:   Tue, 25 May 2021 14:52:06 +0530
From:   Pintu Agarwal <pintu.ping@...il.com>
To:     Phillip Lougher <phillip@...ashfs.org.uk>
Cc:     Sean Nyekjaer <sean@...nix.com>,
        open list <linux-kernel@...r.kernel.org>,
        linux-mtd@...ts.infradead.org, linux-fsdevel@...r.kernel.org
Subject: Re: [RESEND]: Kernel 4.14: UBIFS+SQUASHFS: Device fails to boot after
 flashing rootfs volume

On Mon, 24 May 2021 at 12:37, Phillip Lougher <phillip@...ashfs.org.uk> wrote:
>
> > No, this is still experimental.
> > Currently we are only able to write to the ubi volumes, but after
> > that the device does not boot (with the rootfs volume update).
> > However, with "userdata" it is working fine.
> >
> > I have a few more questions to clarify.
> >
> > a) Is there a way in the kernel to update a ubi volume while the
> > device is running?
> >     I tried "ubiupdatevol" but it does not seem to work.
> >     I guess it only works for updating an empty volume?
> >     Or maybe I don't know how to use it to update the live "rootfs" volume.
> >
> > b) How can we verify the volume checksum as soon as we finish writing the
> > content, since the device is not booting?
> >      Is there a way to verify the rootfs checksum at the bootloader or
> > kernel level before mounting?
> >
> > c) We are configuring the ubi volumes this way. Is this fine?
> > [rootfs_volume]
> > mode=ubi
> > image=.<path>/system.squash
> > vol_id=0
> > vol_type=dynamic
> > vol_name=rootfs
> > vol_size=62980096  ==> 60.0625 MiB
> >
> > Few more info:
> > ----------------------
> > Our actual squashfs image size:
> > $ ls -l ./system.squash
> > -rw-r--r-- 1 pintu users 49639424 ../system.squash
> >
> > after erase_volume: page-size: 4096, block-size-bytes: 262144,
> > vtbl-count: 2, used-blk: 38, leb-size: 253952, leb-blk-size: 62
> > Thus:
> > 49639424 / 253952 = 195.46 blocks
> >
> > This then rounds up to 196 blocks, which does not match exactly.
> > Is there any issue with this?
> >
> > If you have any suggestions to debug this further, please help us...
> >
> >
> > Thanks,
> > Pintu
>
> Three perhaps obvious questions here:
>
> 1. As an experimental system, are you using a vanilla (unmodified)
>    Linux kernel, or have you made modifications?  If so, how is it
>    modified?
>
> 2. What is the difference between "rootfs" and "userdata"?
>    Have you written exactly the same Squashfs image to "rootfs"
>    and "userdata", and has it worked with "userdata" but not
>    with "rootfs"?
>
>    So far it is unclear whether "userdata" has only worked because
>    you've written different images/data to it.
>
>    In other words, tell us exactly what you're writing to "userdata"
>    and what you're writing to "rootfs".  The difference or non-difference
>    may be significant.
>
> 3. The rounding up to a whole 196 blocks should not be a problem.
>    The problem is, obviously, if it is rounding down to 195 blocks,
>    where the tail end of the Squashfs image will be lost.
>
>    Remember, this is exactly what the Squashfs error is saying: the image
>    has been truncated.
>
>    You could try adding a lot of padding to the end of the Squashfs image
>    (Squashfs won't care), so it is more than the effective block size,
>    and then writing that, to prevent any rounding down or truncation.
>
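(As a side note on point 3: padding the image up to a whole number of
LEBs could be done with something like the following, assuming GNU
coreutils is available; 253952 is the leb-size reported above, and
truncate's "%N" size rounds the file up to the next multiple of N:

$ truncate -s %253952 system.squash

In the end we did not need this, since the root cause turned out to
be elsewhere, see below.)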

Just wanted to share the good news that the ubi volume flashing is
working now :)
First, I created a small read-only volume (instead of rootfs), wrote
to it, and then compared the checksums.
Initially the checksums did not match, and when I compared the two
images I found around 8192 bytes of 0xFF data at the end of each
erase block (which matches the PEB/LEB difference above:
262144 - 253952 = 8192).
After the fix, the checksums match exactly.

/data/pintu # md5sum test-vol-orig.img
6a8a185ec65fcb212b6b5f72f0b0d206  test-vol-orig.img

/data/pintu # md5sum test-vol-after.img
6a8a185ec65fcb212b6b5f72f0b0d206  test-vol-after.img
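In case it helps anyone reproduce the comparison: one way to read the
volume back is through its character device. Something like this (the
device name and sizes are illustrative; bs*count must equal the
original image size, so that the erased tail of the volume is not
included in the read-back image):

/data/pintu # dd if=/dev/ubi0_0 of=test-vol-after.img bs=4096 count=$((IMG_SIZE / 4096))
/data/pintu # md5sum test-vol-orig.img test-vol-after.img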

Once this was working, I tried the rootfs volume, and this time the
device boots fine :)

The fix is related to the data-len and data-offset calculation in our
volume write code:
[...]
size += data_offset;
[...]
ubi_block_write(....)
buf_size -= (size - data_offset);
offset += (size - data_offset);
[...]
Previously, we were not adding and subtracting the data_offset.
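For anyone hitting the same issue, here is a minimal sketch of the
corrected loop. All names are illustrative (our actual write code
differs); the assumption, consistent with the numbers above, is that
data_offset = 2 * page_size = 8192 covers the EC and VID header pages
at the start of each PEB, so leb_size = peb_size - data_offset =
253952:

size_t remaining = image_size;  /* squashfs image bytes left to write */
size_t offset    = 0;           /* current position in the image buffer */

while (remaining > 0) {
        /* payload for this erase block: at most one LEB of image data */
        size_t size = (remaining < leb_size) ? remaining : leb_size;

        /* the size handed to the block-write routine spans the whole
         * PEB region, including the data_offset header area ... */
        size += data_offset;

        ubi_block_write(....);

        /* ... but only (size - data_offset) bytes of actual image data
         * were consumed from the buffer */
        remaining -= (size - data_offset);
        offset    += (size - data_offset);
}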

The Kernel command line we are using is this:
[    0.000000] Kernel command line: ro rootwait
console=ttyMSM0,115200,n8 [..skip..] rootfstype=squashfs
root=/dev/mtdblock34 ubi.mtd=30,0,30 [...skip..]

Hopefully these parameters are fine (no change here).

Thank you Phillip and Sean for your help.
Phillip, I think the checksum trick really helped me figure out
the root cause :)

Glad to work with you...

Thanks,
Pintu
