Message-ID: <CAOuPNLghc1ktLrOEf8PN+snMB3QZG-LwzPbd3kGzrhGz8mEAVg@mail.gmail.com>
Date: Fri, 29 Oct 2021 21:42:56 +0530
From: Pintu Agarwal <pintu.ping@...il.com>
To: Ezequiel Garcia <ezequiel@...guardiasur.com.ar>
Cc: Richard Weinberger <richard@....at>,
Kernelnewbies <kernelnewbies@...nelnewbies.org>,
Greg KH <greg@...ah.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-mtd <linux-mtd@...ts.infradead.org>,
Sean Nyekjaer <sean@...nix.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Phillip Lougher <phillip@...ashfs.org.uk>
Subject: Re: MTD: How to get actual image size from MTD partition
Hi All,
On Mon, 30 Aug 2021 at 21:28, Pintu Agarwal <pintu.ping@...il.com> wrote:
>
> On Sun, 22 Aug 2021 at 19:51, Ezequiel Garcia
> <ezequiel@...guardiasur.com.ar> wrote:
>
> > In other words, IMO it's best to expose the NAND through UBI
> > for both read-only and read-write access, using a single UBI device,
> > and then creating UBI volumes as needed. This will allow UBI
> > to spread wear leveling across the whole device, which is expected
> > to increase the flash lifetime.
> >
> > For instance, just as some silly example, you could have something like this:
> >
> >                     | RootFS SquashFS |
> >                     |    UBI block    |  UBIFS User R-W area
> > ------------------------------------------------------------------------
> > Kernel A | Kernel B | RootFS A | RootFS B |         User
> > ------------------------------------------------------------------------
> >                                  UBIX
> > ------------------------------------------------------------------------
> >                               /dev/mtdX
> >
> > This setup allows safe kernel and rootfs upgrading. The RootFS is read-only
> > via SquashFS and there's a read-write user area. UBI is supporting all
> > the volumes, handling bad blocks and wear leveling.
> >
> Dear Ezequiel,
> Thank you so much for your reply.
>
> This is exactly what we are also doing :)
> In our system we have a mix of raw and UBI partitions,
> and the UBI partitioning is done in almost exactly the same way.
> Only for the rootfs (squashfs) we were using /dev/mtdblock<id> to
> mount the rootfs; now I understand we should change it to use
> /dev/ubiblock<id> instead.
> This should have several benefits, but the most important one could be
> that ubiblock handles bad blocks and wear-leveling automatically
> (through UBI), whereas mtdblock accesses the flash directly, correct?
> I found some references for this, so it looks good for my proposal.
>
> Another thing that is still open for us:
> how do we calculate the exact image size from a raw MTD partition?
> For example, suppose for one of the raw NAND partitions the size is
> defined as 15MB, but the actual image we flash is only 2.5MB.
> At runtime, how can we determine the image size as ~2.5MB (at least
> roughly)? Is that still possible?
>
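For context, our UBI layout is created much like the sketch above. A
rough outline with mtd-utils (the device numbers, volume names and
sizes here are only illustrative, not our real values):

  # attach the MTD partition that holds UBI, then create the volumes
  ubiattach /dev/ubi_ctrl -m 5
  ubimkvol /dev/ubi0 -N rootfs_a -s 48MiB
  ubimkvol /dev/ubi0 -N rootfs_b -s 48MiB
  ubimkvol /dev/ubi0 -N userdata -m   # -m = use all remaining space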
I am happy to report that using "ubiblock" for squashfs mounting has
turned out to be very helpful for us.
We have seen almost double the read performance when using ubiblock
for the rootfs as well as for other read-only volume mounts.
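Roughly, this is what we do now (the volume index and mount point are
just an example):

  # create a read-only block device on top of the UBI volume
  ubiblock --create /dev/ubi0_3
  # mount the squashfs through it instead of /dev/mtdblock<id>
  mount -t squashfs -o ro /dev/ubiblock0_3 /mnt/rootfs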
However, we found a few issues when defining the read-only volume as STATIC.
With a static volume we see that the OTA update fails during "fsync",
i.e. ota_fsync fails here:
https://gerrit.pixelexperience.org/plugins/gitiles/bootable_recovery/+/ff6df890a2a01bf3bf56d3f430b17a5ef69055cf%5E%21/otafault/ota_io.cpp
int ota_fsync(int fd) {
    int status = fsync(fd);
    if (status == -1 && errno == EIO) {
        have_eio_error = true;
    }
    return status;
}
Is this a known issue with static volumes?
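As far as I understand, a static volume can only be rewritten through
the UBI volume-update mechanism (the UBI_IOCVOLUP ioctl, which is what
ubiupdatevol wraps), not through a normal write()/fsync() path, so
perhaps that explains the failure. For example (the volume path is
illustrative):

  # replace the contents of a static volume with a new image
  ubiupdatevol /dev/ubi0_1 rootfs.squashfs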
For now we are using a dynamic volume, but the problem is that with a
dynamic volume we cannot get the exact image size from:
$ cat /sys/class/ubi/ubi0_0/data_bytes
==> For a dynamic volume this returns the total volume size.
==> Thus our md5 integrity check does not match the size of the
flashed image.
Is there an alternate way to handle this issue?
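One workaround we are considering (assuming a squashfs 4.x image on a
little-endian target): read the actual filesystem size from the
squashfs superblock, where bytes_used is a 64-bit field at byte offset
40, and limit the md5 to that many bytes:

  # read bytes_used from the squashfs superblock (offset 40, 8 bytes, LE)
  SIZE=$(dd if=/dev/ubi0_0 bs=1 skip=40 count=8 2>/dev/null | od -An -tu8 | tr -d ' ')
  # checksum only the flashed image, not the whole volume
  head -c "$SIZE" /dev/ubi0_0 | md5sum

Note that mksquashfs pads the image to a 4KiB boundary, so the
comparison may need to account for that padding.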
Thanks,
Pintu