Message-ID: <CALF0-+XBqaWFxHbnbOyaYW01XAjm78CrdxkyyjMKSexyvVDV-w@mail.gmail.com>
Date:	Wed, 21 Nov 2012 07:42:24 -0300
From:	Ezequiel Garcia <elezegarcia@...il.com>
To:	Thomas Petazzoni <thomas.petazzoni@...e-electrons.com>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-mtd@...ts.infradead.org,
	Artem Bityutskiy <dedekind1@...il.com>,
	David Woodhouse <dwmw2@...radead.org>,
	Tim Bird <tim.bird@...sony.com>,
	Michael Opdenacker <michael.opdenacker@...e-electrons.com>
Subject: Re: [RFC/PATCH 0/1] ubi: Add ubiblock driver

Hi Thomas,

On Wed, Nov 21, 2012 at 7:00 AM, Thomas Petazzoni
<thomas.petazzoni@...e-electrons.com> wrote:
> Dear Ezequiel Garcia,
>
> On Tue, 20 Nov 2012 19:39:38 -0300, Ezequiel Garcia wrote:
>
>> * Read/write support
>>
>> Yes, this implementation supports read/write access.
>
> While I think the original ubiblk, which was read-only, made sense to
> allow the use of read-only filesystems like squashfs, I am not sure a
> read/write ubiblock is useful.
>
> Using a standard read/write block filesystem on top of ubiblock is
> going to cause damage to your flash. Even though UBI does
> wear-leveling, your standard read/write block filesystem will think
> it has 512-byte blocks below it, and will do a crazy number of writes
> to small blocks. Even though you have a one-LEB cache, it is going to
> be defeated quite thoroughly by the small random I/O of the
> read/write filesystem.
>

Well, I was hoping for the opposite to happen: that the 1-LEB cache
would be able to absorb the many small writes coming from the
filesystem.

My line of reasoning is as follows.
As we all know, LEBs are much, much bigger than regular disk blocks:
a LEB is typically 128KiB, while a disk sector is 512 bytes.

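To make the size difference concrete, here is a toy user-space sketch
(assuming 512-byte sectors and 128KiB LEBs; illustrative only, not the
driver's actual code) of how a sector number maps to a (LEB, offset)
pair:

#include <stdio.h>

#define SECTOR_SIZE      512
#define LEB_SIZE         (128 * 1024)
#define SECTORS_PER_LEB  (LEB_SIZE / SECTOR_SIZE)   /* 256 */

int main(void)
{
        unsigned long sector = 1000;   /* arbitrary example sector */
        unsigned long leb    = sector / SECTORS_PER_LEB;
        unsigned long offset = (sector % SECTORS_PER_LEB) * SECTOR_SIZE;

        /* prints: sector 1000 -> LEB 3, offset 118784 */
        printf("sector %lu -> LEB %lu, offset %lu\n", sector, leb, offset);
        return 0;
}
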
Now, filesystems won't care at all about wear leveling,
and thus will carelessly perform lots of reads and writes to any disk
sector.

Because the block elevator tries to minimize seek time, it will order
block requests so that they are contiguous. Since LEBs are much bigger
than sectors, this ordering means that most consecutive requests
address the same LEB: with the sizes above, 256 consecutive sectors
fall inside a single LEB.

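As a toy illustration of that effect (again assuming 512-byte sectors
and 128KiB LEBs; count_leb_switches() is made up for this example),
compare how often a sorted request stream changes LEBs versus an
unsorted one:

#include <stdio.h>

#define SECTORS_PER_LEB 256   /* 128KiB / 512B */

static int count_leb_switches(const unsigned long *sectors, int n)
{
        int i, switches = 0;
        long cur = -1;

        for (i = 0; i < n; i++) {
                long leb = sectors[i] / SECTORS_PER_LEB;
                if (leb != cur) {
                        switches++;
                        cur = leb;
                }
        }
        return switches;
}

int main(void)
{
        unsigned long sorted[]   = { 100, 101, 102, 300, 301, 302 };
        unsigned long unsorted[] = { 100, 300, 101, 301, 102, 302 };

        printf("sorted: %d\n",   count_leb_switches(sorted, 6));   /* 2 */
        printf("unsorted: %d\n", count_leb_switches(unsorted, 6)); /* 6 */
        return 0;
}

The sorted stream touches each LEB once, while the unsorted one would
make a 1-LEB cache flush on nearly every request.
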
Only when a read or write arrives at a LEB different from the one in
cache will ubiblock flush the cached LEB back to flash.

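In pseudo-C, the caching idea looks something like this (an
illustrative sketch only; the struct and the read_leb()/flush_leb()
helpers are made up here, not the driver's actual API):

#include <string.h>

void read_leb(int leb, char *buf);    /* fill buf from the LEB */
void flush_leb(int leb, char *buf);   /* write buf back to the LEB */

struct leb_cache {
        char *buf;    /* one LEB worth of data */
        int   leb;    /* LEB currently cached, -1 if empty */
        int   dirty;  /* modified since it was read? */
};

/* Small writes to the currently cached LEB are absorbed in memory;
 * flash is touched only when the write lands on a different LEB. */
void cache_write(struct leb_cache *c, int leb, int offset,
                 const char *src, size_t len)
{
        if (c->leb != leb) {
                if (c->leb >= 0 && c->dirty)
                        flush_leb(c->leb, c->buf);
                read_leb(leb, c->buf);
                c->leb = leb;
                c->dirty = 0;
        }
        memcpy(c->buf + offset, src, len);
        c->dirty = 1;
}
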
My **very limited** testing with ext2 showed things behaving more or
less like this.
Next time, I'll post some benchmarks and numbers.

Of course, there's a possibility you are right and ubiblock write support
is completely useless.

Thanks for the review,

    Ezequiel