Message-ID: <1354273697.30168.87.camel@sauron.fi.intel.com>
Date: Fri, 30 Nov 2012 13:08:17 +0200
From: Artem Bityutskiy <dedekind1@...il.com>
To: Ezequiel Garcia <elezegarcia@...il.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mtd@...ts.infradead.org,
David Woodhouse <dwmw2@...radead.org>,
Tim Bird <tim.bird@...sony.com>,
Michael Opdenacker <michael.opdenacker@...e-electrons.com>
Subject: Re: [RFC/PATCH 0/1] ubi: Add ubiblock driver
Hi, even before doing a proper review, I can say that overall this sounds good, thanks!
On Tue, 2012-11-20 at 19:39 -0300, Ezequiel Garcia wrote:
> Also, I've decided to make block devices get automatically created for
> each ubi volume present.
> This has been done to match gluebi's behavior of automatically creating
> an mtd device per ubi volume, and to save us the trouble of a usertool.
>
> The latter is the most important reason: a new usertool means added
> complexity for the user and yet more crap to maintain.
> I don't know how many ubi volumes a user typically creates, but I
> expect it won't be too many.
I think I saw something like 8-10 in some people's reports.
> * Read/write support
>
> Yes, this implementation supports read/write access.
> It's expected to work fairly well because the block elevator behind the
> request queue is supposed to order block transfers to be space-effective.
> In other words, it's expected that reads and writes get ordered so that
> consecutive requests point to the same LEB (see Artem's hint at [1]).
>
> To help this and reduce access to the UBI volume, a 1-LEB sized
> write-back cache has been implemented (similar to the one in mtdblock.c).
>
> Every read and every write goes through this cache, and the cache is only
> written back when a request arrives for a different LEB or when the
> device is released, i.e. when the last file handle is closed.
Sounds good, but you should make sure you flush the cache when the
file-system syncs a file. You can consider this as a disk cache.
File-systems usually send I/O barriers (flush requests) when the disk
cache has to be flushed. I guess this is what you should handle too.
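Something along these lines, I mean. This is a completely untested
sketch, and I am inventing the names (ubiblock_flush_cache(), dev->rq,
the request loop) since I have not looked at your code yet:

	/* When setting up the request queue, advertise a volatile
	 * write-back cache so the block layer sends us flush requests. */
	blk_queue_flush(dev->rq, REQ_FLUSH);

	/* In the request processing loop: */
	while ((req = blk_fetch_request(dev->rq)) != NULL) {
		if (req->cmd_flags & REQ_FLUSH) {
			/* Write out the dirty LEB cache before completing */
			err = ubiblock_flush_cache(dev);
			__blk_end_request_all(req, err);
			continue;
		}
		/* ... normal read/write handling ... */
	}

The same ubiblock_flush_cache() helper could then be called from your
release() path as well.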
> This cache is one LEB in size, vmalloc'ed at open() and freed at release().
Is it per block device? Then I am not sure it is a good idea to
automatically create them for every volume... With, say, ten volumes
open and a typical 128 KiB LEB, that is already more than a megabyte of
vmalloc'ed memory just for the caches.
--
Best Regards,
Artem Bityutskiy