Message-ID: <1337082012.2528.181.camel@sauron.fi.intel.com>
Date: Tue, 15 May 2012 14:40:12 +0300
From: Artem Bityutskiy <dedekind1@...il.com>
To: Joel Reardon <joel@...mbassador.com>
Cc: linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2] UBI: add ubi_lnum_purge function to clear work queue
for a lnum
On Tue, 2012-05-15 at 09:44 +0200, Joel Reardon wrote:
> This is the second part of a patch to allow UBI to force the erasure of
> particular logical eraseblock numbers. In this patch, a new function,
> ubi_lnum_purge, is added that allows the caller to synchronously erase all
> unmapped erase blocks whose LEB number corresponds to the parameter. This
> requires a previous patch that stores the LEB number in struct ubi_work.
>
> This was tested by disabling the call to do_work in the UBI thread, which
> results in the work queue remaining populated until it is explicitly flushed.
> UBIFS was
> changed to call ubifs_leb_change 50 times for three different LEBs. Then the
> new function was called to clear the queue for the three different LEB numbers
> one at a time. The work queue was dumped each time and the selective removal
> of the particular LEB numbers was observed.
>
> Signed-off-by: Joel Reardon <reardonj@....ethz.ch>
No objections in general, and I can merge this as soon as you send the
final version. However, for this version I have several notes.
> +/**
> + * ubi_lnum_purge - synchronously erase unmapped PEBs by LEB number.
> + * @ubi_num: number of the UBI device whose PEBs should be erased
> + * @lnum: the LEB number whose old, unmapped PEBs should be erased
> + *
> + * This function is designed to offer a means to ensure that the contents of
> + * old, unmapped LEBs are securely erased without having to flush the entire
> + * work queue of all erase blocks that need erasure. Simply erasing the block
> + * at the time of unmapping is insufficient, as the wear-levelling subsystem
> + * may have already moved the contents. This function navigates the list of
> + * erase blocks that need erasures, and performs an immediate and synchronous
> + * erasure of any erase block that has held data for this particular @lnum.
> + * This may include eraseblocks that held older versions of the same @lnum.
> + * Returns zero in case of success and a negative error code in case of
> + * failure.
> + */
> +int ubi_lnum_purge(int ubi_num, int lnum)
> +{
> +	int err;
> +	struct ubi_device *ubi;
> +
> +	ubi = ubi_get_device(ubi_num);
> +	if (!ubi)
> +		return -ENODEV;
> +
> +	err = ubi_wl_flush_lnum(ubi, lnum);
> +	ubi_put_device(ubi);
> +	return err;
> +}
> +EXPORT_SYMBOL_GPL(ubi_lnum_purge);
Please do not introduce a separate exported function for this. Instead,
add an "lnum" argument to "ubi_wl_flush" and preserve the old behavior
when lnum is -1. Document this in the header comment. In your case I
think you also need to call mtd->sync().
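Something along these lines, perhaps (only a sketch: flush_works() is a
made-up name standing in for the existing flushing code, and I am assuming
the mtd_sync() wrapper here):

	int ubi_wl_flush(struct ubi_device *ubi, int lnum)
	{
		int err;

		/*
		 * flush_works() stands in for the existing flush loop;
		 * lnum == -1 means "flush everything", i.e. the old
		 * behaviour of ubi_wl_flush().
		 */
		err = flush_works(ubi, lnum);
		if (err)
			return err;

		/*
		 * For the selective case the erased data must really
		 * reach the flash, hence the sync.
		 */
		mtd_sync(ubi->mtd);
		return 0;
	}

The existing callers would then simply pass -1.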
> +	dbg_wl("flush lnum %d", lnum);
> +	list_for_each_entry_safe(wrk, tmp, &ubi->works, list) {
> +		if (wrk->lnum == lnum) {
> +			down_read(&ubi->work_sem);
> +			spin_lock(&ubi->wl_lock);
But you cannot walk the ubi->works list without holding the spinlock:
anyone may add or remove elements to/from this list concurrently. Take
the work_sem at the beginning and release it at the very end.
Then you can do something like this:
	int found = 1;

	while (found) {
		found = 0;
		spin_lock(&ubi->wl_lock);
		list_for_each_entry(wrk, &ubi->works, list) {
			if (wrk->lnum == lnum) {
				list_del(&wrk->list);
				ubi->works_count -= 1;
				ubi_assert(ubi->works_count >= 0);
				spin_unlock(&ubi->wl_lock);

				/*
				 * The work function frees wrk, so restart
				 * the list walk from the beginning.
				 */
				err = wrk->func(ubi, wrk, 0);
				if (err)
					return err;

				spin_lock(&ubi->wl_lock);
				found = 1;
				break;
			}
		}
		spin_unlock(&ubi->wl_lock);
	}
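And with the work_sem taken around the whole thing, roughly (sketch only;
the error path goes through a label so the semaphore is not leaked):

	int err = 0, found = 1;

	down_read(&ubi->work_sem);

	/*
	 * ... the while/list_for_each_entry loop from above goes here,
	 * with "return err" replaced by "goto out_unlock" ...
	 */

out_unlock:
	up_read(&ubi->work_sem);
	return err;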
--
Best Regards,
Artem Bityutskiy