Message-ID: <552ACD28.8000409@nod.at>
Date:	Sun, 12 Apr 2015 21:53:12 +0200
From:	Richard Weinberger <richard@....at>
To:	Boris Brezillon <boris.brezillon@...e-electrons.com>
CC:	linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org,
	dedekind1@...il.com
Subject: Re: [PATCH 4/4] UBI: Implement bitrot checking

On 12.04.2015 21:20, Boris Brezillon wrote:
> Unless I'm missing something, it should be pretty easy to implement:
> adding the following lines at the end of bitrot_check_worker() should do
> the trick
> 
> 	if (e->pnum + 1 < ubi->peb_count) {
> 		wl_wrk->e = ubi->lookuptbl[e->pnum + 1];
> 		__schedule_ubi_work(ubi, wl_wrk);
> 	} else {
> 		atomic_dec(&ubi->bit_rot_work);
> 	}
>

It will suffer from the same race issue as my current approach:
while e is scheduled, another worker could free it in case of a fatal
error.
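
Roughly this interleaving (illustrative; the exact free site depends on
whatever the fatal-error handling does with e):

	bitrot work                          other work
	-----------                          ----------
	wl_wrk->e = ubi->lookuptbl[e->pnum + 1];
	__schedule_ubi_work(ubi, wl_wrk);
	                                     hits a fatal error and
	                                     frees the wl_entry e
	bitrot_check_worker() runs and
	dereferences the stale wl_wrk->e     -> use-after-free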

>> I'd like to avoid works which in turn schedule other works.
>> With the current approach it is clear where work is scheduled and how
>> much of it.
> 
> Yes, but the memory consumption induced by this approach can be pretty
> big on modern NAND chips (on 32-bit platforms, ubi_work is 32 octets
> large, and on modern NANDs you often have 4096 blocks, so a UBI device
> of 4000 blocks is pretty common => 4000 * 32 bytes = 125 KiB).

While I agree that consuming memory is not very nice, I don't think that
125 KiB is a big deal.

> For standard wear-leveling requests, using a ubi_work per request is
> sensible, since you can't know in advance which block will be queued
> for wear leveling next.
> In your case, you're scanning all blocks in ascending order, which
> makes it a good candidate for this 'one work for all bitrot checks'
> approach.
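
Something like this is how I'd picture such a single worker (rough
sketch, untested; bitrot_check_peb() is a hypothetical helper holding
the per-PEB check from the patch, and dereferencing e would still need
protection against it being freed underneath us):

	static int bitrot_check_worker(struct ubi_device *ubi,
				       struct ubi_work *wl_wrk, int shutdown)
	{
		int pnum, err = 0;
		struct ubi_wl_entry *e;

		if (shutdown)
			goto out;

		/* One work item walks all PEBs in ascending order. */
		for (pnum = 0; pnum < ubi->peb_count; pnum++) {
			spin_lock(&ubi->wl_lock);
			e = ubi->lookuptbl[pnum];
			spin_unlock(&ubi->wl_lock);
			if (!e)
				continue;

			err = bitrot_check_peb(ubi, e);
			if (err)
				break;
		}
	out:
		atomic_dec(&ubi->bit_rot_work);
		kfree(wl_wrk);
		return err;
	}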

The good news is that I have an idea to solve both problems, the race
and the memory issue. It should be pretty easy to implement.
Patches will materialize in a few days.

Thanks,
//richard
