Message-ID: <552AE4E0.1010605@nod.at>
Date:	Sun, 12 Apr 2015 23:34:24 +0200
From:	Richard Weinberger <richard@....at>
To:	Boris Brezillon <boris.brezillon@...e-electrons.com>
CC:	linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org,
	dedekind1@...il.com
Subject: Re: [PATCH 4/4] UBI: Implement bitrot checking

On 12.04.2015 at 23:24, Boris Brezillon wrote:
> On Sun, 12 Apr 2015 21:53:12 +0200
> Richard Weinberger <richard@....at> wrote:
> 
>> On 12.04.2015 at 21:20, Boris Brezillon wrote:
>>> Unless I'm missing something, it should be pretty easy to implement:
>>> adding the following lines at the end of bitrot_check_worker() should do
>>> the trick
>>>
>>> 	if (e->pnum + 1 < ubi->peb_count) {
>>> 		wl_wrk->e = ubi->lookuptbl[e->pnum + 1];
>>> 		__schedule_ubi_work(ubi, wl_wrk);
>>> 	} else {
>>> 		atomic_dec(&ubi->bit_rot_work);
>>> 	}
>>> 	
>>
>> It will suffer from the same race issue as my current approach.
>> While e is scheduled, another worker could free it in case of a fatal
>> error.
> 
> Right, I guess grabbing wl_lock before retrieving ->e (and iterating
> over the lookuptbl until it's != NULL) would partly solve the problem.
> But I'm not sure how you would handle this sequence (not sure it can
> happen though):
> 1/ schedule bitrot check on PEB X
> 2/ execute some operation on PEB X that might free PEB X's entry in the
>    lookuptbl
> 3/ execute bitrot check on PEB X
> 
> In this case ->e is invalid (pointing to a freed memory region) at
> the time the bitrot check is executed.

We have to make sure that we remove e from the data structure.
This is what UBI currently does for all works.
That way only one work can operate on a ubi_wl_entry.
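
Roughly, the idea looks like this (just a sketch against the wl.c data
structures, not tested; the used tree is only an example here, the
entry could live in any of the wear-leveling trees):

	struct ubi_wl_entry *e;

	spin_lock(&ubi->wl_lock);
	e = ubi->lookuptbl[pnum];
	if (e) {
		/* Detach e so no other path can grab or free it. */
		rb_erase(&e->u.rb, &ubi->used);
		wl_wrk->e = e;
	}
	spin_unlock(&ubi->wl_lock);

	if (e)
		__schedule_ubi_work(ubi, wl_wrk);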

> Of course, if you're guaranteed that ubi_works are executed in the
> correct order (FIFO) this should never happen, because the scheduled
> operation messing with the lookuptbl entry would have been detected
> before the bitrot work was inserted.
> 
>>
>>>> I'd like to avoid works which in turn schedule other works.
>>>> With the current approach it is clear where work is scheduled and how much of it.
>>>
>>> Yes, but the memory consumption induced by this approach can be pretty
>>> big on modern NAND chips (on 32 bit platforms, ubi_work is 32 octets
>>> large, and on modern NANDs you often have 4096 blocks, so a UBI device
>>> of 4000 blocks is pretty common => 4000 * 32 = 125 KiB).
>>
>> While I agree that consuming memory is not very nice, I don't think
>> that 125 KiB is a big deal.
> 
> Hm, a few weeks ago, when I suggested storing information about PEBs in
> order to better choose the next block to be checked for bitrot, one of
> your arguments for rejecting that approach was the memory consumption of
> such a design.
> In my case the only thing I needed was the following structure (one
> instance per PEB):
> 
> struct ubi_peb_statistics {
> 	struct list_head node;
> 	int pnum;
> 	int bitflips;
> 	int last_full_read; /* in seconds */
> 	int last_partial_write; /* in seconds */
> };
> 
> which is 24 bytes large.
> 
> I definitely understand the memory consumption argument, but that's not
> something you can change depending on who's proposing the solution :-).

Yeah, but this structure remains in memory forever, right?
In the bitrot case we allocate the memory only temporarily.
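
Sketched out (not the actual code, and check_peb_for_bitflips() is a
made-up name, just to show the lifetime): the work item is freed as
soon as the worker has run, so the 125 KiB worst case exists only
while checks are queued:

	static int bitrot_check_worker(struct ubi_device *ubi,
				       struct ubi_work *wl_wrk,
				       int shutdown)
	{
		int err = 0;

		if (!shutdown)
			/* made-up helper: read back the PEB, count bitflips */
			err = check_peb_for_bitflips(ubi, wl_wrk->e);

		kfree(wl_wrk);	/* the memory is given back right here */
		return err;
	}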

That said, my arguments are neither perfect nor irreversible;
it can happen that I mess up or am simply wrong.
Just beat me down with my own arguments when I deserve it.

Thanks,
//richard