Message-ID: <552AA6B7.3070806@nod.at>
Date:	Sun, 12 Apr 2015 19:09:11 +0200
From:	Richard Weinberger <richard@....at>
To:	Boris Brezillon <boris.brezillon@...e-electrons.com>
CC:	linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org,
	dedekind1@...il.com
Subject: Re: [PATCH 4/4] UBI: Implement bitrot checking

On 12.04.2015 at 19:01, Boris Brezillon wrote:
> Hi Richard,
> 
> After the 'coding style related'/'useless' comments, now comes a real
> question related to the approach you've taken :-).
> 
> On Sun, 29 Mar 2015 14:13:17 +0200
> Richard Weinberger <richard@....at> wrote:
> 
> [...]
>> +
>> +/**
>> + * ubi_wl_trigger_bitrot_check - triggers a re-read of all physical erase
>> + * blocks.
>> + * @ubi: UBI device description object
>> + */
>> +void ubi_wl_trigger_bitrot_check(struct ubi_device *ubi)
>> +{
>> +	int i;
>> +	struct ubi_wl_entry *e;
>> +
>> +	ubi_msg(ubi, "Running a full read check");
>> +
>> +	for (i = 0; i < ubi->peb_count; i++) {
>> +		spin_lock(&ubi->wl_lock);
>> +		e = ubi->lookuptbl[i];
>> +		spin_unlock(&ubi->wl_lock);
>> +		if (e) {
>> +			atomic_inc(&ubi->bit_rot_work);
>> +			schedule_bitrot_check(ubi, e);
>> +		}
>> +	}
> 
> Do we really need to create a ubi_work per PEB?
> Couldn't we create a single work that is rescheduled inside the worker
> function (after updating the ubi_wl_entry, of course)?

Currently the UBI worker thread handles one PEB per ubi_work. I didn't want
to break that pattern. The downside of this approach is that it needs more memory,
a few KiB per run.

I'm not sure I understood your idea correctly. You mean that we schedule one check for
PEB N and this work then re-schedules a work for PEB N+1 (roughly like the sketch below)?
That approach would save memory, yes. But is it worth the hassle?
I'd like to avoid works that schedule other works.
With the current approach it is clear where work is scheduled and how much of it.
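
For reference, a rough sketch of that self-rescheduling variant, as I understand
it. The function name, the pnum cursor field in ubi_work and the do_bitrot_check()
helper are made up for illustration only, they are not part of the patch, and the
ubi->work_sem handling is glossed over:

static int bitrot_check_worker(struct ubi_device *ubi, struct ubi_work *wrk,
			       int shutdown)
{
	struct ubi_wl_entry *e;
	int pnum = wrk->pnum;	/* hypothetical cursor field, not in the patch */

	if (shutdown) {
		kfree(wrk);
		return 0;
	}

	spin_lock(&ubi->wl_lock);
	e = ubi->lookuptbl[pnum];
	spin_unlock(&ubi->wl_lock);

	if (e)
		do_bitrot_check(ubi, e);	/* placeholder for the actual re-read */

	if (++pnum < ubi->peb_count) {
		/* re-use the same work item for the next PEB */
		wrk->pnum = pnum;
		schedule_ubi_work(ubi, wrk);	/* careful about work_sem nesting here */
		return 0;
	}

	kfree(wrk);
	return 0;
}

ubi_wl_trigger_bitrot_check() would then allocate and queue only one such work
item, starting at PEB 0, instead of one per PEB.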

> I'm pretty sure I'm missing something obvious that you'll probably
> point out ;-).

No no, it is a very good question.

Thanks,
//richard
