Date:	Mon, 07 Apr 2014 18:42:02 +0200
From:	Richard Weinberger <richard@....at>
To:	Tanya Brokhman <tlinder@...eaurora.org>
CC:	Artem Bityutskiy <dedekind1@...il.com>,
	"linux-mtd@...ts.infradead.org" <linux-mtd@...ts.infradead.org>,
	open list <linux-kernel@...r.kernel.org>
Subject: Re: [RFC/PATCH] mtd: ubi: Free PEBs synchronously for fastmap

On 07.04.2014 18:05, Tanya Brokhman wrote:
> On 4/7/2014 4:02 PM, Richard Weinberger wrote:
>> On Tue, Apr 1, 2014 at 10:01 AM, Tanya Brokhman <tlinder@...eaurora.org> wrote:
>>> At first mount it's possible that there are not enough free PEBs, since
>>> there are PEBs pending to be erased. In such a scenario, fm_pool (the
>>> pool from which user-requested PEBs are allocated) will be empty.
>>> Fix this situation by synchronously performing the pending erase work,
>>> thus producing another free PEB.
>>>
>>> Signed-off-by: Tatyana Brokhman <tlinder@...eaurora.org>
>>>
>>> diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
>>> index 457ead3..9a36f78 100644
>>> --- a/drivers/mtd/ubi/wl.c
>>> +++ b/drivers/mtd/ubi/wl.c
>>> @@ -595,10 +595,29 @@ static void refill_wl_pool(struct ubi_device *ubi)
>>>   static void refill_wl_user_pool(struct ubi_device *ubi)
>>>   {
>>>          struct ubi_fm_pool *pool = &ubi->fm_pool;
>>> +       int err;
>>>
>>>          return_unused_pool_pebs(ubi, pool);
>>>
>>>          for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
>>> +retry:
>>> +               if (!ubi->free.rb_node ||
>>> +                  (ubi->free_count - ubi->beb_rsvd_pebs < 1)) {
>>> +                       /*
>>> +                        * There are no available PEBs. Try to free
>>> +                        * PEB by means of synchronous execution of
>>> +                        * pending works.
>>> +                        */
>>> +                       if (ubi->works_count == 0)
>>> +                               break;
>>> +                       spin_unlock(&ubi->wl_lock);
>>> +                       err = do_work(ubi);
>>> +                       spin_lock(&ubi->wl_lock);
>>
>> This is basically what produce_free_peb() does.
> 
> Right. I didn't use it just because of the termination condition:
> produce_free_peb() stops as soon as there is one free PEB, and I need
> more than one.
> 
>>
>>> +                       if (err < 0)
>>> +                               break;
>>> +                       goto retry;
>>> +               }
>>> +
>>>                  pool->pebs[pool->size] = __wl_get_peb(ubi);
>>
>> __wl_get_peb() already calls produce_free_peb() when we run out of free PEBs.
>>
>> Does your patch really fix a problem you encountered, or did you find
>> the issue by reviewing the code?
>>
> 
> Yes, we encountered this issue as described in the commit message. This
> is the fix; it is verified and working for us.
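
For reference, produce_free_peb() is essentially this loop (simplified and
from memory, so details may be off):

static int produce_free_peb(struct ubi_device *ubi)
{
	int err;

	/* Called with ubi->wl_lock held. */
	while (!ubi->free.rb_node) {
		/* do_work() must run without wl_lock held. */
		spin_unlock(&ubi->wl_lock);
		err = do_work(ubi);
		spin_lock(&ubi->wl_lock);
		if (err)
			return err;
	}

	return 0;
}

Note that it stops as soon as the free tree has a single entry, which is
exactly the termination condition you mentioned.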

Wouldn't it be better to fix produce_free_peb() instead of duplicating it,
i.e. such that you can tell it how many PEBs you need?
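
Something along these lines, perhaps. Untested sketch; the
produce_free_pebs() name, the count parameter and the -ENOSPC return are
just illustration:

static int produce_free_pebs(struct ubi_device *ubi, int count)
{
	int err;

	/* Called with ubi->wl_lock held, like produce_free_peb(). */
	while (ubi->free_count - ubi->beb_rsvd_pebs < count) {
		/* No pending work left that could free another PEB. */
		if (!ubi->works_count)
			return -ENOSPC;

		/* do_work() must run without wl_lock held. */
		spin_unlock(&ubi->wl_lock);
		err = do_work(ubi);
		spin_lock(&ubi->wl_lock);
		if (err)
			return err;
	}

	return 0;
}

Then refill_wl_user_pool() could simply call it with the number of PEBs
it still needs.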

Thanks,
//richard
