Message-ID: <e37980e5-962a-1223-cd46-b793a2caac6c@huawei.com>
Date: Mon, 17 Jan 2022 09:40:27 +0800
From: Zhihao Cheng <chengzhihao1@...wei.com>
To: <richard@....at>, <miquel.raynal@...tlin.com>, <vigneshr@...com>,
<mcoquelin.stm32@...il.com>, <kirill.shutemov@...ux.intel.com>,
<s.hauer@...gutronix.de>
CC: <linux-mtd@...ts.infradead.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 15/15] ubi: fastmap: Fix high cpu usage of ubi_bgt by
making sure wl_pool not empty

Hi Richard,

> Fix it by:
> 1) Adding 2 PEBs reserved for fm_anchor and fm_next_anchor.
> 2) Stopping the filling of wl_pool once the free count falls below
>    beb_rsvd_pebs.
> Then, there are at least 2 (EBA_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS -
> MIN_FASTMAP_TAKEN_PEBS[1]) PEBs in pool and 1 (WL_RESERVED_PEBS) PEB in
> wl_pool after calling ubi_refill_pools(), once all erase work has completed.
>
> This modification causes a compatibility problem with old UBI images.
> If the UBI volumes already take the maximum number of PEBs on a given UBI
> device, there are no available PEBs left to satisfy the 2 new reserved
> PEBs. In that case the bad-block reserved PEBs are taken first, and if
> that is still not enough, ENOSPC is returned from UBI initialization.
>
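
To make the rule in 2) concrete, below is a small stand-alone sketch of
what the refill guard boils down to. It is only a simplified user-space
model under my assumptions, not the patch itself: apart from free_count
and beb_rsvd_pebs, the struct, the other names and the exact comparison
are illustrative, and the real logic lives in ubi_refill_pools() in
drivers/mtd/ubi/fastmap-wl.c.

/*
 * Stand-alone model of the refill rule from 2) above.  Not kernel code:
 * the structure and the exact threshold are simplifications.
 */
#include <stdio.h>
#include <stdbool.h>

struct fake_ubi {
	int free_count;     /* free PEBs currently available */
	int beb_rsvd_pebs;  /* PEBs reserved for bad block handling */
	int wl_pool_size;   /* current fill level of wl_pool */
	int wl_pool_max;    /* capacity of wl_pool */
};

/*
 * Take another free PEB into wl_pool only while doing so does not eat
 * into the bad block reserve, i.e. stop once free_count would drop
 * below beb_rsvd_pebs.
 */
static bool may_fill_wl_pool(const struct fake_ubi *ubi)
{
	return ubi->wl_pool_size < ubi->wl_pool_max &&
	       ubi->free_count > ubi->beb_rsvd_pebs;
}

int main(void)
{
	struct fake_ubi ubi = {
		.free_count = 3, .beb_rsvd_pebs = 2,
		.wl_pool_size = 0, .wl_pool_max = 25,
	};

	while (may_fill_wl_pool(&ubi)) {
		ubi.free_count--;
		ubi.wl_pool_size++;
	}

	/* With the numbers above, exactly 1 PEB ends up in wl_pool. */
	printf("wl_pool: %d PEB(s), free PEBs kept for bad block handling: %d\n",
	       ubi.wl_pool_size, ubi.free_count);
	return 0;
}

With those example numbers one PEB still lands in wl_pool while the bad
block reserve stays untouched, which is the "wl_pool not empty" property
the subject line refers to.
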
Can you come up with a better solution that stays compatible with old
images? In other words, can we solve this problem without adding new
reserved PEBs?