Message-ID: <17c0f7e8-5e16-41e1-9b11-a6fa00169856@iwave-global.com>
Date: Wed, 31 Jul 2024 15:23:00 +0530
From: Nikhil Kashyap H R <nikhil.kashyap@...ve-global.com>
To: linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Investigation Request - Strange PEB Reservation Behavior in UBI

Dear Team,

I am writing to request an investigation into a strange issue we have 
observed regarding PEB reservation for bad block management in UBI. For 
context, our system uses MT29FxxG NAND flash chips with a minimum of 
4016 valid blocks (NVB) per LUN out of a total of 4096 blocks.

We have noticed that when using the CONFIG_MTD_UBI_BEB_LIMIT parameter,
which is typically calculated as 1024 * (1 - MinNVB / TotalBlocks) =
1024 * (1 - 4016 / 4096) = 20, i.e. 20 bad PEBs per 1024 PEBs, UBI is
reserving significantly more PEBs than expected. Instead of the
expected 20 PEBs, UBI is reserving around 160 PEBs per LUN,
approximately 8 times more than it should be.
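
For reference, below is a minimal sketch of how we currently understand
the calculation. The helper names are ours, and the scaling of the
limit by the attached device's PEB count is our reading of the
MTD_UBI_BEB_LIMIT Kconfig help text, not the kernel's exact code:

#include <stdio.h>

/* Per-1024 bad-block limit derived from the NAND datasheet:
 * at least min_nvb valid blocks out of total_blocks per LUN. */
static int beb_limit_per_1024(int min_nvb, int total_blocks)
{
        /* 1024 * (1 - MinNVB / TotalBlocks), rounded up */
        return (1024 * (total_blocks - min_nvb) + total_blocks - 1)
               / total_blocks;
}

/* Our reading of how the limit is applied: PEBs are reserved per
 * attached MTD device, scaled by its PEB count and rounded up. */
static int reserved_pebs(int device_pebs, int limit_per_1024)
{
        return (device_pebs * limit_per_1024 + 1023) / 1024;
}

int main(void)
{
        int limit = beb_limit_per_1024(4016, 4096);   /* -> 20 */

        printf("limit per 1024 PEBs: %d\n", limit);
        /* One 4096-PEB LUN under this reading -> 80 */
        printf("reserved for 4096 PEBs: %d\n", reserved_pebs(4096, limit));
        /* Hypothetical: two 4096-PEB LUNs behind one MTD device -> 160 */
        printf("reserved for 8192 PEBs: %d\n", reserved_pebs(8192, limit));
        return 0;
}

Under that reading the reserved count grows with the size of the
attached MTD device rather than staying fixed per LUN, which is part of
what we would like to confirm in question 2 below.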

To work around this issue, we have set the CONFIG_MTD_UBI_BEB_LIMIT
parameter to 3, which corresponds to ~91 reserved PEBs per LUN.
However, this 8x multiplier effect is concerning and requires further
investigation. Additionally, we have observed crashes on our system
and suspect that the over-reservation of PEBs for bad block handling
may be related to them.
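
For completeness, the workaround above bakes the value into the kernel
configuration. As we understand it, the same limit can also be given
per attach, either via ubiattach's --max-beb-per1024 option (where the
mtd-utils version supports it) or via the attach ioctl. A rough sketch
of the ioctl path is below; the MTD number is a placeholder for our
setup, and this is only an illustration, not our production code:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/ubi-user.h>

int main(void)
{
        struct ubi_attach_req req;
        int fd = open("/dev/ubi_ctrl", O_RDWR);

        if (fd < 0) {
                perror("open /dev/ubi_ctrl");
                return 1;
        }

        memset(&req, 0, sizeof(req));
        req.ubi_num = UBI_DEV_NUM_AUTO;  /* let UBI pick a device number */
        req.mtd_num = 0;                 /* placeholder MTD number */
        req.max_beb_per1024 = 3;         /* same value as our workaround */

        if (ioctl(fd, UBI_IOCATT, &req) < 0) {
                perror("UBI_IOCATT");
                close(fd);
                return 1;
        }

        /* The newly attached device shows up under /sys/class/ubi/. */
        printf("attach request accepted\n");
        close(fd);
        return 0;
}

We mention this only because it lets us vary the limit per MTD device
without rebuilding the kernel while we investigate.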

We would like to understand the root cause of the crashes and how the
excessive PEB reservation might be contributing to the problem. We
also have the following questions related to PEB usage in UBI
operations:
1) Why is UBI reserving significantly more PEBs for bad block handling
   than expected when using the CONFIG_MTD_UBI_BEB_LIMIT parameter?
1A) The typical calculation suggests reserving 20 PEBs, but UBI is
    reserving about 8 times more, around 160 PEBs per LUN. What is
    causing this 8x multiplier effect?
2) Does the over-reservation of PEBs only occur when multiple NAND
   partitions are grouped under the same parent MTD device, as is the
   case with our custom driver, or can it also happen with a single
   NAND partition per MTD device?
3) Is the over-reservation of PEBs for bad block handling related to
   the crashes observed on our system? If so, what is the root cause
   of the crashes, and how does the excessive PEB reservation
   contribute to the issue?
4) What is the expected behavior of UBI when reserving PEBs for bad
   block management based on the CONFIG_MTD_UBI_BEB_LIMIT parameter?
   Why does UBI not follow the typical calculation in this case?
5) Are there any known bugs, issues, or unexpected behaviors in the
   UBI subsystem or NAND flash drivers that could explain the observed
   PEB reservation problem? If so, are there any workarounds or fixes
   available?

We would greatly appreciate it if you could investigate this issue and
share your findings and recommendations.

Thank you for your assistance.

Best regards,
Nikhil Kashyap H R

