Message-ID: <873469c3za.fsf@bootlin.com>
Date: Wed, 19 Nov 2025 18:43:53 +0100
From: Miquel Raynal <miquel.raynal@...tlin.com>
To: "Michael Walle" <mwalle@...nel.org>
Cc: "Tudor Ambarus" <tudor.ambarus@...aro.org>, "Pratyush Yadav"
<pratyush@...nel.org>, "Richard Weinberger" <richard@....at>, "Vignesh
Raghavendra" <vigneshr@...com>, "Jonathan Corbet" <corbet@....net>,
"Sean Anderson" <sean.anderson@...ux.dev>, "Thomas Petazzoni"
<thomas.petazzoni@...tlin.com>, "Steam Lin" <STLin2@...bond.com>,
<linux-mtd@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
<linux-doc@...r.kernel.org>
Subject: Re: [PATCH 15/19] mtd: spi-nor: debugfs: Add locking support

On 19/11/2025 at 11:50:42 +01, "Michael Walle" <mwalle@...nel.org> wrote:
> On Wed Nov 19, 2025 at 10:49 AM CET, Miquel Raynal wrote:
>> Hello,
>>
>> On 18/11/2025 at 13:46:52 +01, "Michael Walle" <mwalle@...nel.org> wrote:
>>
>>> On Fri Nov 14, 2025 at 6:53 PM CET, Miquel Raynal wrote:
>>>> The ioctl output may be counterintuitive in some cases. Asking for the
>>>> "locked status" of a region that is only partially locked will return
>>>> "unlocked", whereas in practice most of it may actually be locked.
>>>>
>>>> Knowing the real software locking state through debugfs would be very
>>>> convenient for development/debugging purposes, hence this proposal for
>>>> adding two extra blocks at the end of the file:
>>>> - A "software locked sectors" array which lists, for every section,
>>>> whether it is locked or not, showing both the address ranges and the
>>>> sizes in numbers of blocks.
>>>
>>> I know the file is called software write protection (or swp) but
>>> it's really a hardware write protection, isn't it?
>>
>> Well, it depends on your configuration I guess? Without a #WP pin I don't
>> know what to call that. I had in mind that "software" meant "using the BP
>> bits" and "hardware" meant "toggling #WP". But I have no strong opinion
>> about this wording.
>
> See my previous mail and commit 18d7d01a0a0e ("mtd: spi-nor: Avoid
> setting SRWD bit in SR if WP# signal not connected"). Personally, I
> really don't like the "software" write protection; I mean, you can
> just use read-only for that partition or whatever. Locking for me is
> really backed by the hardware, i.e. we use that to be really sure
> that we have a bootable bootloader and no one can break it.
>
>>>> 64kiB-sectors locking map (x: locked, .: unlocked)
>>>> |.....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
>>>> ...........................|
>>>
>>> Maybe put it into its own file. In any case, a sane line wrapping
>>> would be good. And add a leading offset, i.e., "0000: xxxx.....".
>>
>> I was unsure about doing that, but yes that makes sense. May I call it
>> "locked_sectors_map"?
>
> I don't have a strong opinion here, but locking might be at a finer
> granularity than sectors. Not with the BP scheme, but IIRC I've seen
> locking schemes with 4k blocks for example. So maybe just something
> more general like "locked_erase_blocks_map" or just
> "locked_blocks_map", up to you. It's just debugfs ;)
I find "sector" more generic than "erase block" or even "blocks". A
"sector" meaning is intimately related to what the vendor wants a sector
to be, unlike a block that carries the historical flash-related meaning
of an erase block. I also have the maths definition in mind, which is
basically a start and end.
We can go for locked_blocks_map. Technically speaking, the size of a
block as defined in the BP bits is left to the vendor, it could very
well be any size I guess? So that sounds fine.
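For the "sane line wrapping" and leading offset suggested above, the
show path could look roughly like this. This is only a sketch assuming a
seq_file based show callback; spi_nor_block_is_locked() and the
64-blocks-per-line choice are hypothetical, not existing helpers:

/* Rough sketch only: dump one character per block, wrapped, with a
 * leading byte offset. spi_nor_block_is_locked() is a hypothetical
 * helper returning whether block 'i' is currently locked.
 */
static void spi_nor_debugfs_dump_locked_blocks(struct seq_file *s,
					       struct spi_nor *nor,
					       unsigned int block_size,
					       unsigned int nr_blocks)
{
	unsigned int i;

	seq_puts(s, "locked blocks map (x: locked, .: unlocked)\n");

	for (i = 0; i < nr_blocks; i++) {
		/* Wrap every 64 blocks and prefix the byte offset */
		if (!(i % 64))
			seq_printf(s, "%s%08x: ", i ? "\n" : "",
				   i * block_size);
		seq_putc(s, spi_nor_block_is_locked(nor, i) ? 'x' : '.');
	}
	seq_putc(s, '\n');
}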
>>>> + sr[0] = nor->bouncebuf[0];
>>>> +
>>>> + if (!(nor->flags & SNOR_F_NO_READ_CR)) {
>>>> + ret = spi_nor_read_cr(nor, nor->bouncebuf + 1);
>>>> + if (ret)
>>>> + return ret;
>>>> + }
>>>> +
>>>> + sr[1] = nor->bouncebuf[1];
>>>
>>> Shouldn't that go into the former if conditional? bouncebuf[1] might
>>> never be read.
>>
>> Yes, that's correct. I don't remember why I did it this way, probably a
>> bug; I'll move that line.
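
Concretely, something like this (untested sketch of the fixed read
path, reusing the names quoted from the patch above):

	sr[0] = nor->bouncebuf[0];

	if (!(nor->flags & SNOR_F_NO_READ_CR)) {
		ret = spi_nor_read_cr(nor, nor->bouncebuf + 1);
		if (ret)
			return ret;

		/* bouncebuf[1] is only valid when the CR was actually read */
		sr[1] = nor->bouncebuf[1];
	}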
>>
>>> Also, until now, reading the "params" debug file never interacts
>>> with the flash, but with this patch it does. We don't do locking
>>> here which looks wrong. Maybe we should just cache the protection
>>> bits. Not sure.
>>
>> I guess caching the status registers makes sense, but we'll still have a
>> possible race when accessing the 2 registers. Is it okay to
>> ignore this very unlikely case in debugfs? Otherwise I might just lock
>> the entire device for the time we access the cached registers.
>
> Oh I meant caching it in the core/swp.c (and invalidating/updating
> when the bits are written) and just displaying it here. That way we
> keep the property that reading this file won't actually trigger any
> SPI transfers.
I understand that, but there is still a race:
- debugfs.c reads cached_st[0]
- swp.c writes cached_st[0]
- swp.c writes cached_st[1]
- debugfs.c reads cached_st[1]

debugfs would get the old st[0] and the new st[1]. The presence of the
CMP bit in st[1] really changes what st[0] means.

Question is, do we care?
If yes, we can probably protect these cached registers inside the spi-nor
device lock.
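For instance (a very rough sketch; cached_sr[] is the hypothetical
cache discussed above and does not exist today, while nor->lock is the
existing spi_nor mutex):

	u8 sr[2];

	/*
	 * Snapshot both cached registers under the device lock so that
	 * debugfs never mixes an st[0] and an st[1] coming from two
	 * different updates (CMP in st[1] changes how st[0] is
	 * interpreted). swp.c would update cached_sr[] under the same
	 * lock when it writes the real registers.
	 */
	mutex_lock(&nor->lock);
	sr[0] = nor->cached_sr[0];
	sr[1] = nor->cached_sr[1];
	mutex_unlock(&nor->lock);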
Thanks,
Miquèl