Open Source and information security mailing list archives
 
Message-ID: <20240220095038.2betrguygehvwodz@pengutronix.de>
Date: Tue, 20 Feb 2024 10:50:38 +0100
From: Marco Felsch <m.felsch@...gutronix.de>
To: Miquel Raynal <miquel.raynal@...tlin.com>
Cc: Michael Walle <michael@...le.cc>, srinivas.kandagatla@...aro.org,
	gregkh@...uxfoundation.org, rafal@...ecki.pl,
	linux-kernel@...r.kernel.org, kernel@...gutronix.de
Subject: Re: [RFC PATCH] nvmem: core: add sysfs cell write support

Hi Miquel, Michael,

On 24-02-20, Miquel Raynal wrote:
> Hi,
> 
> michael@...le.cc wrote on Mon, 19 Feb 2024 14:26:16 +0100:
> 
> > On Mon Feb 19, 2024 at 12:53 PM CET, Marco Felsch wrote:
> > > On 24-02-19, Miquel Raynal wrote:  
> > > > Hi Marco,
> > > > 
> > > > m.felsch@...gutronix.de wrote on Fri, 16 Feb 2024 11:07:50 +0100:
> > > >   
> > > > > Hi Michael,
> > > > > 
> > > > > On 24-02-16, Michael Walle wrote:  
> > > > > > Hi,
> > > > > > 
> > > > > > On Thu Feb 15, 2024 at 10:14 PM CET, Marco Felsch wrote:    
> > > > > > > @@ -432,6 +466,7 @@ static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
> > > > > > >  	struct bin_attribute **cells_attrs, *attrs;
> > > > > > >  	struct nvmem_cell_entry *entry;
> > > > > > >  	unsigned int ncells = 0, i = 0;
> > > > > > > +	umode_t mode;
> > > > > > >  	int ret = 0;
> > > > > > >  
> > > > > > >  	mutex_lock(&nvmem_mutex);
> > > > > > > @@ -456,15 +491,18 @@ static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
> > > > > > >  		goto unlock_mutex;
> > > > > > >  	}
> > > > > > >  
> > > > > > > +	mode = nvmem_bin_attr_get_umode(nvmem);
> > > > > > > +
> > > > > > >  	/* Initialize each attribute to take the name and size of the cell */
> > > > > > >  	list_for_each_entry(entry, &nvmem->cells, node) {
> > > > > > >  		sysfs_bin_attr_init(&attrs[i]);
> > > > > > >  		attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL,
> > > > > > >  						    "%s@%x", entry->name,
> > > > > > >  						    entry->offset);
> > > > > > > -		attrs[i].attr.mode = 0444;    
> > > > > > 
> > > > > > cells are not writable if there is a read post process hook, see
> > > > > > __nvmem_cell_entry_write().
> > > > > > 
> > > > > > if (entry->read_post_processing)
> > > > > > 	mode &= ~0222;    
> > > > > 
> > > > > good point, thanks for the hint :) I will add this and send a non-rfc
> > > > > version if write-support is something you would like to have.  
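The masking suggested above can be modeled in plain userspace C. This is only an illustrative sketch: `cell_attr_mode` and the flattened `read_post_processing` flag are stand-ins for the kernel's `nvmem_bin_attr_get_umode()` result and `entry->read_post_process`, not the actual structures.

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned short umode_t;

/* Hypothetical flattened view of a cell entry: only the bit that
 * matters for the mode decision discussed in the thread. */
struct cell_model {
	bool read_post_processing;
};

/* Start from the device-wide mode and drop all write bits when the
 * cell has a read post-process hook, since such cells must stay
 * read-only (cf. __nvmem_cell_entry_write()). */
static umode_t cell_attr_mode(umode_t dev_mode, const struct cell_model *cell)
{
	umode_t mode = dev_mode;

	if (cell->read_post_processing)
		mode &= ~0222;	/* clear owner/group/other write bits */

	return mode;
}
```

So a device-wide 0644 stays 0644 for plain cells but collapses to 0444 for cells with a post-process hook.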
> > > > 
> > > > I like the idea but, what about mtd devices (and soon maybe UBI
> > > > devices)? This may only work on EEPROM-like devices I guess, where each
> > > > area is fully independent and where no erasure is actually expected.  
> > >
> > > For MTD I would say you need to ensure the cells are aligned
> > > correctly. The cell write should handle the erase/write cycle
> > > properly. E.g. an SPI-NOR needs the cells aligned to the erase-page
> > > size, or the nvmem cell write needs to read-copy-update the cells if
> > > they are not erase-page aligned.
> > >
> > > Regarding UBI(FS) I'm not sure if this is required at all since you
> > > have a filesystem. IMHO nvmem-cells are very low-level and are not
> > > made for filesystem-backed backends.
> 
> I'm really talking about UBI, not UBIFS. UBI is just like MTD but
> handles wear leveling. There is a pending series for enabling nvmem
> cells on top of UBI.

Cells on top of a wear-leveling device? Interesting. The cell API is
very low-level, which means the specified cell will be at the exact same
place on the hardware device as specified in the dts. How do you
guarantee that with wear leveling underneath the cell API?

> > > That being said: I have no problem if we provide write support for
> > > EEPROMs only and adapt it later on to cover spi-nor/nand devices as
> > > well.  
> > 
> > Agreed. Honestly, I don't know how much sense this makes for MTD
> > devices. First, the operation itself seems really dangerous, as
> > you'll have to erase the whole sector. Second, during initial
> > provisioning, I don't think it makes much sense to use the sysfs
> > cells because you cannot combine multiple writes into one. You'll
> > always end up with unnecessary erases.
> 
> One cell per erase block would be an immense waste.

Agree.

> Read-copy-update would probably work but would as well be very
> sub-optimal. I guess we could live with it, but as for now there has
> not been any real request for it, I'd also advise to keep this feature
> out of the mtd world in general.
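To make the read-copy-update cost concrete, here is a toy userspace model. The `flash` struct, block size, and helper names are assumptions for illustration only; real NOR flash has the same constraint modeled here, namely that programming can only clear bits, so updating a non-aligned cell forces an erase of the whole surrounding block.

```c
#include <stdint.h>
#include <string.h>

#define ERASE_BLOCK 16	/* toy erase-block size */

/* Toy NOR-flash model: erase sets a block to 0xFF, programming can
 * only clear bits (1 -> 0). */
struct flash {
	uint8_t mem[4 * ERASE_BLOCK];
};

static void flash_erase_block(struct flash *f, size_t block)
{
	memset(f->mem + block * ERASE_BLOCK, 0xFF, ERASE_BLOCK);
}

static void flash_program(struct flash *f, size_t off,
			  const uint8_t *buf, size_t len)
{
	for (size_t i = 0; i < len; i++)
		f->mem[off + i] &= buf[i];	/* bits can only go 1 -> 0 */
}

/* Read-copy-update write of a cell sitting anywhere inside one erase
 * block (assumes the cell does not cross a block boundary): buffer the
 * block, patch the cell bytes, erase, re-program the whole block. */
static void cell_write_rcu(struct flash *f, size_t cell_off,
			   const uint8_t *data, size_t len)
{
	size_t block = cell_off / ERASE_BLOCK;
	uint8_t shadow[ERASE_BLOCK];

	memcpy(shadow, f->mem + block * ERASE_BLOCK, ERASE_BLOCK);
	memcpy(shadow + (cell_off % ERASE_BLOCK), data, len);
	flash_erase_block(f, block);
	flash_program(f, block * ERASE_BLOCK, shadow, ERASE_BLOCK);
}
```

Every cell write costs one full block erase plus a full block re-program, which is why batching writes (or avoiding MTD here entirely) looks preferable.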

SPI-NOR flashes are also quite typical for storing production data, but
as I said that's another story. I'm fine with limiting it to EEPROMs
since this is my use-case :)

Regards,
  Marco
