Date:   Tue, 1 Aug 2023 18:54:49 +0200
From:   Miquel Raynal <miquel.raynal@...tlin.com>
To:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:     Srinivas Kandagatla <srinivas.kandagatla@...aro.org>,
        linux-kernel@...r.kernel.org,
        Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
        Robert Marko <robert.marko@...tura.hr>,
        Luka Perkov <luka.perkov@...tura.hr>,
        Michael Walle <michael@...le.cc>,
        Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

Hi Greg,

gregkh@...uxfoundation.org wrote on Tue, 1 Aug 2023 11:56:40 +0200:

> On Mon, Jul 31, 2023 at 05:33:13PM +0200, Miquel Raynal wrote:
> > Hi Greg,
> > 
> > gregkh@...uxfoundation.org wrote on Mon, 17 Jul 2023 18:59:52 +0200:
> >   
> > > On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote:  
> > > > Hi Greg,
> > > > 
> > > > gregkh@...uxfoundation.org wrote on Mon, 17 Jul 2023 16:32:09 +0200:
> > > >     
> > > > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:    
> > > > > > The binary content of nvmem devices is available to the user, so in the
> > > > > > simplest cases, finding the content of a cell is easy: it is just a
> > > > > > matter of reading at a known and fixed offset. However, nvmem
> > > > > > layouts have recently been introduced to cope with more advanced
> > > > > > situations, where the offset and size of the cells are not known in
> > > > > > advance or are dynamic. When using layouts, more advanced parsers are
> > > > > > used by the kernel in order to give direct access to the content of
> > > > > > each cell, regardless of its position/size in the underlying
> > > > > > device. Unfortunately, this information is not accessible to users
> > > > > > without fully re-implementing the parser logic in userland.
> > > > > > 
> > > > > > Let's expose the cells and their content through sysfs to avoid these
> > > > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be
> > > > > > enabled for this support to be available.
> > > > > > 
> > > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> > > > > > group member will be filled at runtime only when relevant and will
> > > > > > remain empty otherwise. In this case, as the cells attribute group will
> > > > > > be empty, it will not lead to any additional folder/file creation.
> > > > > > 
> > > > > > Exposed cells are read-only. There is, in practice, everything in the
> > > > > > core to support a write path, but as I don't see any need for that, I
> > > > > > prefer to keep the interface simple (and probably safer). The interface
> > > > > > is documented as being in the "testing" state, which means we can
> > > > > > later add a write attribute if deemed relevant.
> > > > > > 
> > > > > > There is one limitation though: if a layout is built as a module but
> > > > > > is not properly installed in the system and is instead loaded manually
> > > > > > with insmod while the nvmem device driver is built-in, the cells won't
> > > > > > appear in sysfs. But in that case, the cells won't be usable by the
> > > > > > built-in kernel drivers anyway.      
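
[For illustration, here is roughly how a cell could be dumped from userland with this interface; the device and cell names below are hypothetical, and the exact paths depend on how the series lands:]

```sh
# List the cells exposed by an nvmem device (device/cell names hypothetical)
ls /sys/bus/nvmem/devices/eeprom0/cells/
# e.g.: mac-address@14  serial-number@0

# Dump one cell's raw content, without knowing its offset in the device
hexdump -C /sys/bus/nvmem/devices/eeprom0/cells/mac-address@14
```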
> > > > > 
> > > > > Wait, what?  That should not be an issue here, if so, then this change
> > > > > is not correct and should be fixed as this is NOT an issue for sysfs
> > > > > (otherwise the whole tree wouldn't work.)
> > > > > 
> > > > > Please fix up your dependencies if this is somehow not working properly.    
> > > > 
> > > > I'm not sure I fully get your point.
> > > > 
> > > > There is no way we can describe any dependency between a storage device
> > > > driver and an nvmem layout. NVMEM is a pure software abstraction, the
> > > > layout that will be chosen depends on the device tree, but if the
> > > > layout has not been installed, there is no existing mechanism in
> > > > the kernel to prevent it from being loaded (how do you know it's
> > > > not on purpose?).    
> > > 
> > > Once a layout has been loaded, the sysfs files should show up, right?
> > > Otherwise what does a "layout" do?  (hint, I have no idea, it's an odd
> > > term to me...)  
> > 
> > Sorry for the delay in responding to these questions; I'll try to
> > clarify the situation.
> > 
> > We have:
> > - device drivers (like NAND flashes, SPI-NOR flashes or EEPROMs) which
> >   typically probe and register their devices into the nvmem
> >   layer to expose their content through NVMEM.
> > - each registration in NVMEM leads to the creation of the relevant
> >   NVMEM cells which can then be used by other device drivers
> >   (typically: a network controller retrieving a MAC address from an
> >   EEPROM through the generic NVMEM abstraction).  
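
[As a rough, untested sketch of that consumer side, using the in-kernel nvmem consumer API; the "mac-address" cell name is illustrative only:]

```c
/* Sketch of an NVMEM consumer, e.g. in a network driver's probe path.
 * The cell name and the simplified error handling are illustrative.
 */
#include <linux/nvmem-consumer.h>
#include <linux/etherdevice.h>

static int example_get_mac(struct device *dev, u8 *mac)
{
	struct nvmem_cell *cell;
	size_t len;
	void *buf;

	/* Look up the cell by name, as bound to this consumer device */
	cell = nvmem_cell_get(dev, "mac-address");
	if (IS_ERR(cell))
		return PTR_ERR(cell);

	/* Read the cell content; the core hides its offset/size for us */
	buf = nvmem_cell_read(cell, &len);
	nvmem_cell_put(cell);
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	if (len == ETH_ALEN)
		memcpy(mac, buf, ETH_ALEN);
	kfree(buf);
	return len == ETH_ALEN ? 0 : -EINVAL;
}
```

[In practice a MAC consumer would more likely go through of_get_mac_address(), which wraps this lookup.]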
> 
> 
> So is a "cell" here a device in the device model?  Or something else?

It is not a device in the device model, but I am wondering whether it
should actually be one. I discussed another issue in the current design
with Rafal (the dependency on a layout driver, which might defer a
storage device probe forever) which might be solved if the core handled
these layouts differently.

> > We recently covered a slightly new case: the NVMEM cells can be in
> > random places in the storage devices so we need a "dynamic" way to
> > discover them: this is the purpose of the NVMEM layouts. We know cell X
> > is in the device, we just don't know where it is exactly at compile
> > time, the layout driver will discover it dynamically for us at runtime.  
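
[To make this concrete, a layout driver boils down to an add_cells() parser hook; this is a very rough sketch, and the exact structure fields and signatures may differ from the version under review:]

```c
/* Rough nvmem layout-driver sketch. The compatible string, cell name
 * and offset are hypothetical; a real layout computes them by parsing
 * the device content at runtime instead of hard-coding them.
 */
#include <linux/nvmem-provider.h>
#include <linux/of.h>

static int example_add_cells(struct device *dev, struct nvmem_device *nvmem,
			     struct nvmem_layout *layout)
{
	struct nvmem_cell_info info = {
		.name = "mac-address",	/* hypothetical */
		.offset = 0x14,		/* discovered by parsing in reality */
		.bytes = 6,
	};

	/* Register the discovered cell with the core */
	return nvmem_add_one_cell(nvmem, &info);
}

static const struct of_device_id example_of_match[] = {
	{ .compatible = "vendor,example-layout" },	/* hypothetical */
	{ }
};

static struct nvmem_layout example_layout = {
	.name = "example-layout",
	.of_match_table = example_of_match,
	.add_cells = example_add_cells,
};
```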
> 
> So you then create the needed device when it is found?

We don't create devices; we match the layouts with the NVMEM devices
through the of_ matching logic.
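
[For example, with the existing "onie,tlv-layout" binding, the match comes from a child node of the storage device in the device tree; the EEPROM node below is illustrative:]

```dts
&i2c0 {
	eeprom@56 {
		compatible = "atmel,24c64";
		reg = <0x56>;

		/* The compatible here selects which layout parser runs */
		nvmem-layout {
			compatible = "onie,tlv-layout";
		};
	};
};
```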

> > While the "static cells" parser is built into the NVMEM subsystem, you
> > explicitly asked to have the layouts modularized. This means that
> > registering a storage device in NVMEM while no layout driver has been
> > inserted yet is now a possible scenario. We cannot describe any
> > dependency between a storage device and a layout driver. We cannot
> > defer the probe either, because the device drivers which don't get
> > access to their NVMEM cell are responsible for choosing what to do
> > (most of the time, the idea is to fall back to a default value rather
> > than failing the probe for no reason).
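
[That fallback pattern, sketched for a MAC address consumer; illustrative only, with error handling deliberately simplified:]

```c
/* Illustrative fallback: use the MAC provided through NVMEM/DT if
 * available, otherwise continue probing with a random address rather
 * than failing.
 */
#include <linux/etherdevice.h>
#include <linux/of_net.h>

static void example_set_mac(struct net_device *ndev, struct device_node *np)
{
	u8 addr[ETH_ALEN];

	if (!of_get_mac_address(np, addr))
		eth_hw_addr_set(ndev, addr);	/* cell was found */
	else
		eth_hw_addr_random(ndev);	/* default value fallback */
}
```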
> > 
> > So to answer your original question:
> >   
> > > Once a layout has been loaded, the sysfs files should show up, right?  
> > 
> > No. The layouts are kind of "libraries" that the NVMEM subsystem uses
> > to try exposing cells *when* a new device is registered in NVMEM (not
> > later). The registration of an NVMEM layout does not trigger any new
> > parsing, because that is not how the NVMEM subsystem was designed.  
> 
> So they are a type of "class" right?  Why not just use class devices
> then?
> 
> > I must emphasize that if the layout driver is installed in
> > /lib/modules/ there is no problem: it will be loaded through the
> > usermode helper. But if it is not, the layout driver can very well be
> > inserted afterwards, and this case, while possible in practice, is
> > irrelevant from a driver standpoint. It does not make sense to create
> > these cells "after" the fact, because they are mostly used during
> > probes. An easy workaround would be to unregister and re-register the
> > underlying storage device driver.  
> 
> We really do not support any situation where a module is NOT in the
> proper place when device discovery happens.

Great, I didn't know. Then there is no issue.

>  So this shouldn't be an
> issue, yet you all mention it?  So how is it happening?

Just being transparent; I'm giving all the details I can.

I'll try to come up with something slightly different from the current
approach.

Thanks,
Miquèl
