[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <2023072106-partly-thank-8657@gregkh>
Date: Fri, 21 Jul 2023 13:39:18 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Daniel Golle <daniel@...rotopia.org>
Cc: Christoph Hellwig <hch@...radead.org>,
Jens Axboe <axboe@...nel.dk>,
Ulf Hansson <ulf.hansson@...aro.org>,
Miquel Raynal <miquel.raynal@...tlin.com>,
Richard Weinberger <richard@....at>,
Vignesh Raghavendra <vigneshr@...com>,
Dave Chinner <dchinner@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Thomas Weißschuh <linux@...ssschuh.net>,
Jan Kara <jack@...e.cz>, Damien Le Moal <dlemoal@...nel.org>,
Ming Lei <ming.lei@...hat.com>, Min Li <min15.li@...sung.com>,
Christian Loehle <CLoehle@...erstone.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Hannes Reinecke <hare@...e.de>,
Jack Wang <jinpu.wang@...os.com>,
Florian Fainelli <f.fainelli@...il.com>,
Yeqi Fu <asuk4.q@...il.com>, Avri Altman <avri.altman@....com>,
Hans de Goede <hdegoede@...hat.com>,
Ye Bin <yebin10@...wei.com>,
Rafał Miłecki <rafal@...ecki.pl>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mmc@...r.kernel.org, linux-mtd@...ts.infradead.org
Subject: Re: [RFC PATCH 6/6] block: implement NVMEM provider
On Fri, Jul 21, 2023 at 12:30:10PM +0100, Daniel Golle wrote:
> On Fri, Jul 21, 2023 at 01:11:40PM +0200, Greg Kroah-Hartman wrote:
> > On Fri, Jul 21, 2023 at 11:40:51AM +0100, Daniel Golle wrote:
> > > On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> > > > On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > > > > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > > > > The layering here is exactly the wrong way around. This block device
> > > > > > as nvmem provide has not business sitting in the block layer and being
> > > > > > keyed ff the gendisk registration. Instead you should create a new
> > > > > > nvmem backed that opens the block device as needed if it fits your
> > > > > > OF description without any changes to the core block layer.
> > > > > >
> > > > >
> > > > > Ok. I will use a class_interface instead.
> > > >
> > > > I'm not sure a class_interface makes much sense here. Why does the
> > > > block layer even need to know about you using a device a nvmem provider?
> > >
> > > It doesn't. But it has to notify the nvmem providing driver about the
> > > addition of new block devices. This is what I'm using class_interface
> > > for, simply to hook into .add_dev of the block_class.
> >
> > Why is this single type of block device special to require this, yet all
> > others do not? Encoding this into the block layer feels like a huge
> > layering violation to me, why not do it how all other block drivers do
> > it instead?
>
> I was thinkng of this as a generic solution in no way tied to one specific
> type of block device. *Any* internal block device which can be used to
> boot from should also be usable as NVMEM provider imho.
Define "internal" :)
And that's all up to the boot process in userspace, the kernel doesn't
care about this.
> > > > As far as I can tell your provider should layer entirely above the
> > > > block layer and not have to be integrated with it.
> > >
> > > My approach using class_interface doesn't require any changes to be
> > > made to existing block code. However, it does use block_class. If
> > > you see any other good option to implement matching off and usage of
> > > block devices by in-kernel users, please let me know.
> >
> > Do not use block_class, again, that should only be for the block core to
> > touch. Individual block drivers should never be poking around in it.
>
> Do I have any other options to coldplug and be notified about newly
> added block devices, so the block-device-consuming driver can know
> about them?
What other options do you need?
> This is not a rhetoric question, I've been looking for other ways
> and haven't found anything better than class_find_device or
> class_interface.
Never use that, sorry, that's not for a driver to touch.
> Using those also prevents blk-nvmem to be built as
> a module, so I'd really like to find alternatives.
> E.g. for MTD we got struct mtd_notifier and register_mtd_user().
Your storage/hardware driver should be the thing that "finds block
devices" and registers them with the block class core, right? After
that, what matters?
confused,
greg k-h
Powered by blists - more mailing lists