Message-ID: <f91d7eff1d764ba7b47f023bc0fafacb@intel.com>
Date: Thu, 28 Jan 2021 08:53:43 +0000
From: "Winkler, Tomas" <tomas.winkler@...el.com>
To: Richard Weinberger <richard@....at>
CC: Miquel Raynal <miquel.raynal@...tlin.com>,
Vignesh Raghavendra <vigneshr@...com>,
linux-mtd <linux-mtd@...ts.infradead.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] mtd: use refcount to prevent corruption
> Tomas,
>
> ----- Original Mail -----
> >> >> Can you please explain a little more what devices are involved?
> >> >> Does it implement _get_device() and _put_device()?
> >> > No, this is not connected to those handlers of the underlying
> >> > device, and those won't help.
> >> > I have an SPI device provided by the MFD framework, so it can go away at any time.
> >>
> >> Can it go away physically or just in software?
> > Software, but since this is MFD it's basically hotplug. The kernel
> > crashes when I simulate a hardware failure.
> >>
> >> Usually the pattern is that you make sure in the device driver that
> >> nobody can orphan the MTD while it is in use.
> >> e.g. drivers/mtd/ubi/gluebi.c does so. In _get_device() it grabs a
> >> reference on the underlying UBI volume to make sure it cannot go away
> >> while the MTD (on top of UBI) is in use.
> >
> > I can try that if it helps, because we are simulating a possible
> > lower-level crash.
> > In any case, I believe that proper refcounting is a much more robust
> > solution than the current one.
> > I'd appreciate it if someone could review the actual implementation.
>
> This happens right now; I'm trying to understand why exactly the current
> way is not good enough. :-)
>
> Your approach makes sure that the MTD itself does not go away while it has
> users but how does this help in the case where the underlying MFD just
> vanishes?
> The MTD can be in use and the MFD can go away while, e.g., mtd_read()
> is in progress.
The read will fail, but the kernel won't crash by accessing memory that has already been freed.
Thanks
Tomas