Message-ID: <DB3PR0402MB39160B49C7CDD99F546E98DCF5D20@DB3PR0402MB3916.eurprd04.prod.outlook.com>
Date: Tue, 13 Aug 2019 09:22:39 +0000
From: Anson Huang <anson.huang@....com>
To: Alexandre Belloni <alexandre.belloni@...tlin.com>,
Trent Piepho <tpiepho@...inj.com>
CC: "linux-rtc@...r.kernel.org" <linux-rtc@...r.kernel.org>,
"a.zummo@...ertech.it" <a.zummo@...ertech.it>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Aisheng Dong <aisheng.dong@....com>,
dl-linux-imx <linux-imx@....com>
Subject: RE: [PATCH] rtc: snvs: fix possible race condition
Hi, Alexandre
> On 19/07/2019 19:04:21+0000, Trent Piepho wrote:
> > On Fri, 2019-07-19 at 02:57 +0000, Anson Huang wrote:
> > >
> > > > I do worry that handling the irq before the rtc device is
> > > > registered could still result in a crash. From what I saw, the
> > > > irq path in snvs mostly uses driver state members that are fully
> > > > initialized, and the allocated but unregistered data->rtc is only
> > > > used in one call to rtc_update_irq(), which appears to be ok with
> > > > this.
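> > > >
> > > > (The call in question, paraphrasing the driver from memory rather
> > > > than quoting it, is the one in the alarm interrupt handler:
> > > >
> > > > 	/* data->rtc is allocated but possibly not yet registered here */
> > > > 	rtc_update_irq(data->rtc, 1, RTC_AF | RTC_IRQF);
> > > >
> > > > where data->rtc comes from devm_rtc_allocate_device() earlier in
> > > > probe.)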
> > > >
> > > > But it is not that hard to imagine that something could go into
> > > > the rtc core that assumes calls like rtc_update_irq() are only
> > > > made on registered devices.
> > > >
> > > > If there were a way to do it, I think allocating the irq in a
> > > > masked state and then unmasking it as part of the final
> > > > registration call that makes the device go live would be a safer
> > > > and more general pattern.
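> > > >
> > > > Something along these lines, perhaps (untested sketch; the snvs
> > > > names are from memory, and I've dropped the irq flags for
> > > > brevity):
> > > >
> > > > 	/* keep the irq masked until the rtc device is live */
> > > > 	irq_set_status_flags(data->irq, IRQ_NOAUTOEN);
> > > >
> > > > 	ret = devm_request_irq(&pdev->dev, data->irq,
> > > > 			       snvs_rtc_irq_handler, 0,
> > > > 			       "rtc alarm", &pdev->dev);
> > > > 	if (ret)
> > > > 		return ret;
> > > >
> > > > 	ret = rtc_register_device(data->rtc);
> > > > 	if (ret)
> > > > 		return ret;
> > > >
> > > > 	/* registration complete, now let the irq fire */
> > > > 	enable_irq(data->irq);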
> > >
> > > It makes sense. I think we can just move the devm_request_irq() to
> > > after rtc_register_device(); that will make sure everything is ready
> > > before the IRQ is enabled. Will send out a V2 patch.
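> > >
> > > i.e. roughly this ordering in probe (sketch only, not the exact
> > > diff):
> > >
> > > 	ret = rtc_register_device(data->rtc);
> > > 	if (ret)
> > > 		return ret;
> > >
> > > 	return devm_request_irq(&pdev->dev, data->irq,
> > > 				snvs_rtc_irq_handler, IRQF_SHARED,
> > > 				"rtc alarm", &pdev->dev);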
> >
> > That will mean registering the rtc, then unregistering it if the irq
> > request fails. More of a pain to write this failure path.
> >
> > Alexandre, is it part of the rtc core design that rtc_update_irq()
> > might be called on an rtc device that is properly allocated, but not
> > yet registered?
>
> Yes, the main reason for the API change was exactly to handle this case.
What is the status of this patch? Should we continue with it, or is any
change needed?
https://patchwork.ozlabs.org/patch/1132481/
Thanks,
Anson