Message-ID: <Yw0Bz9+ukNAf2Hpk@google.com>
Date: Mon, 29 Aug 2022 11:13:35 -0700
From: Dmitry Torokhov <dmitry.torokhov@...il.com>
To: Hillf Danton <hdanton@...a.com>
Cc: Greg Tulli <greg.iforce@...il.com>,
Randy Dunlap <rdunlap@...radead.org>,
linux-kernel@...r.kernel.org, linux-input@...r.kernel.org,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Subject: Re: [Question] iforce_serio lockup
On Sun, Aug 28, 2022 at 05:05:09PM +0800, Hillf Danton wrote:
> On Fri, 29 Jul 2022 21:24:13 +0200 Greg Tulli <greg.iforce@...il.com> wrote:
> >
> > Chain exists of:
> > &iforce->xmit_lock --> &port_lock_key --> &serport->lock
> >
> > Possible unsafe locking scenario:
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(&serport->lock);
> >                                lock(&port_lock_key);
> >                                lock(&serport->lock);
> >   lock(&iforce->xmit_lock);
> >
> > *** DEADLOCK ***
> >
> > 2022-07-11 11:32 GMT+02:00, Greg T <greg.iforce@...il.com>:
> >
> > > That problem vanishes if we don't call iforce_process_packet()
> > > directly from iforce_serio_irq(), but from a tasklet. Is that the
> > > right approach?
>
> Another option is to do the wakeup without serport->lock held, given
> that iforce->xmit_lock is taken. Your testing will provide info about
> any races that may come up without serport->lock.
No, I think we should use work to process the outbound buffer/queue,
as the comment in include/linux/tty_ldisc.h recommends. I believe a
single work instance in struct iforce_serio will suffice:
iforce_serio_xmit() should already be able to handle concurrent
invocations, so we just need to schedule the work from
iforce_serio_write_wakeup() (and it is fine if it is already scheduled)
and let iforce_serio_xmit() do its job.
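Roughly something like this (completely untested sketch just to
illustrate; the xmit_work field and the handler name are made up, and
we'd need a matching INIT_WORK() in iforce_serio_connect()):

	struct iforce_serio {
		struct iforce iforce;
		struct serio *serio;
		struct work_struct xmit_work;	/* new */
		/* ... rest of the existing fields ... */
	};

	static void iforce_serio_xmit_work(struct work_struct *work)
	{
		struct iforce_serio *iforce_serio =
			container_of(work, struct iforce_serio, xmit_work);

		/* iforce_serio_xmit() copes with concurrent callers */
		iforce_serio_xmit(&iforce_serio->iforce);
	}

	static void iforce_serio_write_wakeup(struct serio *serio)
	{
		struct iforce_serio *iforce_serio = serio_get_drvdata(serio);

		/*
		 * schedule_work() is a no-op when the work is already
		 * pending, which is exactly what we want here.
		 */
		schedule_work(&iforce_serio->xmit_work);
	}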
In iforce_serio_stop_io() we can wait for the buffer to empty (which
should take care of the work still running, but we may also do
cancel_work_sync() for good measure).
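I.e. something like this (again untested, and assuming the
IFORCE_XMIT_RUNNING bit keeps its current meaning of "the transmit
queue is being drained"):

	static void iforce_serio_stop_io(struct iforce *iforce)
	{
		struct iforce_serio *iforce_serio =
			container_of(iforce, struct iforce_serio, iforce);

		/* Wait for the transmit buffer to drain... */
		wait_event_interruptible(iforce->wait,
			!test_bit(IFORCE_XMIT_RUNNING, iforce->xmit_flags));

		/* ...and make sure the work is no longer running. */
		cancel_work_sync(&iforce_serio->xmit_work);
	}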
Thanks.
--
Dmitry