Message-ID: <20200511074157.GA1361393@kroah.com>
Date: Mon, 11 May 2020 09:41:57 +0200
From: Greg KH <gregkh@...uxfoundation.org>
To: rananta@...eaurora.org
Cc: jslaby@...e.com, andrew@...nix.com, linuxppc-dev@...ts.ozlabs.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] tty: hvc: Fix data abort due to race in hvc_open
On Mon, May 11, 2020 at 12:34:44AM -0700, rananta@...eaurora.org wrote:
> On 2020-05-11 00:23, rananta@...eaurora.org wrote:
> > On 2020-05-09 23:48, Greg KH wrote:
> > > On Sat, May 09, 2020 at 06:30:56PM -0700, rananta@...eaurora.org
> > > wrote:
> > > > On 2020-05-06 02:48, Greg KH wrote:
> > > > > On Mon, Apr 27, 2020 at 08:26:01PM -0700, Raghavendra Rao Ananta wrote:
> > > > > > Potentially, hvc_open() can be called in parallel when two tasks call
> > > > > > open() on /dev/hvcX. In such a scenario, if the hp->ops->notifier_add()
> > > > > > callback in the function fails, where it sets tty->driver_data to
> > > > > > NULL, the parallel hvc_open() can see this NULL and cause a memory
> > > > > > abort. Hence, serialize hvc_open() and check whether tty->driver_data
> > > > > > is NULL before proceeding.
> > > > > >
> > > > > > The issue can easily be reproduced by launching two tasks
> > > > > > simultaneously that do nothing but open() and close() on /dev/hvcX.
> > > > > > For example:
> > > > > > $ ./simple_open_close /dev/hvc0 & ./simple_open_close /dev/hvc0 &
> > > > > >
> > > > > > Signed-off-by: Raghavendra Rao Ananta <rananta@...eaurora.org>
> > > > > > ---
> > > > > > drivers/tty/hvc/hvc_console.c | 16 ++++++++++++++--
> > > > > > 1 file changed, 14 insertions(+), 2 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
> > > > > > index 436cc51c92c3..ebe26fe5ac09 100644
> > > > > > --- a/drivers/tty/hvc/hvc_console.c
> > > > > > +++ b/drivers/tty/hvc/hvc_console.c
> > > > > > @@ -75,6 +75,8 @@ static LIST_HEAD(hvc_structs);
> > > > > > */
> > > > > > static DEFINE_MUTEX(hvc_structs_mutex);
> > > > > >
> > > > > > +/* Mutex to serialize hvc_open */
> > > > > > +static DEFINE_MUTEX(hvc_open_mutex);
> > > > > > /*
> > > > > > * This value is used to assign a tty->index value to a hvc_struct based
> > > > > > * upon order of exposure via hvc_probe(), when we can not match it to
> > > > > > @@ -346,16 +348,24 @@ static int hvc_open(struct tty_driver *driver, struct tty_struct *tty)
> > > > > > */
> > > > > > static int hvc_open(struct tty_struct *tty, struct file * filp)
> > > > > > {
> > > > > > - struct hvc_struct *hp = tty->driver_data;
> > > > > > + struct hvc_struct *hp;
> > > > > > unsigned long flags;
> > > > > > int rc = 0;
> > > > > >
> > > > > > + mutex_lock(&hvc_open_mutex);
> > > > > > +
> > > > > > + hp = tty->driver_data;
> > > > > > + if (!hp) {
> > > > > > + rc = -EIO;
> > > > > > + goto out;
> > > > > > + }
> > > > > > +
> > > > > > spin_lock_irqsave(&hp->port.lock, flags);
> > > > > > /* Check and then increment for fast path open. */
> > > > > > if (hp->port.count++ > 0) {
> > > > > > spin_unlock_irqrestore(&hp->port.lock, flags);
> > > > > > hvc_kick();
> > > > > > - return 0;
> > > > > > + goto out;
> > > > > > } /* else count == 0 */
> > > > > > spin_unlock_irqrestore(&hp->port.lock, flags);
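
For reference, the simple_open_close reproducer named in the commit message is
not included in the thread; a minimal sketch of such a program might look like
this (the file name, loop count, and open flags are assumptions):

/* simple_open_close.c - hypothetical sketch of the reproducer named above */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	int i;

	if (argc < 2) {
		fprintf(stderr, "usage: %s /dev/hvcX\n", argv[0]);
		return 1;
	}

	/* Repeatedly open and close so two instances race in hvc_open() */
	for (i = 0; i < 100000; i++) {
		int fd = open(argv[1], O_RDWR);

		if (fd >= 0)
			close(fd);
	}

	return 0;
}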
> > > > >
> > > > > Wait, why isn't this driver just calling tty_port_open() instead of
> > > > > trying to open-code all of this?
> > > > >
> > > > > Keeping a single mutex for open will not protect it from close; it will
> > > > > just slow things down a bit. There should already be a tty lock held by
> > > > > the tty core for open() to keep it from racing things, right?
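
For illustration, a conversion along the lines suggested here might look
roughly like this (a sketch only, not a patch from this thread;
hvc_port_activate and hvc_port_ops are hypothetical names, and the first-open
work is elided):

static int hvc_port_activate(struct tty_port *port, struct tty_struct *tty)
{
	/* first-open work (hp->ops->notifier_add(), hvc_kick(), ...) moves here */
	return 0;
}

static const struct tty_port_operations hvc_port_ops = {
	.activate = hvc_port_activate,
	/* a matching .shutdown would take over the last-close teardown */
};

static int hvc_open(struct tty_struct *tty, struct file *filp)
{
	struct hvc_struct *hp = tty->driver_data;

	/* tty_port_open() serializes the first-open work on port->mutex
	 * and manages port->count, replacing the open-coded logic above */
	return tty_port_open(&hp->port, tty, filp);
}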
> > > > The tty lock should have been held, but not likely across ->install()
> > > > and ->open() callbacks, thus resulting in a race between hvc_install()
> > > > and hvc_open().
> > >
> > > How? The tty lock is held in install, and should not conflict with
> > > open(); otherwise, we would be seeing this happen in all tty drivers,
> > > right?
> > >
> > Well, I was expecting the same, but from what I recall, I saw open()
> > being called in parallel for the same device node.
> >
> > Is it expected that the tty core would allow only one thread to
> > access the device node, while blocking the other, or is it the client
> > driver's responsibility to handle the exclusivity?
> Or is there an optimization going on where the second call doesn't go
> through install(), but calls open() directly because the file was already
> opened by the first thread?
Yes, it should only happen once; look at the logic in tty_kopen().
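
For context, the relevant tty core logic, heavily simplified (a paraphrase of
the open path in drivers/tty/tty_io.c, which tty_kopen() shares via
tty_driver_lookup_tty(); this is not the verbatim kernel code):

/* Simplified paraphrase of the tty core open path, not verbatim */
static struct tty_struct *tty_core_open_sketch(struct tty_driver *driver,
					       struct file *filp, int index)
{
	struct tty_struct *tty = tty_driver_lookup_tty(driver, filp, index);

	if (tty) {
		/* second and later opens reuse the tty; ->install() is skipped */
		tty_lock(tty);
		if (tty_reopen(tty) < 0)
			tty = ERR_PTR(-EIO);
	} else {
		/* first open: tty_init_dev() calls ->install() exactly once */
		tty = tty_init_dev(driver, index);
	}

	/* the core then calls ->open() with the tty lock held */
	return tty;
}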
greg k-h