Message-Id: <20180911014821.26286-7-dima@arista.com>
Date: Tue, 11 Sep 2018 02:48:21 +0100
From: Dmitry Safonov <dima@...sta.com>
To: linux-kernel@...r.kernel.org
Cc: Dmitry Safonov <0x7f454c46@...il.com>,
Dmitry Safonov <dima@...sta.com>,
Daniel Axtens <dja@...ens.net>,
Dmitry Vyukov <dvyukov@...gle.com>,
Michael Neuling <mikey@...ling.org>,
Mikulas Patocka <mpatocka@...hat.com>,
Nathan March <nathan@...net>,
Pasi Kärkkäinen <pasik@....fi>,
Peter Hurley <peter@...leysoftware.com>,
"Rong, Chen" <rong.a.chen@...el.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Tan Xiaojun <tanxiaojun@...wei.com>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jiri Slaby <jslaby@...e.com>
Subject: [PATCHv3 6/6] tty/ldsem: Decrement wait_readers on timed-out down_read()
It seems that when ldsem_down_read() fails with a timeout, it misses
the update to sem->wait_readers. For that reason, when a writer finally
releases the write end of the semaphore, __ldsem_wake_readers() adjusts
sem->count by the wrong value:
    sem->wait_readers * (LDSEM_ACTIVE_BIAS - LDSEM_WAIT_BIAS)
I.e., if the update comes with one missed wait_readers decrement,
sem->count ends up as 0x100000001, which says there is an active
reader and will make any further writer fail to acquire the semaphore.
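For illustration (not part of the patch): a minimal userspace sketch of
the bias arithmetic, with constants mirroring those in
drivers/tty/tty_ldsem.c (LL suffixes added here for portability),
showing how one missed wait_readers decrement leaves a phantom active
reader in sem->count:

	#include <stdio.h>

	#define LDSEM_ACTIVE_MASK	0xffffffffLL
	#define LDSEM_ACTIVE_BIAS	1LL
	#define LDSEM_WAIT_BIAS		(-LDSEM_ACTIVE_MASK - 1)

	int main(void)
	{
		long long count = 0;	/* semaphore idle */
		int wait_readers = 0;

		/* A reader queues itself: add the wait bias. */
		count += LDSEM_WAIT_BIAS;
		wait_readers++;

		/* The reader times out: it backs out the wait bias, but
		 * (without this patch) wait_readers is not decremented. */
		count += -LDSEM_WAIT_BIAS;

		/* Writer release: __ldsem_wake_readers() converts every
		 * counted waiter into an active reader. */
		count += wait_readers * (LDSEM_ACTIVE_BIAS - LDSEM_WAIT_BIAS);

		/* Prints count = 0x100000001: an active reader no task owns. */
		printf("count = 0x%llx\n", count);
		return 0;
	}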
This looks like dead code, because ldsem_down_read() is never called
with a timeout other than MAX_SCHEDULE_TIMEOUT, so it might be worth
deleting the timeout parameter and the error-path fallback.
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Jiri Slaby <jslaby@...e.com>
Signed-off-by: Dmitry Safonov <dima@...sta.com>
---
drivers/tty/tty_ldsem.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/tty/tty_ldsem.c b/drivers/tty/tty_ldsem.c
index 832accbbcb6d..f7966ab7b450 100644
--- a/drivers/tty/tty_ldsem.c
+++ b/drivers/tty/tty_ldsem.c
@@ -237,6 +237,7 @@ down_read_failed(struct ld_semaphore *sem, long count, long timeout)
 		raw_spin_lock_irq(&sem->wait_lock);
 		if (waiter.task) {
 			atomic_long_add_return(-LDSEM_WAIT_BIAS, &sem->count);
+			sem->wait_readers--;
 			list_del(&waiter.list);
 			raw_spin_unlock_irq(&sem->wait_lock);
 			put_task_struct(waiter.task);
--
2.13.6