Message-Id: <20191006172018.480360174@linuxfoundation.org>
Date: Sun, 6 Oct 2019 19:21:17 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Martijn Coenen <maco@...roid.com>,
syzbot <syzkaller@...glegroups.com>,
Mattias Nissler <mnissler@...omium.org>
Subject: [PATCH 4.9 30/47] ANDROID: binder: remove waitqueue when thread exits.
From: Martijn Coenen <maco@...roid.com>
commit f5cb779ba16334b45ba8946d6bfa6d9834d1527f upstream.
binder_poll() passes the thread->wait waitqueue that
can be slept on for work. When a thread that uses
epoll explicitly exits using BINDER_THREAD_EXIT,
the waitqueue is freed, but it is never removed
from the corresponding epoll data structure. When
the process subsequently exits, the epoll cleanup
code tries to access the waitlist, which results in
a use-after-free.
Prevent this by using POLLFREE when the thread exits.
Signed-off-by: Martijn Coenen <maco@...roid.com>
Reported-by: syzbot <syzkaller@...glegroups.com>
Cc: stable <stable@...r.kernel.org> # 4.14
[backport BINDER_LOOPER_STATE_POLL logic as well]
Signed-off-by: Mattias Nissler <mnissler@...omium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/android/binder.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
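
A minimal user-space sketch of the sequence described in the commit message, for reference only: it assumes /dev/binder is accessible to the caller, and BINDER_THREAD_EXIT is defined locally to mirror the UAPI header rather than pulled from <linux/android/binder.h>. epoll_ctl(EPOLL_CTL_ADD) makes binder_poll() hand thread->wait to the epoll item, BINDER_THREAD_EXIT then frees that binder_thread, and on kernels without this patch the epoll teardown at process exit touches the freed waitqueue.

#include <fcntl.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#ifndef BINDER_THREAD_EXIT
/* Same value as _IOW('b', 8, __s32) in include/uapi/linux/android/binder.h */
#define BINDER_THREAD_EXIT _IOW('b', 8, int)
#endif

int main(void)
{
	struct epoll_event ev = { .events = EPOLLIN };
	int binder_fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
	int epoll_fd = epoll_create1(0);

	if (binder_fd < 0 || epoll_fd < 0) {
		perror("open(/dev/binder) or epoll_create1");
		return 1;
	}

	/* ep_insert() calls binder_poll(), which passes thread->wait to epoll. */
	if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, binder_fd, &ev) < 0) {
		perror("epoll_ctl");
		return 1;
	}

	/* Frees the calling thread's binder_thread (and its waitqueue) while
	 * the epoll item still references it.
	 */
	if (ioctl(binder_fd, BINDER_THREAD_EXIT, 0) < 0)
		perror("ioctl(BINDER_THREAD_EXIT)");

	/* Process exit tears down the epoll instance and walks the waitqueue
	 * entry; the POLLFREE wakeup added by this patch makes epoll drop
	 * that entry before the thread is freed.
	 */
	return 0;
}

With the fix applied, the wake_up_poll(..., POLLHUP | POLLFREE) in binder_free_thread() causes epoll to unhash its wait entry, so the same sequence no longer leaves a dangling pointer behind.
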
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -334,7 +334,8 @@ enum {
 	BINDER_LOOPER_STATE_EXITED      = 0x04,
 	BINDER_LOOPER_STATE_INVALID     = 0x08,
 	BINDER_LOOPER_STATE_WAITING     = 0x10,
-	BINDER_LOOPER_STATE_NEED_RETURN = 0x20
+	BINDER_LOOPER_STATE_NEED_RETURN = 0x20,
+	BINDER_LOOPER_STATE_POLL        = 0x40,
 };
 
 struct binder_thread {
@@ -2628,6 +2629,18 @@ static int binder_free_thread(struct bin
 		} else
 			BUG();
 	}
+
+	/*
+	 * If this thread used poll, make sure we remove the waitqueue
+	 * from any epoll data structures holding it with POLLFREE.
+	 * waitqueue_active() is safe to use here because we're holding
+	 * the inner lock.
+	 */
+	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
+	    waitqueue_active(&thread->wait)) {
+		wake_up_poll(&thread->wait, POLLHUP | POLLFREE);
+	}
+
 	if (send_reply)
 		binder_send_failed_reply(send_reply, BR_DEAD_REPLY);
 	binder_release_work(&thread->todo);
@@ -2651,6 +2664,8 @@ static unsigned int binder_poll(struct f
 		return POLLERR;
 	}
 
+	thread->looper |= BINDER_LOOPER_STATE_POLL;
+
 	wait_for_proc_work = thread->transaction_stack == NULL &&
 		list_empty(&thread->todo) && thread->return_error == BR_OK;
 