Message-Id: <20191107101016.137186-1-pihsun@chromium.org>
Date: Thu, 7 Nov 2019 18:10:14 +0800
From: Pi-Hsun Shih <pihsun@...omium.org>
To: unlisted-recipients:; (no To-header on input)
Cc: Pi-Hsun Shih <pihsun@...omium.org>,
Tomasz Figa <tfiga@...omium.org>,
Mauro Carvalho Chehab <mchehab@...nel.org>,
Hans Verkuil <hverkuil-cisco@...all.nl>,
Ezequiel Garcia <ezequiel@...labora.com>,
Paul Kocialkowski <paul.kocialkowski@...tlin.com>,
Boris Brezillon <boris.brezillon@...labora.com>,
Philipp Zabel <p.zabel@...gutronix.de>,
Ricardo Ribalda Delgado <ribalda@...nel.org>,
Pawel Osciak <posciak@...omium.org>,
Thomas Gleixner <tglx@...utronix.de>,
sumitg <sumitg@...dia.com>,
linux-media@...r.kernel.org (open list:MEDIA INPUT INFRASTRUCTURE
(V4L/DVB)), linux-kernel@...r.kernel.org (open list)
Subject: [PATCH] media: v4l2-ctrl: Lock main_hdl on operations of requests_queued.
There is a race condition between the list_del_init() in
v4l2_ctrl_request_complete() and the list_add_tail() in
v4l2_ctrl_request_queue(): the two can be called from different
threads, and the requests_queued list is not protected by any lock.
This can leave the v4l2_ctrl_handler on the requests_queued list while
its request_is_queued flag is already set to false, which would cause
a use-after-free if the v4l2_ctrl_handler is later released.

Fix this by taking the ->lock of main_hdl (the owner of the
requests_queued list) around all list operations on ->requests_queued.
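A simplified view of the resulting code paths (excerpted from the diff
below, with unrelated code omitted):

	/* v4l2_ctrl_request_queue() */
	mutex_lock(main_hdl->lock);
	list_add_tail(&hdl->requests_queued, &main_hdl->requests_queued);
	hdl->request_is_queued = true;
	mutex_unlock(main_hdl->lock);

	/* v4l2_ctrl_request_complete() */
	mutex_lock(main_hdl->lock);
	list_del_init(&hdl->requests_queued);
	hdl->request_is_queued = false;
	mutex_unlock(main_hdl->lock);

v4l2_ctrl_request_unbind() takes the same lock around its
list_del_init() of ->requests_queued.
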
Signed-off-by: Pi-Hsun Shih <pihsun@...omium.org>
---
 drivers/media/v4l2-core/v4l2-ctrls.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
index b4caf2d4d076..22e6c82d58b9 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls.c
@@ -3301,6 +3301,7 @@ static void v4l2_ctrl_request_queue(struct media_request_object *obj)
 	struct v4l2_ctrl_handler *prev_hdl = NULL;
 	struct v4l2_ctrl_ref *ref_ctrl, *ref_ctrl_prev = NULL;
 
+	mutex_lock(main_hdl->lock);
 	if (list_empty(&main_hdl->requests_queued))
 		goto queue;
 
@@ -3332,18 +3333,22 @@ static void v4l2_ctrl_request_queue(struct media_request_object *obj)
 queue:
 	list_add_tail(&hdl->requests_queued, &main_hdl->requests_queued);
 	hdl->request_is_queued = true;
+	mutex_unlock(main_hdl->lock);
 }
 
 static void v4l2_ctrl_request_unbind(struct media_request_object *obj)
 {
 	struct v4l2_ctrl_handler *hdl =
 		container_of(obj, struct v4l2_ctrl_handler, req_obj);
+	struct v4l2_ctrl_handler *main_hdl = obj->priv;
 
 	list_del_init(&hdl->requests);
+	mutex_lock(main_hdl->lock);
 	if (hdl->request_is_queued) {
 		list_del_init(&hdl->requests_queued);
 		hdl->request_is_queued = false;
 	}
+	mutex_unlock(main_hdl->lock);
 }
 
 static void v4l2_ctrl_request_release(struct media_request_object *obj)
@@ -4297,9 +4302,11 @@ void v4l2_ctrl_request_complete(struct media_request *req,
 		v4l2_ctrl_unlock(ctrl);
 	}
 
+	mutex_lock(main_hdl->lock);
 	WARN_ON(!hdl->request_is_queued);
 	list_del_init(&hdl->requests_queued);
 	hdl->request_is_queued = false;
+	mutex_unlock(main_hdl->lock);
 	media_request_object_complete(obj);
 	media_request_object_put(obj);
 }

base-commit: dcd34bd234181ec74f081c7d0025204afe6b213e
--
2.24.0.rc1.363.gb1bccd3e3d-goog