Message-Id: <20160503000511.010792890@linuxfoundation.org>
Date: Mon, 2 May 2016 17:11:44 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Lyude <cpaul@...hat.com>,
Dave Airlie <airlied@...hat.com>
Subject: [PATCH 4.4 076/163] drm/dp/mst: Get validated port ref in drm_dp_update_payload_part1()
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: cpaul@...hat.com <cpaul@...hat.com>
commit 263efde31f97c498e1ebad30e4d2906609d7ad6b upstream.
We can thank KASAN for finding this, otherwise I probably would have spent
hours on it. This fixes a somewhat harder-to-trigger kernel panic, occurring
while enabling MST, where the port whose payload we were currently updating
would have all of its refs dropped before we finished what we were doing:
==================================================================
BUG: KASAN: use-after-free in drm_dp_update_payload_part1+0xb3f/0xdb0 [drm_kms_helper] at addr ffff8800d29de018
Read of size 4 by task Xorg/973
=============================================================================
BUG kmalloc-2048 (Tainted: G B W ): kasan: bad access detected
-----------------------------------------------------------------------------
INFO: Allocated in drm_dp_add_port+0x1aa/0x1ed0 [drm_kms_helper] age=16477 cpu=0 pid=2175
___slab_alloc+0x472/0x490
__slab_alloc+0x20/0x40
kmem_cache_alloc_trace+0x151/0x190
drm_dp_add_port+0x1aa/0x1ed0 [drm_kms_helper]
drm_dp_send_link_address+0x526/0x960 [drm_kms_helper]
drm_dp_check_and_send_link_address+0x1ac/0x210 [drm_kms_helper]
drm_dp_mst_link_probe_work+0x77/0xd0 [drm_kms_helper]
process_one_work+0x562/0x1350
worker_thread+0xd9/0x1390
kthread+0x1c5/0x260
ret_from_fork+0x22/0x40
INFO: Freed in drm_dp_free_mst_port+0x50/0x60 [drm_kms_helper] age=7521 cpu=0 pid=2175
__slab_free+0x17f/0x2d0
kfree+0x169/0x180
drm_dp_free_mst_port+0x50/0x60 [drm_kms_helper]
drm_dp_destroy_connector_work+0x2b8/0x490 [drm_kms_helper]
process_one_work+0x562/0x1350
worker_thread+0xd9/0x1390
kthread+0x1c5/0x260
ret_from_fork+0x22/0x40
which, on this T460s, would eventually lead to kernel panics in somewhat
random places later in intel_mst_enable_dp() if we got lucky enough.
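
For reference, the fix relies on the usual validated-reference pattern: take
a reference on the port only while it is still known to be alive, use it, then
drop the reference when done. Below is a minimal user-space sketch of that
pattern; the struct and helper names (port, port_get_validated, port_put) are
hypothetical stand-ins for illustration only, not the DRM MST API, which
validates the port against the live topology and takes a kref rather than
using a bare atomic counter.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct port {
	atomic_int refcount;	/* 0 means the port is already being torn down */
	int num_slots;
};

/* Take a reference only if the port is still live; this stands in for the
 * topology walk plus kref_get() that drm_dp_get_validated_port_ref() does. */
static struct port *port_get_validated(struct port *p)
{
	int old = atomic_load(&p->refcount);

	while (old > 0) {
		if (atomic_compare_exchange_weak(&p->refcount, &old, old + 1))
			return p;
	}
	return NULL;	/* port already going away, caller must bail out */
}

/* Drop a reference; the last put frees the port, like drm_dp_put_port(). */
static void port_put(struct port *p)
{
	if (atomic_fetch_sub(&p->refcount, 1) == 1)
		free(p);
}

int main(void)
{
	struct port *p = malloc(sizeof(*p));

	if (!p)
		return 1;
	atomic_init(&p->refcount, 1);
	p->num_slots = 4;

	struct port *ref = port_get_validated(p);
	if (ref) {
		/* safe to touch the port while we hold our own reference */
		printf("updating payload, %d slots\n", ref->num_slots);
		port_put(ref);
	}

	port_put(p);	/* drop the original reference */
	return 0;
}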
Signed-off-by: Lyude <cpaul@...hat.com>
Signed-off-by: Dave Airlie <airlied@...hat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/gpu/drm/drm_dp_mst_topology.c | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -1786,6 +1786,11 @@ int drm_dp_update_payload_part1(struct d
 		req_payload.start_slot = cur_slots;
 		if (mgr->proposed_vcpis[i]) {
 			port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);
+			port = drm_dp_get_validated_port_ref(mgr, port);
+			if (!port) {
+				mutex_unlock(&mgr->payload_lock);
+				return -EINVAL;
+			}
 			req_payload.num_slots = mgr->proposed_vcpis[i]->num_slots;
 		} else {
 			port = NULL;
@@ -1811,6 +1816,9 @@ int drm_dp_update_payload_part1(struct d
 			mgr->payloads[i].payload_state = req_payload.payload_state;
 		}
 		cur_slots += req_payload.num_slots;
+
+		if (port)
+			drm_dp_put_port(port);
 	}
 
 	for (i = 0; i < mgr->max_payloads; i++) {