Message-ID: <D90F918E-A69B-434C-9593-D1E253F150F4@oracle.com>
Date: Tue, 8 Apr 2025 13:17:39 +0000
From: Haakon Bugge <haakon.bugge@...cle.com>
To: Sharath Maddibande Srinivasan <sharath.srinivasan@...cle.com>
CC: Leon Romanovsky <leon@...nel.org>, "jgg@...pe.ca" <jgg@...pe.ca>,
        "phaddad@...dia.com" <phaddad@...dia.com>,
        "markzhang@...dia.com" <markzhang@...dia.com>,
        OFED mailing list <linux-rdma@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>,
        Aron Silverton <aron.silverton@...cle.com>
Subject: Re: [PATCH v2] RDMA/cma: Fix workqueue crash in cma_netevent_work_handler

> struct rdma_cm_id has a member "struct work_struct net_work"
> that is reused each time cma_netevent_work_handler() is enqueued
> onto cma_wq.
> 
> The crash below [1] can occur when cma_netevent_callback() is
> called more than once in quick succession for the same rdma_cm_id.
> Each call runs INIT_WORK() on the same net_work and then enqueues
> cma_netevent_work_handler() onto cma_wq.  There is no guarantee
> that a previously queued work item has already run between two
> successive callbacks, so the second INIT_WORK() overwrites a work
> item that is still pending, despite id_table_lock being held
> during the enqueue.
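
For anyone skimming the archive, the corruption mechanism here is the
classic one of re-initializing a work_struct that may still be queued:
INIT_WORK() rewrites work->data (where the pending and pwq bits live)
and work->entry behind the workqueue's back. A minimal kernel-style
sketch of the racy pattern, using hypothetical names rather than the
actual cma code:

/* Sketch only -- hypothetical driver, not drivers/infiniband/core/cma.c. */
#include <linux/workqueue.h>

struct foo_id {
	struct work_struct net_work;
	/* ... */
};

/*
 * Racy: if a previous queue_work() on this foo_id has not run yet,
 * INIT_WORK() clobbers net_work.data and net_work.entry while the
 * item is still linked on the workqueue, so the worker later picks
 * up a work_struct whose pwq/pending bits have been wiped.
 */
static bool foo_netevent(struct foo_id *id, struct workqueue_struct *wq,
			 work_func_t handler)
{
	INIT_WORK(&id->net_work, handler);
	return queue_work(wq, &id->net_work);
}
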
> 
> The drgn analysis [2] also indicates that the work item was likely
> overwritten.
> 
> Fix this by moving the INIT_WORK() to __rdma_create_id(),
> so that it doesn't race with any existing queue_work() or
> its worker thread.
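
With that change the lifecycle reduces to the usual rule: initialize the
work item once when the object is created, and only ever queue it from
the notifier path. Roughly (hypothetical names again, not the actual
patch, which is quoted further down):

/* Initialize exactly once, at object-creation time ... */
static void foo_create(struct foo_id *id, work_func_t handler)
{
	INIT_WORK(&id->net_work, handler);
}

/*
 * ... and only enqueue from the notifier. If the item is still
 * pending, queue_work() returns false and leaves it untouched, so
 * back-to-back netevents coalesce instead of corrupting the item.
 */
static bool foo_netevent(struct foo_id *id, struct workqueue_struct *wq)
{
	return queue_work(wq, &id->net_work);
}
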
> 
> [1] Trimmed crash stack:
> =============================================
> BUG: kernel NULL pointer dereference, address: 0000000000000008
> kworker/u256:6 ... 6.12.0-0...
> Workqueue:  cma_netevent_work_handler [rdma_cm] (rdma_cm)
> RIP: 0010:process_one_work+0xba/0x31a
> Call Trace:
> worker_thread+0x266/0x3a0
> kthread+0xcf/0x100
> ret_from_fork+0x31/0x50
> ret_from_fork_asm+0x1a/0x30
> =============================================
> 
> [2] drgn crash analysis:
> 
> trace = prog.crashed_thread().stack_trace()
> trace
> (0)  crash_setup_regs (./arch/x86/include/asm/kexec.h:111:15)
> (1)  __crash_kexec (kernel/crash_core.c:122:4)
> (2)  panic (kernel/panic.c:399:3)
> (3)  oops_end (arch/x86/kernel/dumpstack.c:382:3)
> ...
> (8)  process_one_work (kernel/workqueue.c:3168:2)
> (9)  process_scheduled_works (kernel/workqueue.c:3310:3)
> (10) worker_thread (kernel/workqueue.c:3391:4)
> (11) kthread (kernel/kthread.c:389:9)
> 
> Line workqueue.c:3168 for this kernel version is in process_one_work():
> 3168	strscpy(worker->desc, pwq->wq->name, WORKER_DESC_LEN);
> 
> trace[8]["work"]
> *(struct work_struct *)0xffff92577d0a21d8 = {
> 	.data = (atomic_long_t){
> 		.counter = (s64)536870912,    <=== Note
> 	},
> 	.entry = (struct list_head){
> 		.next = (struct list_head *)0xffff924d075924c0,
> 		.prev = (struct list_head *)0xffff924d075924c0,
> 	},
> 	.func = (work_func_t)cma_netevent_work_handler+0x0 = 0xffffffffc2cec280,
> }
> 
> Suspicion is that pwq is NULL:
> trace[8]["pwq"]
> (struct pool_workqueue *)<absent>
> 
> In process_one_work(), pwq is assigned from:
> struct pool_workqueue *pwq = get_work_pwq(work);
> 
> and get_work_pwq() is:
> static struct pool_workqueue *get_work_pwq(struct work_struct *work)
> {
> 	unsigned long data = atomic_long_read(&work->data);
> 
> 	if (data & WORK_STRUCT_PWQ)
> 		return work_struct_pwq(data);
> 	else
> 		return NULL;
> }
> 
> WORK_STRUCT_PWQ is 0x4:
> print(repr(prog['WORK_STRUCT_PWQ']))
> Object(prog, 'enum work_flags', value=4)
> 
> But work->data is 536870912 which is 0x20000000.
> So, get_work_pwq() returns NULL and we crash in process_one_work():
> 3168	strscpy(worker->desc, pwq->wq->name, WORKER_DESC_LEN);
> =============================================
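
As a quick sanity check on the numbers above: 0x20000000 has no low flag
bits set, so the WORK_STRUCT_PWQ test indeed fails. A trivial userspace
check with the values copied from the dump (not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned long data = 536870912UL;	/* work->data from the dump, 0x20000000 */
	unsigned long pwq_flag = 0x4;		/* WORK_STRUCT_PWQ as printed by drgn */

	/* Same test as get_work_pwq(): the PWQ bit is clear, so it returns NULL. */
	printf("data = %#lx, data & WORK_STRUCT_PWQ = %#lx\n", data, data & pwq_flag);
	return 0;
}
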
> 
> Fixes: 925d046e7e52 ("RDMA/core: Add a netevent notifier to cma")
> Cc: stable@...r.kernel.org
> Co-developed-by: Håkon Bugge <haakon.bugge@...cle.com>
> Signed-off-by: Håkon Bugge <haakon.bugge@...cle.com>
> Signed-off-by: Sharath Srinivasan <sharath.srinivasan@...cle.com>

A gentle ping on this patch.


Thxs, Håkon


> ---
> v1->v2 cc:stable@...r.kernel.org
> ---
> drivers/infiniband/core/cma.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
> index 91db10515d74..176d0b3e4488 100644
> --- a/drivers/infiniband/core/cma.c
> +++ b/drivers/infiniband/core/cma.c
> @@ -72,6 +72,8 @@ static const char * const cma_events[] = {
> static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
> 			      enum ib_gid_type gid_type);
> 
> +static void cma_netevent_work_handler(struct work_struct *_work);
> +
> const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event)
> {
> 	size_t index = event;
> @@ -1033,6 +1035,7 @@ __rdma_create_id(struct net *net, rdma_cm_event_handler event_handler,
> 	get_random_bytes(&id_priv->seq_num, sizeof id_priv->seq_num);
> 	id_priv->id.route.addr.dev_addr.net = get_net(net);
> 	id_priv->seq_num &= 0x00ffffff;
> +	INIT_WORK(&id_priv->id.net_work, cma_netevent_work_handler);
> 
> 	rdma_restrack_new(&id_priv->res, RDMA_RESTRACK_CM_ID);
> 	if (parent)
> @@ -5227,7 +5230,6 @@ static int cma_netevent_callback(struct notifier_block *self,
> 		if (!memcmp(current_id->id.route.addr.dev_addr.dst_dev_addr,
> 			   neigh->ha, ETH_ALEN))
> 			continue;
> -		INIT_WORK(&current_id->id.net_work, cma_netevent_work_handler);
> 		cma_id_get(current_id);
> 		queue_work(cma_wq, &current_id->id.net_work);
> 	}
> --
> 2.39.5 (Apple Git-154)
