Date:   Fri, 28 Oct 2022 12:14:02 -0400
From:   Daniel Jordan <daniel.m.jordan@...cle.com>
To:     Nicolai Stange <nstange@...e.de>
Cc:     Steffen Klassert <steffen.klassert@...unet.com>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Martin Doucha <mdoucha@...e.cz>, linux-crypto@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] padata: avoid potential UAFs to the padata_shell
 from padata_reorder()

On Wed, Oct 19, 2022 at 10:37:08AM +0200, Nicolai Stange wrote:
> Even though the parallel_data "pd" instance passed to padata_reorder() is
> guaranteed to exist as per the reference held by its callers, the same is
> not true for the associated padata_shell, pd->ps. More specifically, once
> the last padata_priv request has been completed, either at entry from
> padata_reorder() or concurrently to it, the padata API users are well
> within their right to free the padata_shell instance.

The synchronize_rcu change seems to make padata_reorder() safe from freed
ps's, with the exception of a straggler reorder_work.  For that, I think
something like this hybrid of your code and mine is enough to plug the
hole.  It's on top of patches 1-2 and my hunk from 3.  It has to take an
extra ref on pd, but only in the rare case where the reorder work is used.
Thoughts?

diff --git a/kernel/padata.c b/kernel/padata.c
index cd6740ae6629..f14c256a0ee3 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -277,7 +277,7 @@ static struct padata_priv *padata_find_next(struct parallel_data *pd,
 
 static void padata_reorder(struct parallel_data *pd)
 {
-	struct padata_instance *pinst = pd->ps->pinst;
+	struct padata_instance *pinst;
 	int cb_cpu;
 	struct padata_priv *padata;
 	struct padata_serial_queue *squeue;
@@ -314,7 +314,7 @@ static void padata_reorder(struct parallel_data *pd)
 		list_add_tail(&padata->list, &squeue->serial.list);
 		spin_unlock(&squeue->serial.lock);
 
-		queue_work_on(cb_cpu, pinst->serial_wq, &squeue->work);
+		queue_work_on(cb_cpu, pd->ps->pinst->serial_wq, &squeue->work);
 	}
 
 	spin_unlock_bh(&pd->lock);
@@ -330,8 +330,10 @@ static void padata_reorder(struct parallel_data *pd)
 	smp_mb();
 
 	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
-	if (!list_empty(&reorder->list) && padata_find_next(pd, false))
-		queue_work(pinst->serial_wq, &pd->reorder_work);
+	if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
+		if (queue_work(pd->ps->pinst->serial_wq, &pd->reorder_work))
+			padata_get_pd(pd);
+	}
 }
 
 static void invoke_padata_reorder(struct work_struct *work)
@@ -342,6 +344,7 @@ static void invoke_padata_reorder(struct work_struct *work)
 	pd = container_of(work, struct parallel_data, reorder_work);
 	padata_reorder(pd);
 	local_bh_enable();
+	padata_put_pd(pd);
 }
 
 static void padata_serial_worker(struct work_struct *serial_work)

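For context, the diff above leans on pd refcount helpers from earlier in
the series ("my hunk from 3"), which aren't shown here.  A minimal sketch
of what such helpers could look like inside kernel/padata.c, assuming
parallel_data carries a refcount_t refcnt and padata_free_pd() is the
teardown path (the helper bodies below are an illustration, not the
actual hunk):

/* Sketch only: take an extra reference on pd, e.g. before queueing
 * reorder_work, so pd outlives the deferred work. */
static void padata_get_pd(struct parallel_data *pd)
{
	refcount_inc(&pd->refcnt);
}

/* Sketch only: drop a reference; free pd once the last one is gone. */
static void padata_put_pd(struct parallel_data *pd)
{
	if (refcount_dec_and_test(&pd->refcnt))
		padata_free_pd(pd);
}

With helpers along these lines, the queue_work() return value check in
padata_reorder() ensures the extra reference is taken only when the work
was actually queued, and invoke_padata_reorder() drops it after the
deferred pass completes.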