Message-Id: <20210320153832.1033687-1-axboe@kernel.dk>
Date:   Sat, 20 Mar 2021 09:38:30 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     io-uring@...r.kernel.org
Cc:     torvalds@...ux-foundation.org, ebiederm@...ssion.com,
        linux-kernel@...r.kernel.org, oleg@...hat.com
Subject: [PATCHSET 0/2] PF_IO_WORKER signal tweaks

Hi,

Been trying to ensure that we do the right thing wrt signals and
PF_IO_WORKER threads, and I think there are two cases we need to handle
explicitly:

1) Just don't allow signals to them in general. We do mask everything
   as blocked, outside of SIGKILL, so things like wants_signal() will
   never return true for them. But it's still possible to send them a
   signal via (ultimately) group_send_sig_info(). This will then deliver
   the signal to the original io_uring owning task, and that seems a bit
   unexpected. So just don't allow them in general.

2) STOP is done a bit differently, and we should not allow that either.

Outside of that, I've been looking at same_thread_group(). This will
currently return true for an io_uring task and its IO workers, since
they do share ->signal. From looking at the kernel users of this, that
actually seems OK for the cases I checked. One is accounting related,
which we obviously want, and others are related to permissions between
tasks. FWIW, I ran with the below and didn't observe any ill effects,
but I'd like someone to actually think about and verify that PF_IO_WORKER
same_thread_group() usage is sane.

diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 3f6a0fcaa10c..a580bc0f8aa3 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -667,10 +667,17 @@ static inline bool thread_group_leader(struct task_struct *p)
 	return p->exit_signal >= 0;
 }
 
+static inline
+bool same_thread_group_account(struct task_struct *p1, struct task_struct *p2)
+{
+	return p1->signal == p2->signal;
+}
+
 static inline
 bool same_thread_group(struct task_struct *p1, struct task_struct *p2)
 {
-	return p1->signal == p2->signal;
+	return same_thread_group_account(p1, p2) &&
+			!((p1->flags | p2->flags) & PF_IO_WORKER);
 }
 
 static inline struct task_struct *next_thread(const struct task_struct *p)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 5f611658eeab..625110cacc2a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -307,7 +307,7 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
 	 * those pending times and rely only on values updated on tick or
 	 * other scheduler action.
 	 */
-	if (same_thread_group(current, tsk))
+	if (same_thread_group_account(current, tsk))
 		(void) task_sched_runtime(current);
 
 	rcu_read_lock();

