Message-ID: <20190326132955.GA16837@redhat.com>
Date: Tue, 26 Mar 2019 14:29:55 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Christopher Lameter <cl@...ux.com>
Cc: Waiman Long <longman@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
selinux@...r.kernel.org, Paul Moore <paul@...l-moore.com>,
Stephen Smalley <sds@...ho.nsa.gov>,
Eric Paris <eparis@...isplace.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>
Subject: Re: [PATCH 2/4] signal: Make flush_sigqueue() use free_q to release
memory
Sorry, I am sick and can't work; hopefully I'll return tomorrow.
On 03/22, Christopher Lameter wrote:
>
> On Fri, 22 Mar 2019, Waiman Long wrote:
>
> > I am looking forward to it.
>
> There is also already rcu being used in these paths. kfree_rcu() would not
> be enough? It is an established mechanism that is mature and well
> understood.
But why do we want to increase the number of rcu callbacks in flight?
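Just to illustrate what that would mean (not part of the patch below): struct
sigqueue has no rcu_head today and is allocated from its own kmem_cache
(sigqueue_cachep), so it would really be call_rcu() + kmem_cache_free() rather
than kfree_rcu() itself; the ->rcu member here is made up:

	static void sigqueue_free_rcu(struct rcu_head *rcu)
	{
		/* hypothetical ->rcu member added to struct sigqueue */
		struct sigqueue *q = container_of(rcu, struct sigqueue, rcu);

		kmem_cache_free(sigqueue_cachep, q);
	}

and the flusher would do

	list_del_init(&q->list);
	call_rcu(&q->rcu, sigqueue_free_rcu);

so every pending signal costs one more callback waiting for a grace period.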
For the moment, let's discuss the exiting tasks only. The only reason why
flush_sigqueue(&tsk->pending) needs spin_lock_irq() is the race with
release_posix_timer()->sigqueue_free() from another thread, which can remove
a SIGQUEUE_PREALLOC'ed sigqueue from the list. With the simple patch below
flush_sigqueue() can be called lockless with irqs enabled.
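The racing side, roughly paraphrased from sigqueue_free() in kernel/signal.c:
it takes ->siglock, clears SIGQUEUE_PREALLOC, and frees the sigqueue itself
only if it is no longer queued:

	spin_lock_irqsave(&current->sighand->siglock, flags);
	q->flags &= ~SIGQUEUE_PREALLOC;
	if (!list_empty(&q->list))	/* still queued, freed when dequeued */
		q = NULL;
	spin_unlock_irqrestore(&current->sighand->siglock, flags);

	if (q)
		__sigqueue_free(q);

without ->siglock, the exiting flusher and sigqueue_free() would race on both
the list and the SIGQUEUE_PREALLOC flag.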
However, this change is not enough; we need to do something similar with
do_sigaction()->flush_sigqueue_mask(), and this is less simple.
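The other path, roughly paraphrased from do_sigaction(): when the new handler
means the signal is ignored, the pending instances are flushed from the shared
queue and from every thread under ->siglock, while those threads keep running;
so, unlike the exit path, there is no obvious later lockless point at which the
removed entries could be freed.

	spin_lock_irq(&p->sighand->siglock);
	...
	if (sig_handler_ignored(sig_handler(p, sig), sig)) {
		sigemptyset(&mask);
		sigaddset(&mask, sig);
		flush_sigqueue_mask(&mask, &p->signal->shared_pending);
		for_each_thread(p, t)
			flush_sigqueue_mask(&mask, &t->pending);
	}
	...
	spin_unlock_irq(&p->sighand->siglock);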
So I won't really argue with kfree_rcu(), but I am not sure this is the best
option.
Oleg.
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -85,6 +85,17 @@ static void __unhash_process(struct task_struct *p, bool group_dead)
 	list_del_rcu(&p->thread_node);
 }
 
+// Rename me and move into signal.c
+void remove_prealloced(struct sigpending *queue)
+{
+	struct sigqueue *q, *t;
+
+	list_for_each_entry_safe(q, t, &queue->list, list) {
+		if (q->flags & SIGQUEUE_PREALLOC)
+			list_del_init(&q->list);
+	}
+}
+
 /*
  * This function expects the tasklist_lock write-locked.
  */
@@ -160,16 +171,15 @@ static void __exit_signal(struct task_struct *tsk)
 	 * Do this under ->siglock, we can race with another thread
 	 * doing sigqueue_free() if we have SIGQUEUE_PREALLOC signals.
 	 */
-	flush_sigqueue(&tsk->pending);
+	if (!group_dead)
+		remove_prealloced(&tsk->pending);
 	tsk->sighand = NULL;
 	spin_unlock(&sighand->siglock);
 
 	__cleanup_sighand(sighand);
 	clear_tsk_thread_flag(tsk, TIF_SIGPENDING);
-	if (group_dead) {
-		flush_sigqueue(&sig->shared_pending);
+	if (group_dead)
 		tty_kref_put(tty);
-	}
 }
 
 static void delayed_put_task_struct(struct rcu_head *rhp)
@@ -221,6 +231,11 @@ void release_task(struct task_struct *p)
 	write_unlock_irq(&tasklist_lock);
 	cgroup_release(p);
 	release_thread(p);
+
+	flush_sigqueue(&p->pending);
+	if (thread_group_leader(p))
+		flush_sigqueue(&p->signal->shared_pending);
+
 	call_rcu(&p->rcu, delayed_put_task_struct);
 
 	p = leader;