Date:	Thu, 28 Sep 2006 23:00:57 +0530
From:	Srivatsa Vaddagiri <vatsa@...ibm.com>
To:	Ingo Molnar <mingo@...e.hu>, Nick Piggin <nickpiggin@...oo.com.au>
Cc:	linux-kernel@...r.kernel.org, Kirill Korotaev <dev@...nvz.org>,
	Mike Galbraith <efault@....de>,
	Balbir Singh <balbir@...ibm.com>, sekharan@...ibm.com,
	Andrew Morton <akpm@...l.org>, nagar@...son.ibm.com,
	matthltc@...ibm.com, dipankar@...ibm.com,
	ckrm-tech@...ts.sourceforge.net
Subject: [RFC, PATCH 5/9] CPU Controller V2 - deal with movement of tasks

When a task moves between groups (as initiated by an administrator), it has to
be removed from the runqueue of its old group and added to the runqueue of its
new group. This patch defines that move operation.
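
For illustration only (not part of the patch), here is a minimal sketch of how a
group-management layer might drive the two hooks added below. The caller, the
set_task_grp() helper and the ops-table pointer are hypothetical; they exist
only to show the two-step protocol:

static int example_move_task(struct task_grp_ops *ops, struct task_struct *tsk,
			     struct task_grp *old, struct task_grp *new)
{
	unsigned long flags;
	int rc;

	/*
	 * Step 1: the group pointer still names 'old'. Dequeue the task
	 * from the old group's runqueue (rc == 2) or just take the rq
	 * lock (rc == 1). rc == 0 means old == new and nothing was done.
	 */
	rc = ops->pre_move_task(tsk, &flags, old, new);
	if (!rc)
		return 0;

	/*
	 * Switch whatever pointer identifies the task's group (e.g. a
	 * cpuset-style pointer) while the rq lock is held.
	 */
	set_task_grp(tsk, new);		/* illustrative helper */

	/*
	 * Step 2: re-enqueue on the new group's runqueue if needed
	 * (rc == 2) and release the rq lock taken in step 1.
	 */
	ops->post_move_task(tsk, &flags, rc);
	return 0;
}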

Signed-off-by: Srivatsa Vaddagiri <vatsa@...ibm.com>

---

 linux-2.6.18-root/include/linux/sched.h |    3 +
 linux-2.6.18-root/kernel/sched.c        |   61 ++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff -puN kernel/sched.c~cpu_ctlr_move_task kernel/sched.c
--- linux-2.6.18/kernel/sched.c~cpu_ctlr_move_task	2006-09-28 16:40:24.971244616 +0530
+++ linux-2.6.18-root/kernel/sched.c	2006-09-28 17:23:18.115067296 +0530
@@ -7261,10 +7261,71 @@ int sched_get_quota(struct task_grp *tg)
 	return cpu_quota(tg);
 }
 
+/*
+ * Move a task from one group to another. If the task is already on a
+ * runqueue, this involves removing the task from its old group's runqueue
+ * and adding it to its new group's runqueue.
+ *
+ * This is slightly tricky, however, given that:
+ *
+ *	- some pointer in the task struct (e.g. cpuset) represents the group
+ *	  to which a task belongs.
+ *	- at any given point during the move operation, that pointer either
+ *	  points to the old group or to the new group, but never both.
+ *	- dequeue_task/enqueue_task rely on this pointer to know which
+ *	  task_grp_rq the task is to be removed from/added to.
+ *
+ * Hence the move is accomplished in two steps:
+ *
+ *	1. In the first step, sched_pre_move_task() is called with the group
+ *	   pointer still set to the old group to which the task belonged.
+ *	   If the task was on a runqueue, sched_pre_move_task() removes
+ *	   it from that runqueue.
+ *
+ *	2. In the second step, sched_post_move_task() is called with the
+ *	   group pointer set to the new group to which the task belongs.
+ *	   sched_post_move_task() adds the task to its new runqueue if it
+ *	   was on a runqueue in step 1.
+ *
+ */
+
+int sched_pre_move_task(struct task_struct *tsk, unsigned long *flags,
+			 struct task_grp *tg_old, struct task_grp *tg_new)
+{
+	struct rq *rq;
+	int rc = 0;
+
+	if (tg_new == tg_old)
+		return rc;
+
+	rq = task_rq_lock(tsk, flags);
+
+	rc = 1;
+	if (tsk->array) {
+		rc = 2;
+		deactivate_task(tsk, rq);
+	}
+
+	return rc;
+}
+
+/* called with rq lock held */
+void sched_post_move_task(struct task_struct *tsk, unsigned long *flags, int rc)
+{
+	struct rq *rq = task_rq(tsk);
+
+	if (rc == 2)
+		__activate_task(tsk, rq);
+
+	task_rq_unlock(rq, flags);
+}
+
 static struct task_grp_ops sched_grp_ops = {
 	.alloc_group = sched_alloc_group,
 	.dealloc_group = sched_dealloc_group,
 	.assign_quota = sched_assign_quota,
+	.pre_move_task = sched_pre_move_task,
+	.post_move_task = sched_post_move_task,
 	.get_quota = sched_get_quota,
 };
 
diff -puN include/linux/sched.h~cpu_ctlr_move_task include/linux/sched.h
--- linux-2.6.18/include/linux/sched.h~cpu_ctlr_move_task	2006-09-28 16:40:24.976243856 +0530
+++ linux-2.6.18-root/include/linux/sched.h	2006-09-28 17:23:18.119066688 +0530
@@ -1618,6 +1618,9 @@ struct task_grp_ops {
 	void *(*alloc_group)(void);
 	void (*dealloc_group)(struct task_grp *grp);
 	int (*assign_quota)(struct task_grp *grp, int quota);
+	int (*pre_move_task)(struct task_struct *tsk, unsigned long *,
+				struct task_grp *old, struct task_grp *new);
+	void (*post_move_task)(struct task_struct *tsk, unsigned long *, int);
 	int (*get_quota)(struct task_grp *grp);
 };
 
_
-- 
Regards,
vatsa