Message-ID: <20140730213219.27604.11218.stgit@localhost>
Date:	Thu, 31 Jul 2014 01:42:35 +0400
From:	Kirill Tkhai <tkhai@...dex.ru>
To:	linux-kernel@...r.kernel.org
Cc:	nicolas.pitre@...aro.org, peterz@...radead.org, pjt@...gle.com,
	oleg@...hat.com, rostedt@...dmis.org, umgwanakikbuti@...il.com,
	ktkhai@...allels.com, tim.c.chen@...ux.intel.com, mingo@...nel.org
Subject: [PATCH v3 0/5] sched: Add on_rq states and remove several double rq
 locks

This series aims to get rid of several places where the locks of two RQs are held
at the same time.

Patch [1/5] is a preparation/cleanup. It replaces the old check (task_struct::on_rq == 1)
with the new (task_struct::on_rq == ONRQ_QUEUED) everywhere, behind a wrapper. No functional changes.
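
For illustration only, roughly what such a state constant and wrapper could look
like (the helper name task_queued() is a placeholder here; see the patch itself
for the real definition):

        /* Sketch only: a queued-state constant and a wrapper around on_rq. */
        #define ONRQ_QUEUED     1       /* task is enqueued on a runqueue */

        static inline int task_queued(struct task_struct *p)
        {
                return p->on_rq == ONRQ_QUEUED;
        }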

Patch [2/5] is the main one in the series. It introduces the new ONRQ_MIGRATING state
and teaches the scheduler to understand it (only small changes are needed in
try_to_wake_up() and the task_rq_lock() family). This will be used in the following way:

        (we are changing the task's rq)

        raw_spin_lock(&src_rq->lock);

        p = ...; /* Some src_rq task */

        dequeue_task(src_rq, p, 0);
        p->on_rq = ONRQ_MIGRATING;
        set_task_cpu(p, dst_cpu);
        raw_spin_unlock(&src_rq->lock);

        /*
         * Now p is dequeued and both
         * RQ locks are unlocked, but
         * its on_rq is not zero.
         * Nobody can manipulate p
         * while it is migrating,
         * even though the spinlocks
         * are unlocked.
         */

        raw_spin_lock(&dst_rq->lock);
        p->on_rq = ONRQ_QUEUED;
        enqueue_task(dst_rq, p, 0);
        raw_spin_unlock(&dst_rq->lock);
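
To ride out this window, the lock-and-check loop in the task_rq_lock() family can
simply retry while the task is marked migrating. A rough sketch of the idea (the
function name lock_task_rq() is a placeholder, not the actual patch code):

        static struct rq *lock_task_rq(struct task_struct *p, unsigned long *flags)
        {
                struct rq *rq;

                for (;;) {
                        raw_spin_lock_irqsave(&p->pi_lock, *flags);
                        rq = task_rq(p);
                        raw_spin_lock(&rq->lock);
                        /* Stable only if the task is not mid-migration. */
                        if (likely(rq == task_rq(p) && p->on_rq != ONRQ_MIGRATING))
                                return rq;
                        raw_spin_unlock(&rq->lock);
                        raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
                        cpu_relax();
                }
        }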

Patches [3,4,5/5] remove the double locks and use the new ONRQ_MIGRATING state.
They allow a few functions to be called without both RQ locks held, which looks safe to me.

The benefit is that double_rq_lock() is no longer needed there, so we reduce the
total time during which RQ locks are held.
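
For reference, the double-lock pattern being removed looks roughly like this
(simplified sketch of double_rq_lock(), which takes both RQ locks ordered by
address to avoid ABBA deadlock; see the scheduler code for the real thing):

        static void double_rq_lock_sketch(struct rq *rq1, struct rq *rq2)
        {
                if (rq1 == rq2) {
                        raw_spin_lock(&rq1->lock);
                } else if (rq1 < rq2) {
                        raw_spin_lock(&rq1->lock);
                        raw_spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
                } else {
                        raw_spin_lock(&rq2->lock);
                        raw_spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
                }
        }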

---

Kirill Tkhai (5):
      sched: Wrapper for checking task_struct::on_rq
      sched: Teach scheduler to understand ONRQ_MIGRATING state
      sched: Remove double_rq_lock() from __migrate_task()
      sched/fair: Remove double_lock_balance() from active_load_balance_cpu_stop()
      sched/fair: Remove double_lock_balance() from load_balance()


 kernel/sched/core.c      |  115 +++++++++++++++++--------------
 kernel/sched/deadline.c  |   14 ++--
 kernel/sched/fair.c      |  172 +++++++++++++++++++++++++++++++---------------
 kernel/sched/rt.c        |   16 ++--
 kernel/sched/sched.h     |   13 +++
 kernel/sched/stop_task.c |    2 -
 6 files changed, 211 insertions(+), 121 deletions(-)

--
Signed-off-by: Kirill Tkhai <ktkhai@...allels.com>