Date: Fri, 8 Aug 2014 21:13:26 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...icios.com, josh@...htriplett.org,
	tglx@...utronix.de, rostedt@...dmis.org, dhowells@...hat.com,
	edumazet@...gle.com, dvhart@...ux.intel.com, fweisbec@...il.com,
	oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH v3 tip/core/rcu 1/9] rcu: Add call_rcu_tasks()

So I think you can make the entire thing work with
rcu_note_context_switch().

If we have the sync thing do something like:

	for_each_task(t) {
		/* account the task, then mark it for RCU attention */
		atomic_inc(&rcu_tasks);
		atomic_or(&t->rcu_attention, RCU_TASK);
		smp_mb__after_atomic();

		/* already off the runqueue: do its decrement ourselves */
		if (!t->on_rq) {
			if (atomic_test_and_clear(&t->rcu_attention, RCU_TASK))
				atomic_dec(&rcu_tasks);
		}
	}

	wait_event(rcu_tasks_wq, !atomic_read(&rcu_tasks));

And then we have rcu_task_note_context_switch() (as called from
rcu_note_context_switch()) do:

	/* we want actual context switches, ignore preemption */
	if (preempt_count() & PREEMPT_ACTIVE)
		return;

	/* if not marked for RCU attention, bail */
	if (!(atomic_read(&t->rcu_attention) & RCU_TASK))
		return;

	/* raced with sync_rcu_task(), all done */
	if (!atomic_test_and_clear(&t->rcu_attention, RCU_TASK))
		return;

	/* not the last.. */
	if (!atomic_dec_and_test(&rcu_tasks))
		return;

	wake_up(&rcu_tasks_wq);

The idea is to share rcu_attention with rcu_preempt, such that we only
touch a single 'extra' cacheline in case RCU doesn't need to pay
attention to this task. Also, it would be good if we could manage to
squeeze this variable into a cacheline that is already touched by
schedule(), so as not to incur undue overhead.

And on that note, you probably should change rcu_sched_qs() to read:

	this_cpu_inc(rcu_sched_data.passed_quiesce);

That avoids touching the per-cpu data offset.

And it would be very good if we could avoid the unconditional IRQ flag
fiddling in rcu_preempt_note_context_switch(), since that is expensive;
avoiding it looks entirely feasible in the 'normal' case where
t->rcu_read_unlock_special doesn't have RCU_READ_UNLOCK_NEED_QS set.
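To make the counting handshake concrete, below is a minimal userspace
model of it using C11 atomics and pthreads. Everything here is a
stand-in rather than kernel code: synchronize_tasks() plays the sync
side, note_context_switch() plays the hook, a mutex/condvar pair models
the waitqueue, sequentially consistent atomics stand in for
smp_mb__after_atomic(), and the hypothetical atomic_test_and_clear() is
modelled with atomic_fetch_and():

	/* tasks_gp_model.c -- build with: cc -pthread tasks_gp_model.c */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define RCU_TASK	0x01
	#define NR_TASKS	8

	struct task {
		atomic_int rcu_attention;	/* models t->rcu_attention */
		atomic_int on_rq;		/* models t->on_rq */
	};

	static struct task tasks[NR_TASKS];
	static atomic_int rcu_tasks;		/* tasks outstanding in this GP */
	static atomic_int gp_done;		/* stops the workers */

	/* mutex/condvar stand in for the waitqueue */
	static pthread_mutex_t wq_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t wq_cond = PTHREAD_COND_INITIALIZER;

	/* atomic_test_and_clear() modelled with fetch_and */
	static int test_and_clear_attention(struct task *t)
	{
		return atomic_fetch_and(&t->rcu_attention, ~RCU_TASK) & RCU_TASK;
	}

	/* the rcu_task_note_context_switch() side */
	static void note_context_switch(struct task *t)
	{
		/* if not marked for RCU attention, bail */
		if (!(atomic_load(&t->rcu_attention) & RCU_TASK))
			return;
		/* raced with the sync side, which did the decrement */
		if (!test_and_clear_attention(t))
			return;
		/* not the last.. */
		if (atomic_fetch_sub(&rcu_tasks, 1) != 1)
			return;
		pthread_mutex_lock(&wq_lock);
		pthread_cond_signal(&wq_cond);
		pthread_mutex_unlock(&wq_lock);
	}

	/* the sync side */
	static void synchronize_tasks(void)
	{
		for (int i = 0; i < NR_TASKS; i++) {
			struct task *t = &tasks[i];

			atomic_fetch_add(&rcu_tasks, 1);
			atomic_fetch_or(&t->rcu_attention, RCU_TASK);
			/* seq_cst orders the mark before the on_rq check */
			if (!atomic_load(&t->on_rq)) {
				if (test_and_clear_attention(t))
					atomic_fetch_sub(&rcu_tasks, 1);
			}
		}
		pthread_mutex_lock(&wq_lock);
		while (atomic_load(&rcu_tasks))
			pthread_cond_wait(&wq_cond, &wq_lock);
		pthread_mutex_unlock(&wq_lock);
	}

	/* each worker alternates between running and switched out */
	static void *worker(void *arg)
	{
		struct task *t = arg;

		while (!atomic_load(&gp_done)) {
			atomic_store(&t->on_rq, 1);
			usleep(rand() % 1000);
			note_context_switch(t);	/* a voluntary switch */
			atomic_store(&t->on_rq, 0);
			usleep(rand() % 1000);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t tid[NR_TASKS];

		for (int i = 0; i < NR_TASKS; i++)
			pthread_create(&tid[i], NULL, worker, &tasks[i]);
		synchronize_tasks();
		printf("grace period complete, rcu_tasks=%d\n",
		       atomic_load(&rcu_tasks));
		atomic_store(&gp_done, 1);
		for (int i = 0; i < NR_TASKS; i++)
			pthread_join(tid[i], NULL);
		return 0;
	}

The interesting race is both sides hitting test_and_clear on the same
task at once; fetch_and returns the old value, so exactly one of them
sees RCU_TASK set and performs the rcu_tasks decrement.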
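And the last point amounts to the usual check-before-serialize pattern:
take a cheap racy read first, and only enter the expensive section when
the flag is actually set. A sketch of just that shape, with a mutex
standing in for the IRQ-disabled section and all names hypothetical:

	#include <pthread.h>
	#include <stdatomic.h>

	#define NEED_QS	0x01	/* stands in for RCU_READ_UNLOCK_NEED_QS */

	static atomic_int special;	/* stands in for t->rcu_read_unlock_special */
	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	void note_context_switch_fastpath(void)
	{
		/* common case: flag clear, bail without any serialization */
		if (!(atomic_load_explicit(&special, memory_order_relaxed) & NEED_QS))
			return;

		pthread_mutex_lock(&lock);
		/* recheck inside the 'expensive' section; the flag may
		 * have been cleared since the racy read */
		if (atomic_load(&special) & NEED_QS) {
			/* ... report the quiescent state ... */
			atomic_fetch_and(&special, ~NEED_QS);
		}
		pthread_mutex_unlock(&lock);
	}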