Message-ID: <20140801150601.GC13134@localhost.localdomain>
Date:	Fri, 1 Aug 2014 17:06:02 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
	rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
	dvhart@...ux.intel.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH v3 tip/core/rcu 1/9] rcu: Add call_rcu_tasks()

On Thu, Jul 31, 2014 at 07:04:16PM -0700, Paul E. McKenney wrote:
> On Fri, Aug 01, 2014 at 01:57:50AM +0200, Frederic Weisbecker wrote:
> > 
> > So this thread is going to poll every second? I guess something somewhere
> > prevents it from running when the system is idle? I'm not familiar with the
> > whole patchset yet, but even without that it looks like very annoying noise.
> > Why not use something wait/wakeup based?
> 
> And a later patch does the wait/wakeup thing.  Start stupid, add small
> amounts of sophistication incrementally.

Aah indeed! :)
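
For the record, the distinction I had in mind looks roughly like the sketch
below. This is only my illustration; the wait-queue and callback-list names
are made up for the sketch and are not taken from the later patches.

#include <linux/compiler.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(rcu_tasks_cbs_wq);	/* name made up for the sketch */
static struct rcu_head *rcu_tasks_cbs_head;		/* name made up for the sketch */

/* Polling form: wake up once a second whether or not callbacks are queued. */
static int poll_kthread(void *arg)
{
	for (;;) {
		schedule_timeout_interruptible(HZ);
		if (ACCESS_ONCE(rcu_tasks_cbs_head)) {
			/* ... process the callbacks ... */
		}
	}
	return 0;
}

/* Wait/wakeup form: sleep until the enqueue side does wake_up(&rcu_tasks_cbs_wq). */
static int wait_kthread(void *arg)
{
	for (;;) {
		wait_event_interruptible(rcu_tasks_cbs_wq,
					 ACCESS_ONCE(rcu_tasks_cbs_head));
		/* ... process the callbacks ... */
	}
	return 0;
}

So with the later patch switching to the second form, the once-a-second
noise goes away, which answers my concern.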

> 
> > > +			flush_signals(current);
> > > +			continue;
> > > +		}
> > > +
> > > +		/*
> > > +		 * Wait for all pre-existing t->on_rq and t->nvcsw
> > > +		 * transitions to complete.  Invoking synchronize_sched()
> > > +		 * suffices because all these transitions occur with
> > > +		 * interrupts disabled.  Without this synchronize_sched(),
> > > +		 * a read-side critical section that started before the
> > > +		 * grace period might be incorrectly seen as having started
> > > +		 * after the grace period.
> > > +		 *
> > > +		 * This synchronize_sched() also dispenses with the
> > > +		 * need for a memory barrier on the first store to
> > > +		 * ->rcu_tasks_holdout, as it forces the store to happen
> > > +		 * after the beginning of the grace period.
> > > +		 */
> > > +		synchronize_sched();
> > > +
> > > +		/*
> > > +		 * There were callbacks, so we need to wait for an
> > > +		 * RCU-tasks grace period.  Start off by scanning
> > > +		 * the task list for tasks that are not already
> > > +		 * voluntarily blocked.  Mark these tasks and make
> > > +		 * a list of them in rcu_tasks_holdouts.
> > > +		 */
> > > +		rcu_read_lock();
> > > +		for_each_process_thread(g, t) {
> > > +			if (t != current && ACCESS_ONCE(t->on_rq) &&
> > > +			    !is_idle_task(t)) {
> > > +				get_task_struct(t);
> > > +				t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw);
> > > +				ACCESS_ONCE(t->rcu_tasks_holdout) = 1;
> > > +				list_add(&t->rcu_tasks_holdout_list,
> > > +					 &rcu_tasks_holdouts);
> > > +			}
> > > +		}
> > > +		rcu_read_unlock();
> > > +
> > > +		/*
> > > +		 * Each pass through the following loop scans the list
> > > +		 * of holdout tasks, removing any that are no longer
> > > +		 * holdouts.  When the list is empty, we are done.
> > > +		 */
> > > +		while (!list_empty(&rcu_tasks_holdouts)) {
> > > +			schedule_timeout_interruptible(HZ / 10);
> > 
> > OTOH here it is not annoying because it should only happen when RCU-tasks
> > is used, which should be rare.
> 
> Glad you like it!
> 
> I will likely also add checks for other things needing the current CPU.

Ok, thanks!
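
For reference, from the fields used in the hunk above I would expect the
per-task check driven by that holdout loop to look roughly like the sketch
below. This is my own reconstruction, meant to live alongside the quoted
code, not the actual code from the rest of the series.

static void check_holdout_task(struct task_struct *t)
{
	/*
	 * A task stops being a holdout once it has voluntarily blocked
	 * (!t->on_rq), has done at least one voluntary context switch
	 * since it was marked (t->nvcsw has changed), or has already
	 * been released.
	 */
	if (!ACCESS_ONCE(t->rcu_tasks_holdout) ||
	    t->rcu_tasks_nvcsw != ACCESS_ONCE(t->nvcsw) ||
	    !ACCESS_ONCE(t->on_rq)) {
		ACCESS_ONCE(t->rcu_tasks_holdout) = 0;
		list_del_init(&t->rcu_tasks_holdout_list);
		put_task_struct(t);	/* pairs with get_task_struct() above */
	}
}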
