Date:	Thu, 16 Jun 2011 22:25:50 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hughd@...gle.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	David Miller <davem@...emloft.net>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Russell King <rmk@....linux.org.uk>,
	Paul Mundt <lethal@...ux-sh.org>,
	Jeff Dike <jdike@...toit.com>,
	Richard Weinberger <richard@....at>,
	Tony Luck <tony.luck@...el.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Mel Gorman <mel@....ul.ie>, Nick Piggin <npiggin@...nel.dk>,
	Namhyung Kim <namhyung@...il.com>, ak@...ux.intel.com,
	shaohua.li@...el.com, alex.shi@...el.com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	"Rafael J. Wysocki" <rjw@...k.pl>
Subject: Re: [GIT PULL] Re: REGRESSION: Performance regressions from
 switching anon_vma->lock to mutex


* Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:

> > The funny thing about this workload is that context-switches are 
> > really a fastpath here and we are using anonymous IRQ-triggered 
> > softirqs embedded in random task contexts as a workaround for 
> > that.
> 
> The other thing that the IRQ-triggered softirqs do is to get the 
> callbacks invoked in cases where a CPU-bound user thread is never 
> context switching.

Yeah - but this workload didn't have that.

> Of course, one alternative might be to set_need_resched() to force 
> entry into the scheduler as needed.

No need for that: we can just do the callback not in softirq but in 
regular syscall context in that case, in the return-to-userspace 
notifier. (see TIF_USER_RETURN_NOTIFY and the USER_RETURN_NOTIFIER 
facility)

Abusing a facility like setting need_resched artificially will 
generally cause trouble.

> > [ I think we'll have to revisit this issue and do it properly:
> >   quiescent state is mostly defined by context-switches here, so we
> >   could do the RCU callbacks from the task that turns a CPU
> >   quiescent, right in the scheduler context-switch path - perhaps
> >   with an option for SCHED_FIFO tasks to *not* do GC.
> 
> I considered this approach for TINY_RCU, but dropped it in favor of 
> reducing the interlocking between the scheduler and RCU callbacks. 
> Might be worth revisiting, though.  If SCHED_FIFO task omit RCU 
> callback invocation, then there will need to be some override for 
> CPUs with lots of SCHED_FIFO load, probably similar to RCU's 
> current blimit stuff.

I wouldn't complicate it much for SCHED_FIFO: SCHED_FIFO tasks are 
special and should never run long.

> >   That could possibly be more cache-efficient than softirq execution,
> >   as we'll process a still-hot pool of callbacks instead of doing
> >   them only once per timer tick. It will also make the RCU GC
> >   behavior HZ independent. ]
> 
> Well, the callbacks will normally be cache-cold in any case due to 
> the grace-period delay, [...]

The workloads that are the most critical in this regard tend to be 
context switch intense, so the grace period expiry latency should be 
pretty short.

Or at least significantly shorter than today's HZ frequency, right? 
HZ would still provide an upper bound for the latency.

Btw., the current worst-case grace period latency is in reality more 
like two timer ticks: one for the current CPU to expire and another 
for the longest "other CPU" expiry, right? Average expiry (for 
IRQ-poor workloads) would be 1.5 timer ticks. (If I got my stat 
calculations right!)

> [...] but on the other hand, both tick-independence and the ability 
> to shield a given CPU from RCU callback execution might be quite 
> useful. [...]

Yeah.

> [...] The tick currently does the following for RCU:
> 
> 1.	Informs RCU of user-mode execution (rcu_sched and rcu_bh
> 	quiescent state).
> 
> 2.	Informs RCU of non-dyntick idle mode (again, rcu_sched and
> 	rcu_bh quiescent state).
> 
> 3.	Kicks the current CPU's RCU core processing as needed in
> 	response to actions from other CPUs.
> 
> Frederic's work avoiding ticks in long-running user-mode tasks 
> might take care of #1, and it should be possible to make use of the 
> current dyntick-idle APIs to deal with #2.  Replacing #3 
> efficiently will take some thought.

What is the longest delay the scheduler tick can take typically - 40 
msecs? That would then be the worst-case grace period latency for 
workloads that neither do context switches nor trigger IRQs, right?

> > In any case the proxy kthread model clearly sucked, no argument 
> > about that.
> 
> Indeed, I lost track of the global nature of real-time scheduling.
> :-(

Btw., I think that test was pretty bad: running exim as SCHED_FIFO??

But it does not excuse the kthread model.

> Whatever does the boosting will need to have process context and 
> can be subject to delays, so that pretty much needs to be a 
> kthread. But it will context-switch quite rarely, so should not be 
> a problem.

So user-return notifiers ought to be the ideal platform for that, 
right? We don't even have to touch the scheduler: anything that 
schedules will eventually return to user-space, at which point the 
RCU GC magic can run.

And user-return-notifiers can be triggered from IRQs as well.

That allows us to get rid of softirqs altogether and maybe even speed 
the whole thing up and allow it to be isolated better.

Thanks,

	Ingo