Message-ID: <20150624073503.GH3644@twins.programming.kicks-ass.net>
Date:	Wed, 24 Jun 2015 09:35:03 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Oleg Nesterov <oleg@...hat.com>, tj@...nel.org, mingo@...hat.com,
	linux-kernel@...r.kernel.org, der.herr@...r.at, dave@...olabs.net,
	riel@...hat.com, viro@...IV.linux.org.uk,
	torvalds@...ux-foundation.org
Subject: Re: [RFC][PATCH 12/13] stop_machine: Remove lglock

On Tue, Jun 23, 2015 at 11:26:26AM -0700, Paul E. McKenney wrote:
> > I really think you're making that expedited nonsense far too accessible.
> 
> This has nothing to do with accessibility and everything to do with
> robustness.  And with me not becoming the triage center for too many
> non-RCU bugs.

But by making it so you're rewarding abuse instead of flagging it :-(

> > > And we still need to be able to drop back to synchronize_sched()
> > > (AKA wait_rcu_gp(call_rcu_sched) in this case) in case we have both a
> > > creative user and a long-running RCU-sched read-side critical section.
> > 
> > No, a long-running RCU-sched read-side is a bug and we should fix that;
> > it's called a preemption latency, and we don't like those.
> 
> Yes, we should fix them.  No, they absolutely must not result in a
> meltdown of some unrelated portion of the kernel (like RCU), particularly
> if this situation occurs on some system running a production workload
> that doesn't happen to care about preemption latency.

I still don't see a problem here though; the stop_one_cpu() invocation
for the CPU that's suffering its preemption latency will take longer,
but so what?

How does polling and dropping back to sync_rcu() generate better
behaviour than simply waiting for the completion?

> > > > +		stop_one_cpu(cpu, synchronize_sched_expedited_cpu_stop, NULL);
> > > 
> > > My thought was to use smp_call_function_single(), and to have the function
> > > called recheck dyntick-idle state, avoiding doing a set_tsk_need_resched()
> > > if so.
> > 
> > set_tsk_need_resched() is buggy and should not be used.
> 
> OK, what API is used for this purpose?

As an exception, you (RCU) already have access to resched_cpu(); use
that -- if it doesn't do what you need it to, we'll fix it: you're the
only consumer of it.
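Put together, the smp_call_function_single() variant discussed above would have roughly the following shape. This is a non-runnable kernel-style sketch, not a tested patch: the handler name and the dyntick-recheck helper are hypothetical stand-ins; only resched_cpu() is the real API being recommended.

```
/* Kernel-style sketch only -- not buildable outside a kernel tree.
 * Assumed shape: the IPI handler rechecks dyntick-idle state and,
 * if the CPU is not idle, uses resched_cpu() instead of poking task
 * flags directly with set_tsk_need_resched(). */
static void sync_sched_exp_handler(void *info)	/* hypothetical name */
{
	int cpu = smp_processor_id();

	if (cpu_is_in_dyntick_idle(cpu))	/* hypothetical recheck */
		return;		/* idle CPU is already quiescent */

	resched_cpu(cpu);	/* force a reschedule, which implies a
				 * quiescent state for RCU-sched */
}
```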
