Message-ID: <20100201144759.GD10894@Krystal>
Date:	Mon, 1 Feb 2010 09:47:59 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
	linux-kernel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Nicholas Miell <nmiell@...cast.net>, laijs@...fujitsu.com,
	dipankar@...ibm.com, josh@...htriplett.org, dvhltc@...ibm.com,
	niv@...ibm.com, tglx@...utronix.de, Valdis.Kletnieks@...edu,
	dhowells@...hat.com
Subject: Re: [patch 2/3] scheduler: add full memory barriers upon task
	switch at runqueue lock/unlock

* Nick Piggin (npiggin@...e.de) wrote:
> On Mon, Feb 01, 2010 at 11:36:01AM +0100, Peter Zijlstra wrote:
> > On Mon, 2010-02-01 at 21:11 +1100, Nick Piggin wrote:
> > > All, but only one at a time, no? How much of a DoS is it, really,
> > > to take these locks for a handful of cycles each, per syscall?
> > 
> > I was worrying more about the cacheline thrashing than the lock hold
> > times there.
> 
> Well, it's the same issue really. Look at all the unprivileged files
> in /proc, for example, that can walk every per-cpu cacheline. It takes
> just a single read syscall to touch a lot of them, too.
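
For concreteness, such a reader is typically of the following shape.
This is purely an illustration, not code from any particular /proc
file: foo_count and foo_proc_show are made-up names.

	#include <linux/percpu.h>
	#include <linux/seq_file.h>

	/* Hypothetical per-cpu counter, for illustration only. */
	static DEFINE_PER_CPU(unsigned long, foo_count);

	static int foo_proc_show(struct seq_file *m, void *v)
	{
		unsigned long sum = 0;
		int cpu;

		/*
		 * One remote cacheline touched per CPU, all from a
		 * single unprivileged read() of the /proc file.
		 */
		for_each_online_cpu(cpu)
			sum += per_cpu(foo_count, cpu);
		seq_printf(m, "%lu\n", sum);
		return 0;
	}
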
> 
>  
> > > I mean, we have LOTS of syscalls that take locks, and for a lot
> > > longer (look at dcache_lock).
> > 
> > Yeah, and dcache is a massive pain, isn't it ;-)
> 
> My point is, I don't think this is something we can realistically
> care much about, and it's nowhere near a new or unique problem
> introduced by this one patch.
> 
> It is really an RoS, a reduction of service, rather than a DoS. And
> any time we allow an unprivileged user on our system, we have RoS
> potential :)
> 
>  
> > > I think we basically just have to say that locking primitives
> > > should be somewhat fair and locks shouldn't be held for too long;
> > > then it should more or less work.
> > 
> > Sure, it'll more or less work, but he's basically making rq->lock a
> > global lock instead of a per-cpu lock.
> > 
> > > If the locks are getting contended, then the threads calling
> > > sys_membarrier are going to be spinning longer too, using more CPU time,
> > > and will get scheduled away...
> > 
> > Sure, and increased spinning reduces the total throughput.
> > 
> > > If there is some particular problem on -rt because of the rq
> > > locks, then I guess you could consider whether to add more
> > > overhead to your ctxsw path to reduce the problem, or simply not
> > > support sys_membarrier for unprivileged users in the first place.
> > 
> > Right, for -rt we might need to do that, but it's just that rq->lock
> > is a very hot lock, and adding basically unlimited thrashing to it
> > didn't seem like a good idea.
> > 
> > Also, I'm thinking that making it a privileged syscall basically
> > renders it useless for Mathieu.
> 
> Well, I just mean that it's something for -rt to work out. Apps can
> still work if the call is completely unsupported.

OK, so we seem to be settling on the spinlock-based sys_membarrier()
this time around, which is much less intrusive in terms of scheduler
fast-path modification, but adds more overhead each time
sys_membarrier() is called. This trade-off makes sense to me, as we
expect the scheduler to execute _much_ more often than
sys_membarrier().
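
For reference, the spinlock-based scheme is roughly the following. This
is only a sketch, not the actual patch: membarrier_ipi is an
illustrative name, the calling CPU's own barriers and error handling
are elided, and the exact locking rules are the ones under discussion
in this thread.

	/*
	 * Sketch only -- assumes it lives in kernel/sched.c, where
	 * struct rq and cpu_rq() are visible. Not the v9 patch.
	 */
	static void membarrier_ipi(void *unused)
	{
		smp_mb();	/* full barrier on each target CPU */
	}

	SYSCALL_DEFINE0(membarrier)
	{
		cpumask_var_t mask;
		int cpu;

		if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
			return -ENOMEM;

		for_each_online_cpu(cpu) {
			struct rq *rq = cpu_rq(cpu);

			/*
			 * Hold rq->lock just long enough to read
			 * rq->curr safely. This is the per-syscall
			 * lock and cacheline traffic discussed above.
			 */
			raw_spin_lock_irq(&rq->lock);
			if (rq->curr->mm && rq->curr->mm == current->mm)
				cpumask_set_cpu(cpu, mask);
			raw_spin_unlock_irq(&rq->lock);
		}

		preempt_disable();
		smp_call_function_many(mask, membarrier_ipi, NULL, 1);
		preempt_enable();

		free_cpumask_var(mask);
		return 0;
	}
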

When I get confirmation that's the route to follow from both of you,
I'll go back to the spinlock-based scheme for v9.

Thanks,

Mathieu

>  
> 
> > Anyway, it might be that I'm just paranoid... but archs with large
> > core counts and lazy TLB flush seem particularly vulnerable.

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68