Date:	Thu, 21 Jun 2007 13:58:15 -0700 (PDT)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Ingo Molnar <mingo@...e.hu>
cc:	Eric Dumazet <dada1@...mosbay.com>,
	Chuck Ebbert <cebbert@...hat.com>,
	Jarek Poplawski <jarkao2@...pl>,
	Miklos Szeredi <miklos@...redi.hu>, chris@...ee.ca,
	linux-kernel@...r.kernel.org, tglx@...utronix.de,
	akpm@...ux-foundation.org
Subject: Re: [patch] spinlock debug: make looping nicer



On Thu, 21 Jun 2007, Ingo Molnar wrote:
> 
> btw., back then we also tried a spin_is_locked()-based inner loop, but it 
> didn't help the ->tree_lock lockups either. In any case I very much agree 
> that the 'nicer' looping should be added again - the patch below does 
> that. (build and boot tested)

Ok, I'm definitely not going to apply it right now, though.

> and the reason that this didn't help the ->tree_lock lockup is likely the 
> same as why wait_task_inactive() broke _independently_ of the 'niceness' 
> of the spin-lock operation: there were too few instructions between 
> releasing the lock and re-acquiring it again, which can cause permanent 
> starvation of another CPU. No amount of logic on the spinning side can 
> overcome this if acquire/release critical sections follow each other 
> too fast.

Exactly.

The only way to handle that case is to make sure that the person who 
*gets* the spinlock will slow down. The person who doesn't get it can't do 
anything at all about the fact that he's locked out.

A way to do that (as already mentioned) is to have a "this lock is 
contended" flag, and have the person who gets the lock do something about 
it (where the "something" might actually be as simple as saying "When I 
release a lock that somebody marked as having lots of contention, I will 
clear the contention flag, and then just delay myself").
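
In fact, the trivial version might look something like this. It's just a 
userspace sketch with C11 atomics, not actual kernel code, and the names 
and the spin threshold are made up for illustration:

#include <stdatomic.h>
#include <sched.h>

struct cflag_lock {
	atomic_int locked;	/* 0 = free, 1 = held */
	atomic_int contended;	/* set by a spinner that is starving */
};

static void cflag_lock_acquire(struct cflag_lock *l)
{
	int spins = 0;

	while (atomic_exchange_explicit(&l->locked, 1,
					memory_order_acquire)) {
		/* Spun "too long": tell the holder somebody is starving. */
		if (++spins > 10000)
			atomic_store_explicit(&l->contended, 1,
					      memory_order_relaxed);
	}
}

static void cflag_lock_release(struct cflag_lock *l)
{
	int was_contended = atomic_exchange_explicit(&l->contended, 0,
						     memory_order_relaxed);

	atomic_store_explicit(&l->locked, 0, memory_order_release);

	/*
	 * The *holder* pays the penalty: if anybody flagged contention,
	 * delay before this CPU can re-take the lock, which gives the
	 * starved CPU a window to get in.
	 */
	if (was_contended)
		sched_yield();
}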

Side note: that trivial approach only really helps when a *single* thread 
re-takes the lock over and over (like the example in wait_task_inactive). 
For true contention between multiple CPUs that can *all* show the bad 
behaviour, you do actually need real queueing.
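
And "real queueing" can be as simple as a ticket lock: arrival order 
becomes service order, so no CPU can be starved forever. Again only a 
sketch with C11 atomics and made-up names, not what the kernel actually 
does:

#include <stdatomic.h>

struct ticket_lock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint serving;	/* ticket currently being served */
};

static void ticket_lock_acquire(struct ticket_lock *l)
{
	unsigned int me = atomic_fetch_add_explicit(&l->next, 1,
						    memory_order_relaxed);

	/* FIFO: spin until our number comes up. */
	while (atomic_load_explicit(&l->serving,
				    memory_order_acquire) != me)
		;
}

static void ticket_lock_release(struct ticket_lock *l)
{
	/* Only the holder increments "serving", so a plain add is fine. */
	atomic_fetch_add_explicit(&l->serving, 1, memory_order_release);
}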

			Linus
