Date:	Fri, 12 Sep 2014 19:13:55 -0700
From:	Davidlohr Bueso <dave@...olabs.net>
To:	paulmck@...ux.vnet.ibm.com
Cc:	peterz@...radead.org, mingo@...nel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/9] locktorture: Support mutexes

On Fri, 2014-09-12 at 12:12 -0700, Paul E. McKenney wrote:
> On Fri, Sep 12, 2014 at 11:56:31AM -0700, Davidlohr Bueso wrote:
> > On Fri, 2014-09-12 at 11:02 -0700, Paul E. McKenney wrote:
> > > On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
> > > > +static void torture_mutex_delay(struct torture_random_state *trsp)
> > > > +{
> > > > +	const unsigned long longdelay_ms = 100;
> > > > +
> > > > +	/* We want a long delay occasionally to force massive contention.  */
> > > > +	if (!(torture_random(trsp) %
> > > > +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> > > > +		mdelay(longdelay_ms * 5);
> > > 
> > > So let's see...  We wait 500 milliseconds about once per 200,000 operations
> > > per writer.  So if we have 5 writers, we wait 500 milliseconds per million
> > > operations.  So each writer will do about 200,000 operations, then there
> > > will be a half-second gap.  But each short operation holds the lock for
> > > 20 milliseconds, which takes several hours to work through the million
> > > operations.
> > > 
> > > So it looks to me like you are in massive contention state either way,
> > > at least until the next stutter interval shows up.
> > > 
> > > Is that the intent?  Or am I missing something here?
> > 
> > Ah, nice description. Yes, I am aiming for constant massive contention
> > (should have mentioned this, sorry). I believe it stresses the more
> > interesting parts of mutexes -- and rwsems, for that matter. If you
> > think it's excessive, we could decrease the large wait and/or
> > increase the short one. I derived the delay factor from the default
> > stutter value -- we could also make the two always equal.
> 
> Don't get me wrong -- I am all for massive contention testing.  It is
> just that from what I can see, you aren't getting any real additional
> benefit out of the 500-millisecond wait.  Having even as few as (say)
> three tasks each repeatedly acquiring the lock and blocking for 20
> milliseconds ("else" clause below) will give you maximal contention.
> I cannot see how occasionally blocking for 500 milliseconds can do much
> of anything to increase the contention level.
> 
> Now if the common case were to acquire and then immediately release the
> lock, I could see how throwing in the occasional delay would be very
> useful.

Right, that's what we do in the case of spinlock torturing.
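
For reference, the spinlock write-side delay does roughly the following
(paraphrased sketch of torture_spin_lock_write_delay() from
locktorture.c of that era; the exact constants may differ). Most
iterations add no delay at all, i.e. acquire and release immediately,
with an occasional short delay and a rare long one:

	static void torture_spin_lock_write_delay(struct torture_random_state *trsp)
	{
		const unsigned long shortdelay_us = 2;
		const unsigned long longdelay_ms = 100;

		/* Usually no delay at all; occasionally a short delay
		 * to emulate likely code, and rarely a long delay to
		 * force massive contention. */
		if (!(torture_random(trsp) %
		      (nrealwriters_stress * 2000 * longdelay_ms)))
			mdelay(longdelay_ms);
		if (!(torture_random(trsp) %
		      (nrealwriters_stress * 2 * shortdelay_us)))
			udelay(shortdelay_us);
	}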

> But for exclusive locks, a few tens of microseconds delay would
> probably suffice to give you a maximal contention event.  Yes, you do
> have a one-jiffy delay in the lock_torture_writer() loop, but it happens
> only one loop out of one million -- and if that is what you are worried
> about, a two-jiffy delay in the critical section would -guarantee- you
> a maximal contention event in most cases.

Ok yeah, no need to increase the jiffy delay.
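
For context, the jiffy delay in question is, roughly, this check at the
top of the lock_torture_writer() loop (again paraphrased from
locktorture.c of that era; details may differ):

	/* About one loop iteration in a million sleeps for one jiffy,
	 * briefly draining the contention. */
	if ((torture_random(&rand) & 0xfffff) == 0)
		schedule_timeout_uninterruptible(1);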

> So my concern is that the large values you have are mostly slowing down
> the test and thus reducing its intensity.  But again, I could easily be
> missing something here.

You aren't. My rationale was to have long hold times plus the
occasional very-long one. I'm thinking of either removing the 500 ms
delay altogether, or decreasing both delays by ~10x. That should
distribute the contention level more evenly between the two delays, as
sketched below. Threads blocking for ~2 ms should be quite OK for us.
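
Something along these lines, say -- an untested sketch of the second
option with illustrative constants, including the short hold in the
"else" clause you mention above:

	static void torture_mutex_delay(struct torture_random_state *trsp)
	{
		const unsigned long longdelay_ms = 50;	/* was 500 */
		const unsigned long shortdelay_ms = 2;	/* was 20 */

		/* Rare long delay to force massive contention, otherwise
		 * a short hold; the long-delay frequency stays at roughly
		 * one per 200,000 operations per writer. */
		if (!(torture_random(trsp) %
		      (nrealwriters_stress * 200000)))
			mdelay(longdelay_ms);
		else
			mdelay(shortdelay_ms);
	}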

Thanks,
Davidlohr
