Message-ID: <1410548191.12906.18.camel@linux-t7sj.site>
Date:	Fri, 12 Sep 2014 11:56:31 -0700
From:	Davidlohr Bueso <dave@...olabs.net>
To:	paulmck@...ux.vnet.ibm.com
Cc:	peterz@...radead.org, mingo@...nel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/9] locktorture: Support mutexes

On Fri, 2014-09-12 at 11:02 -0700, Paul E. McKenney wrote:
> On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
> > +static void torture_mutex_delay(struct torture_random_state *trsp)
> > +{
> > +	const unsigned long longdelay_ms = 100;
> > +
> > +	/* We want a long delay occasionally to force massive contention.  */
> > +	if (!(torture_random(trsp) %
> > +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> > +		mdelay(longdelay_ms * 5);
> 
> So let's see...  We wait 500 milliseconds about once per 200,000 operations
> per writer.  So if we have 5 writers, we wait 500 milliseconds per million
> operations.  So each writer will do about 200,000 operations, then there
> will be a half-second gap.  But each short operation holds the lock for
> 20 milliseconds, which takes several hours to work through the million
> operations.
> 
> So it looks to me like you are in massive contention state either way,
> at least until the next stutter interval shows up.
> 
> Is that the intent?  Or am I missing something here?

Ah, nice description. Yes, I am aiming for constant massive contention
(should have mentioned this, sorry). I believe it stresses the more
interesting parts of mutexes -- and rwsems, for that matter. If you
think it's excessive, we could decrease the large wait and/or
increase the short one. I scaled the delay by a factor based on the
default stutter value -- we could also make it always equal.
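
For reference, here is a quick userspace sketch of that arithmetic
(assuming nrealwriters = 5 as in your example; illustration only, not
kernel code):

#include <stdio.h>

int main(void)
{
	const unsigned long longdelay_ms = 100;
	const int nrealwriters = 5;	/* assumed writer count */
	const unsigned long mod = nrealwriters * 2000 * longdelay_ms;

	/* One long (longdelay_ms * 5 = 500 ms) delay per 'mod' random
	 * draws system-wide, i.e. per 200,000 ops per writer. */
	printf("one 500 ms delay per %lu total ops\n", mod);

	/* The short path holds the lock longdelay_ms / 5 = 20 ms, so
	 * working through those ops between long delays takes: */
	double hours = (double)mod * (longdelay_ms / 5.0) / 1000.0 / 3600.0;
	printf("~%.1f hours of short holds between long delays\n", hours);
	return 0;
}

Which prints one delay per million ops and ~5.6 hours, matching your
numbers -- the short path alone keeps the lock saturated in between.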

> > +	else
> > +		mdelay(longdelay_ms / 5);
> > +#ifdef CONFIG_PREEMPT
> > +	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
> > +		preempt_schedule();  /* Allow test to be preempted. */
> > +#endif
> > +}
> > +
> > +static void torture_mutex_unlock(void) __releases(torture_mutex)
> > +{
> > +	mutex_unlock(&torture_mutex);
> > +}
> > +
> > +static struct lock_torture_ops mutex_lock_ops = {
> > +	.writelock	= torture_mutex_lock,
> > +	.write_delay	= torture_mutex_delay,
> > +	.writeunlock	= torture_mutex_unlock,
> > +	.name		= "mutex_lock"
> > +};
> > +
> >  /*
> >   * Lock torture writer kthread.  Repeatedly acquires and releases
> >   * the lock, checking for duplicate acquisitions.
> > @@ -352,7 +389,7 @@ static int __init lock_torture_init(void)
> >  	int i;
> >  	int firsterr = 0;
> >  	static struct lock_torture_ops *torture_ops[] = {
> > -		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
> > +		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
> >  	};
> > 
> >  	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
> > -- 
> 
> And I queued the following patch to catch up the scripting.

Thanks! Completely overlooked the scripting bits. I'll keep it in mind
in the future.

