Message-ID: <1440632657.32300.33.camel@j-VirtualBox>
Date:	Wed, 26 Aug 2015 16:44:17 -0700
From:	Jason Low <jason.low2@...com>
To:	George Spelvin <linux@...izon.com>
Cc:	linux-kernel@...r.kernel.org, jason.low2@...com
Subject: Re: [PATCH 3/3] timer: Reduce unnecessary sighand lock contention

On Wed, 2015-08-26 at 15:33 -0400, George Spelvin wrote:
> And some more comments on the series...
> 
> > @@ -626,6 +628,7 @@ struct task_cputime_atomic {
> >  struct thread_group_cputimer {
> > 	struct task_cputime_atomic cputime_atomic;
> > 	int running;
> >+	int checking_timer;
> > };
> 
> Why not make both running and checking_timer actual booleans, so the
> pair is only 16 bits rather than 64?
> 
> I suppose since sum_exec_runtime in struct task_cputime_atomic is 64 bits,
> alignment means there's no actual improvement.  It's just my personal
> preference (Linus disagrees with me!) for the bool type for documentation.
> 
> (Compile-tested patch appended.)

I can include your patch in the series and then use a boolean for the
new checking_timer field. However, it looks like your patch applies to
an older kernel; for example, the spin_lock field has already been
removed from the structure.

> But when it comes to really understanding this code, I'm struggling.
> It seems that this changes the return value of fastpath_timer_check.
> I'm trying to wrap my head around exactly why that is safe.
> 
> Any time I see things like READ_ONCE and WRITE_ONCE, I wonder if there
> need to be memory barriers as well.  (Because x86 has strong default
> memory ordering, testing there doesn't prove the code right on other
> platforms.)
> 
> For the writes, the surrounding spinlock does the job.
> 
> But on the read side (fastpath_timer_check), I'm not sure
> what the rules should be.
> 
> Or is it basically okay if this is massively racy, since process-wide
> CPU timers are inherently sloppy?  A race will just cause an expiration
> check to be missed, but it will be retried soon anyway.

Yes, the worst-case scenario is that we wait for the next thread to
come along and handle the next expired timer. However, this "delay"
can already occur now (for example, a timer can expire right after a
thread exits check_process_timers()).
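To make the tolerated race concrete, a hedged sketch of how a lockless
fast path might consult the new flag (simplified; the real
fastpath_timer_check() also samples the group's cputime and differs
across kernel versions):

	/* Lockless check: a stale read of either flag at worst delays
	 * expiry handling until the next tick, which is acceptable. */
	if (READ_ONCE(sig->cputimer.running) &&
	    !READ_ONCE(sig->cputimer.checking_timer))
		return true;	/* caller takes sighand lock, runs check_process_timers() */
	return false;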

> Other half-baked thoughts that went through my head:
> 
> If the problem is that you have contention for read access to
> sig->cputimer.cputime, can that be changed to a seqlock?
> 
> Another possibility is to use a trylock before testing
> checking_timer:
> 
> 	if (sig->cputimer.running) {
> 		struct task_cputime group_sample;
> 
> -		raw_spin_lock(&sig->cputimer.lock);
> +		if (!raw_spin_trylock(&sig->cputimer.lock)) {
> +			if (READ_ONCE(sig->cputimer.checking_timer))
> +				return false;	/* Lock holder's doing it for us */
> +			raw_spin_lock(&sig->cputimer.lock);
> +		}

The spinlock has already been removed by a previous patch. The issue
now is contention on the sighand lock.
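For completeness, the seqlock alternative floated above would look
roughly like this on the read side (illustrative only: cputime_seq is
a hypothetical field, and as noted the raw spinlock it would have
paired with is already gone):

	unsigned int seq;
	struct task_cputime sample;

	do {
		seq = read_seqbegin(&sig->cputimer.cputime_seq);	/* hypothetical seqlock_t */
		sample = sig->cputimer.cputime;
	} while (read_seqretry(&sig->cputimer.cputime_seq, seq));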

Thanks,
Jason
