Message-ID: <alpine.LFD.2.02.1107230917160.2702@ionos>
Date:	Sat, 23 Jul 2011 09:22:06 +0200 (CEST)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Andrew Morton <akpm@...ux-foundation.org>
cc:	LKML <linux-kernel@...r.kernel.org>,
	John Stultz <john.stultz@...aro.org>,
	Ingo Molnar <mingo@...e.hu>,
	Ben Greear <greearb@...delatech.com>, stable@...nel.org
Subject: Re: [patch 2/3] rtc: Fix hrtimer deadlock

On Fri, 22 Jul 2011, Andrew Morton wrote:

> On Fri, 22 Jul 2011 09:12:51 -0000
> Thomas Gleixner <tglx@...utronix.de> wrote:
> 
> > Ben reported a lockup related to rtc. The lockup happens due to:
> > 
> > CPU0                                        CPU1
> > 
> > rtc_irq_set_state()			    __run_hrtimer()	
> >   spin_lock_irqsave(&rtc->irq_task_lock)    rtc_handle_legacy_irq();
> > 					      spin_lock(&rtc->irq_task_lock);
> >   hrtimer_cancel()
> >     while (callback_running);
> > 
> > So the running callback never finishes as it's blocked on
> > rtc->irq_task_lock.  
> > 
> > Use hrtimer_try_to_cancel() instead and drop rtc->irq_task_lock while
> > waiting for the callback. Fix this for both rtc_irq_set_state() and
> > rtc_irq_set_freq().
> > 
> > ...
> >
> > +static int rtc_update_hrtimer(struct rtc_device *rtc, int enabled)
> > +{
> > +	/*
> > +	 * We unconditionally cancel the timer here, because otherwise
> 
> The comment seems wrong.  If hrtimer_try_to_cancel() fails, we simply
> bail out, so we did not "unconditionally cancel the timer"?

Well, what I meant is that we cancel it before we start it. That's
required for self-rearming timers. Will reword.
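
For illustration, a minimal sketch of a self-rearming hrtimer callback
of the kind the RTC PIE timer uses (not part of the patch; the names
example_period and example_pie_callback are made up for this sketch):

	#include <linux/hrtimer.h>
	#include <linux/ktime.h>

	static ktime_t example_period;

	static enum hrtimer_restart example_pie_callback(struct hrtimer *timer)
	{
		/* ... deliver the periodic event ... */

		/* Push the expiry forward one period and re-queue ourselves. */
		hrtimer_forward_now(timer, example_period);
		return HRTIMER_RESTART;
	}

While such a callback is running, the timer is in
HRTIMER_STATE_CALLBACK. If hrtimer_start() is called on it before the
callback has returned HRTIMER_RESTART, the core can trip the BUG_ON()
quoted below, which is why the new rtc_update_hrtimer() cancels the
timer before (re)starting it.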
 
> > +	 * we could run into BUG_ON(timer->state != HRTIMER_STATE_CALLBACK);
> > +	 * when we manage to start the timer before the callback
> > +	 * returns HRTIMER_RESTART.
> > +	 *
> > +	 * We cannot use hrtimer_cancel() here as a running callback
> > +	 * could be blocked on rtc->irq_task_lock and hrtimer_cancel()
> > +	 * would spin forever.
> > +	 */
> > +	if (hrtimer_try_to_cancel(&rtc->pie_timer) < 0)
> > +		return -1;
> > +
> > +	if (enabled) {
> > +		ktime_t period = ktime_set(0, NSEC_PER_SEC / rtc->irq_freq);
> > +
> > +		hrtimer_start(&rtc->pie_timer, period, HRTIMER_MODE_REL);
> > +	}
> > +	return 0;
> > +}
> > +
> >  /**
> >   * rtc_irq_set_state - enable/disable 2^N Hz periodic IRQs
> >   * @rtc: the rtc device
> > @@ -651,24 +674,21 @@ int rtc_irq_set_state(struct rtc_device 
> >  	int err = 0;
> >  	unsigned long flags;
> >  
> > +retry:
> >  	spin_lock_irqsave(&rtc->irq_task_lock, flags);
> >  	if (rtc->irq_task != NULL && task == NULL)
> >  		err = -EBUSY;
> >  	if (rtc->irq_task != task)
> >  		err = -EACCES;
> > -	if (err)
> > -		goto out;
> > -
> > -	if (enabled) {
> > -		ktime_t period = ktime_set(0, NSEC_PER_SEC/rtc->irq_freq);
> > -		hrtimer_start(&rtc->pie_timer, period, HRTIMER_MODE_REL);
> > -	} else {
> > -		hrtimer_cancel(&rtc->pie_timer);
> > +	if (!err) {
> > +		if (rtc_update_hrtimer(rtc, enabled) < 0) {
> > +			spin_unlock_irqrestore(&rtc->irq_task_lock, flags);
> > +			cpu_relax();
> > +			goto retry;
> > +		}
> > +		rtc->pie_enabled = enabled;
> 
> Well this is rather nasty.  Sort of an open-coded expensive spinlock. 
> All rather pointless on SMP=n builds, too.
> 
> Is there no better way, such as fixing up the locking properly?

Probably there is, but that requires a rather large patch and a
complete locking rewrite, which is nothing you want to push back into
stable. And we want this fix now, as the deadlock has already been
observed and reported.
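
For context on the "open-coded expensive spinlock" point above:
hrtimer_cancel() itself is roughly the following loop (paraphrased
from kernel/hrtimer.c of that era, not copied verbatim):

	int hrtimer_cancel(struct hrtimer *timer)
	{
		/* Spin until the callback has finished running. */
		for (;;) {
			int ret = hrtimer_try_to_cancel(timer);

			if (ret >= 0)
				return ret;
			cpu_relax();
		}
	}

The patch performs essentially the same wait, but with
rtc->irq_task_lock dropped between attempts, so the callback spinning
on that lock in rtc_handle_legacy_irq() can take it, finish, and let
the next hrtimer_try_to_cancel() succeed.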

