Date:	Wed, 28 Apr 2010 14:36:54 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Don Zickus <dzickus@...hat.com>
Cc:	mingo@...e.hu, peterz@...radead.org, gorcunov@...il.com,
	aris@...hat.com, linux-kernel@...r.kernel.org,
	randy.dunlap@...cle.com
Subject: Re: [PATCH 1/8] [watchdog] combine nmi_watchdog and softlockup

On Fri, Apr 23, 2010 at 12:13:29PM -0400, Don Zickus wrote:
> +void watchdog_overflow_callback(struct perf_event *event, int nmi,
> +		 struct perf_sample_data *data,
> +		 struct pt_regs *regs)
> +{
> +	int this_cpu = smp_processor_id();
> +	unsigned long touch_ts = per_cpu(watchdog_touch_ts, this_cpu);
> +	char warn = __get_cpu_var(watchdog_warn);
> +
> +	if (touch_ts == 0) {
> +		__touch_watchdog();
> +		return;
> +	}
> +
> +	/* check for a hardlockup
> +	 * This is done by making sure our timer interrupt
> +	 * is incrementing.  The timer interrupt should have
> +	 * fired multiple times before we overflow'd.  If it hasn't
> +	 * then this is a good indication the cpu is stuck
> +	 */
> +	if (is_hardlockup(this_cpu)) {
> +		/* only print hardlockups once */
> +		if (warn & HARDLOCKUP)
> +			return;
> +
> +		if (hardlockup_panic)
> +			panic("Watchdog detected hard LOCKUP on cpu %d", this_cpu);
> +		else
> +			WARN(1, "Watchdog detected hard LOCKUP on cpu %d", this_cpu);
> +
> +		__get_cpu_var(watchdog_warn) = warn | HARDLOCKUP;
> +		return;
> +	}
> +
> +	__get_cpu_var(watchdog_warn) = warn & ~HARDLOCKUP;
> +	return;
> +}
[...]
> +static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
> +{
> +	int this_cpu = smp_processor_id();
> +	unsigned long touch_ts = __get_cpu_var(watchdog_touch_ts);
> +	char warn = __get_cpu_var(watchdog_warn);
> +	struct pt_regs *regs = get_irq_regs();
> +	int duration;
> +
> +	/* kick the hardlockup detector */
> +	watchdog_interrupt_count();
> +
> +	/* kick the softlockup detector */
> +	wake_up_process(__get_cpu_var(softlockup_watchdog));
> +
> +	/* .. and repeat */
> +	hrtimer_forward_now(hrtimer, ns_to_ktime(get_sample_period()));
> +
> +	if (touch_ts == 0) {
> +		__touch_watchdog();
> +		return HRTIMER_RESTART;
> +	}
> +
> +	/* check for a softlockup
> +	 * This is done by making sure a high priority task is
> +	 * being scheduled.  The task touches the watchdog to
> +	 * indicate it is getting cpu time.  If it hasn't then
> +	 * this is a good indication some task is hogging the cpu
> +	 */
> +	duration = is_softlockup(touch_ts, this_cpu);
> +	if (unlikely(duration)) {
> +		/* only warn once */
> +		if (warn & SOFTLOCKUP)
> +			return HRTIMER_RESTART;
> +
> +		printk(KERN_ERR "BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
> +			this_cpu, duration,
> +			current->comm, task_pid_nr(current));
> +		print_modules();
> +		print_irqtrace_events(current);
> +		if (regs)
> +			show_regs(regs);
> +		else
> +			dump_stack();
> +
> +		if (softlockup_panic)
> +			panic("softlockup: hung tasks");
> +		__get_cpu_var(watchdog_warn) = warn | SOFTLOCKUP;
> +	} else
> +		__get_cpu_var(watchdog_warn) = warn & ~SOFTLOCKUP;


Note these watchdog_warn modifications are racy against the ones done in the
HARDLOCKUP (NMI) path: the hrtimer path can clear the bit the NMI just set.

The race is harmless enough that we probably don't care much, but that's
why it would have made sense to keep separate watchdog_warn tracking for
each detector.
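
Something like the sketch below (untested, hypothetical names) is what I
mean: give each detector its own per-cpu flag so the NMI path and the
hrtimer path never read-modify-write the same variable.

	/* one flag per detector instead of a shared bitmask */
	static DEFINE_PER_CPU(bool, hard_watchdog_warn);
	static DEFINE_PER_CPU(bool, soft_watchdog_warn);

	/* NMI overflow callback: only ever touches hard_watchdog_warn */
	if (is_hardlockup(this_cpu)) {
		if (__get_cpu_var(hard_watchdog_warn))
			return;
		WARN(1, "Watchdog detected hard LOCKUP on cpu %d", this_cpu);
		__get_cpu_var(hard_watchdog_warn) = true;
		return;
	}
	__get_cpu_var(hard_watchdog_warn) = false;

	/* hrtimer callback: only ever touches soft_watchdog_warn */
	if (unlikely(duration)) {
		if (__get_cpu_var(soft_watchdog_warn))
			return HRTIMER_RESTART;
		printk(KERN_ERR "BUG: soft lockup - CPU#%d stuck for %us!\n",
			this_cpu, duration);
		__get_cpu_var(soft_watchdog_warn) = true;
	} else
		__get_cpu_var(soft_watchdog_warn) = false;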


