Message-ID: <20100203194350.GA13824@redhat.com>
Date:	Wed, 3 Feb 2010 20:43:50 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Simon Kagstrom <simon.kagstrom@...insight.net>
Cc:	linux-kernel@...r.kernel.org, laijs@...fujitsu.com,
	rusty@...tcorp.com.au, tj@...nel.org, akpm@...ux-foundation.org,
	mingo@...e.hu
Subject: Re: [PATCH] core: workqueue: BUG_ON on workqueue recursion

On 02/03, Simon Kagstrom wrote:
>
> When the workqueue is flushed from workqueue context (recursively), the
> system enters a strange state where things at random (dependent on the
> global workqueue) start misbehaving. For example, for us the console and
> logins lock up while the web server continues running.
>
> Since the system becomes unstable, change this to a BUG_ON instead.

I agree with this patch. We are going to deadlock anyway: if the
condition is true, the caller is cwq->current_work, which means
flush_cpu_workqueue() will insert the barrier and hang.

However,

> @@ -482,7 +482,7 @@ static int flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
>  	int active = 0;
>  	struct wq_barrier barr;
>
> -	WARN_ON(cwq->thread == current);
> +	BUG_ON(cwq->thread == current);

Another option is change the code to do

	if (WARN_ON(cwq->thread == current))
		return;

This gives the kernel a chance to survive after the warning.

What do you think?

Oleg.

