Message-ID: <20090122202550.GA5726@redhat.com>
Date:	Thu, 22 Jan 2009 21:25:50 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Chris Mason <chris.mason@...cle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Matthew Wilcox <matthew@....cx>,
	Chuck Lever <cel@...i.umich.edu>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC v4] wait: prevent waiter starvation in __wait_on_bit_lock

On 01/21, Johannes Weiner wrote:
>
> @@ -187,6 +187,31 @@ __wait_on_bit_lock(wait_queue_head_t *wq, struct wait_bit_queue *q,
>  		}
>  	} while (test_and_set_bit(q->key.bit_nr, q->key.flags));
>  	finish_wait(wq, &q->wait);
> +	if (unlikely(ret)) {
> +		/*
> +		 * Contenders are woken exclusively.  If we were woken
> +		 * by an unlock we have to take the lock ourselves and
> +		 * wake the next contender on unlock.  But the waiting
> +		 * function failed: we do not take the lock and will
> +		 * not unlock in the future.  Make sure the next contender
> +		 * does not wait forever on an unlocked bit.
> +		 *
> +		 * We can also get here without being woken through
> +		 * the waitqueue, so there is a small chance of doing a
> +		 * bogus wake up between an unlock clearing the bit and
> +		 * the next contender being woken up and setting it again.
> +		 *
> +		 * It does no harm, though: the scheduler will ignore it,
> +		 * as the process in question is already running.
> +		 *
> +		 * The unlock path clears the bit and then wakes up the
> +		 * next contender.  If the next contender is us, the
> +		 * barrier makes sure we also see the bit cleared.
> +		 */
> +		smp_rmb();
> +		if (!test_bit(q->key.bit_nr, q->key.flags))
> +			__wake_up_bit(wq, q->key.flags, q->key.bit_nr);

I think this is correct, and (unfortunately ;) you are right:
we need rmb() even after finish_wait().
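
For reference, a minimal sketch of the unlock side that this rmb
pairs with; example_unlock() is made up for illustration, but
unlock_page() at the time followed the same clear-then-wake pattern:

	static void example_unlock(unsigned long *word, int bit,
				   wait_queue_head_t *wq)
	{
		clear_bit_unlock(bit, word);	/* drop the lock bit */
		smp_mb__after_clear_bit();	/* clear must be visible
						 * before the wakeup */
		__wake_up_bit(wq, word, bit);	/* wake next exclusive waiter */
	}

The waiter's smp_rmb() pairs with the barrier here, so a contender
woken from this path is guaranteed to also see the bit cleared.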

And we have to check ret twice, and the false wakeup is still
possible. This is minor, but just for discussion: can't we do
this differently?

	int finish_wait_xxx(wait_queue_head_t *q, wait_queue_t *wait)
	{
		unsigned long flags;
		int woken;

		__set_current_state(TASK_RUNNING);
		spin_lock_irqsave(&q->lock, flags);
		/*
		 * The wake function removed us from the list, so an
		 * empty entry means we consumed a real wakeup.
		 */
		woken = list_empty(&wait->task_list);
		list_del_init(&wait->task_list);
		spin_unlock_irqrestore(&q->lock, flags);

		return woken;
	}

Now, __wait_on_bit_lock() does:

		if (test_bit(q->key.bit_nr, q->key.flags)) {
			if ((ret = (*action)(q->key.flags))) {
				if (finish_wait_xxx(...))
					__wake_up_bit(...);
				return ret;
			}
		}

Or we can introduce

	int finish_wait_yyy(wait_queue_head_t *q, wait_queue_t *wait,
				int mode, void *key)
	{
		unsigned long flags;
		int woken;

		__set_current_state(TASK_RUNNING);
		spin_lock_irqsave(&q->lock, flags);
		woken = list_empty(&wait->task_list);
		if (woken)
			/* pass the consumed wakeup on, still under q->lock */
			__wake_up_common(q, mode, 1, 0, key);
		else
			list_del_init(&wait->task_list);
		spin_unlock_irqrestore(&q->lock, flags);

		return woken;
	}
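
With finish_wait_yyy() the error path in __wait_on_bit_lock() would
need neither the rmb nor the test_bit, since the hand-off happens
while the waitqueue lock is held. A sketch, reusing the names from
the hunk above (assuming __wait_on_bit_lock()'s sleep-mode argument
is named mode):

		if ((ret = (*action)(q->key.flags))) {
			/* a consumed wakeup is handed to the next
			 * contender inside finish_wait_yyy() */
			finish_wait_yyy(wq, &q->wait, mode, &q->key);
			return ret;
		}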

Perhaps a bit too much for this particular case, but I am thinking
about other cases where we need to abort an exclusive wait.

For example, don't we have a similar problem with
wait_event_interruptible_exclusive() ?
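
To illustrate, __wait_event_interruptible_exclusive() currently
looks roughly like this (quoting include/linux/wait.h from memory,
so treat it as a sketch):

	#define __wait_event_interruptible_exclusive(wq, condition, ret)	\
	do {									\
		DEFINE_WAIT(__wait);						\
										\
		for (;;) {							\
			prepare_to_wait_exclusive(&wq, &__wait,			\
						  TASK_INTERRUPTIBLE);		\
			if (condition)						\
				break;						\
			if (!signal_pending(current)) {				\
				schedule();					\
				continue;					\
			}							\
			ret = -ERESTARTSYS;					\
			break;							\
		}								\
		finish_wait(&wq, &__wait);					\
	} while (0)

If a signal is already pending when an exclusive wakeup removes us
from the queue, we break out with -ERESTARTSYS having consumed the
only wakeup, and finish_wait() silently drops it - the same
lost-wakeup pattern as above.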

Oleg.
