Message-ID: <20110922145257.GA13960@redhat.com>
Date:	Thu, 22 Sep 2011 16:52:57 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Mike Galbraith <efault@....de>,
	linux-rt-users <linux-rt-users@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	Miklos Szeredi <miklos@...redi.hu>, mingo <mingo@...hat.com>
Subject: Re: rt14: strace -> migrate_disable_atomic imbalance

On 09/22, Peter Zijlstra wrote:
>
> +static void wait_task_inactive_sched_in(struct preempt_notifier *n, int cpu)
> +{
> +	struct task_struct *p;
> +	struct wait_task_inactive_blocked *blocked =
> +		container_of(n, struct wait_task_inactive_blocked, notifier);
> +
> +	hlist_del(&n->link);
> +
> +	p = ACCESS_ONCE(blocked->waiter);
> +	blocked->waiter = NULL;
> +	wake_up_process(p);
> +}
> ...
> +static void
> +wait_task_inactive_sched_out(struct preempt_notifier *n, struct task_struct *next)
> +{
> +	if (current->on_rq) /* we're not inactive yet */
> +		return;
> +
> +	hlist_del(&n->link);
> +	n->ops = &wait_task_inactive_ops_post;
> +	hlist_add_head(&n->link, &next->preempt_notifiers);
> +}

Tricky ;) Yes, the first ->sched_out() is not enough.

>  unsigned long wait_task_inactive(struct task_struct *p, long match_state)
>  {
> ...
> +	rq = task_rq_lock(p, &flags);
> +	trace_sched_wait_task(p);
> +	if (!p->on_rq) /* we're already blocked */
> +		goto done;

This doesn't look right. schedule() clears ->on_rq long before
__switch_to() etc.

And it seems that we check ->on_cpu above; this is not UP-friendly
(->on_cpu only exists with CONFIG_SMP).

>
> -			set_current_state(TASK_UNINTERRUPTIBLE);
> -			schedule_hrtimeout(&to, HRTIMER_MODE_REL);
> -			continue;
> -		}
> +	hlist_add_head(&blocked.notifier.link, &p->preempt_notifiers);
> +	task_rq_unlock(rq, p, &flags);

I thought about reimplementing wait_task_inactive() too, but afaics there
is a problem: what prevents us from racing with p doing
register_preempt_notifier()? I guess register_ needs rq->lock too.

Oleg.

