Date:	Mon, 28 Jul 2008 16:39:15 -0700 (PDT)
From:	Roland McGrath <roland@...hat.com>
To:	Oleg Nesterov <oleg@...sign.ru>
Cc:	akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
	mingo@...e.hu, linux-kernel@...r.kernel.org
Subject: Re: Q: wait_task_inactive() and !CONFIG_SMP && CONFIG_PREEMPT

> If it is preempted by the parent which does ptrace_check_attach(),
> wait_task_inactive() must wait until the child leaves the runqueue,
> but the dummy version just returns success.

I see your point.

> sys_ptrace() continues assuming that the child sleeps in TASK_TRACED,
> while in fact it is running, despite its ->state == TASK_TRACED.

For ptrace, the only real expectation has ever been that the tracee is no
longer on the physical CPU, i.e. that we are not racing with the context
switch itself.  On a uniprocessor, such a race can of course never happen.
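
To make that concrete, here is a rough, hypothetical sketch of a
ptrace_check_attach()-style caller (simplified: no tasklist_lock, no kill
case); all it relies on is that a nonzero return from wait_task_inactive()
means the tracee is really off the CPU, so its saved register and thread
state is stable in memory:

	static int check_traced_child_stopped(struct task_struct *child)
	{
		if (child->state != TASK_TRACED)
			return -ESRCH;
		/* Wait until the child has actually left the CPU/runqueue. */
		if (!wait_task_inactive(child, TASK_TRACED))
			return -ESRCH;
		return 0;
	}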

The historical picture is that the preemption issue wasn't thought about
much.  ptrace has always used lock_kernel(), and mostly this implied
disabling preemption anyway (there was CONFIG_PREEMPT_BKL for a while).
So it's moot there.

Even if preemption could affect ptrace here, it's not a problem.  All that
matters is that the tracee is not going to run any code that changes the
thread machine state ptrace accesses (pt_regs, thread.foo, etc.).  If ptrace
gets preempted, the tracee gets switched in and back out, and the
ptrace-calling thread is switched back in again, there is no problem.  All
the flutter on the kernel memory ptrace might touch took place during the
context switches themselves, and every byte was back in the same place
between when ptrace got preempted and when it resumed at its next instruction.

I can't speak to the kthread case.  I suspect that set_task_cpu() is always
safe on !SMP PREEMPT, and that's why it's fine.  But I'm not really sure.

> I refer to this part of the comment:
> 
> 	If a second call a short while later returns the same number, the
> 	caller can be sure that @p has remained unscheduled the whole time.
> 
> The dummy version always returns the same number == 1.

Right.  In the general case, where this is the contract wait_task_inactive()
is expected to meet, it does matter.  I think task_current_syscall() does
want this checked in the preempted-uniprocessor case, for example.
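
For illustration, a rough sketch of that double-call pattern, loosely what
task_current_syscall() relies on; sample_task_state() is just a hypothetical
stand-in for whatever the caller wants to read out of the stopped task:

	static int sample_if_inactive(struct task_struct *target, long state)
	{
		unsigned long ncsw;

		ncsw = wait_task_inactive(target, state);
		if (!ncsw)
			return -EAGAIN;		/* never settled in 'state' */

		sample_task_state(target);	/* hypothetical helper */

		/*
		 * A matching nonzero cookie from the second call means the
		 * task was not scheduled in between, so the sample is
		 * consistent.  A dummy that always returns 1 defeats this
		 * check on a preemptible uniprocessor.
		 */
		if (wait_task_inactive(target, state) != ncsw)
			return -EAGAIN;
		return 0;
	}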

> So. I think that wait_task_inactive() needs "defined(SMP) || defined(PREEMPT)"
> and the dummy version should return ->nvcsw too.

Is this what we want?

	#ifdef CONFIG_SMP
	extern unsigned long wait_task_inactive(struct task_struct *, long);
	#else
	static inline unsigned long wait_task_inactive(struct task_struct *p,
						       long match_state)
	{
		unsigned long ret = 0;
		if (match_state) {
			preempt_disable();
			if (p->state == match_state)
				/* nvcsw-based cookie, kept nonzero */
				ret = (p->nvcsw << 1) | 1;
			preempt_enable();
		}
		return ret;
	}
	#endif


Thanks,
Roland