Message-Id: <20080801012747.7E17F15427E@magilla.localdomain>
Date:	Thu, 31 Jul 2008 18:27:47 -0700 (PDT)
From:	Roland McGrath <roland@...hat.com>
To:	Oleg Nesterov <oleg@...sign.ru>
Cc:	akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
	mingo@...e.hu, linux-kernel@...r.kernel.org
Subject: Re: Q: wait_task_inactive() and !CONFIG_SMP && CONFIG_PREEMPT

> I don't think this is right.
> 
> Firstly, the above always fails if match_state == 0, this is not right.

A call with 0 is the "legacy case": the return value is 0 and nothing but
the traditional wait_task_inactive behavior is expected.  On UP, this was
a no-op before and still is.

Anyway, this is moot since we are soon to have no callers that pass 0.

> But more importantly, we can't just check ->state == match_state. And
> preempt_disable() buys nothing.

It ensures that the samples of ->state and ->nvcsw both came while the
target could never have run in between.  Without it, a preemption after the
->state check could mean the ->nvcsw value we use is from a later block in
a different state than the one intended.

> Let's look at task_current_syscall(). The "target" can set, say,
> TASK_UNINTERRUPTIBLE many times, do a lot of syscalls, and not once
> call schedule().
> 
> And the task remains fully preemptible even if it runs in
> TASK_UNINTERRUPTIBLE state.

One of us is missing something basic.  We are on the only CPU.  If the
target does *anything*, it means we got preempted, the target switched in,
did things, and then called schedule (possibly via preemption)--only then
could we be running again now.  That schedule call bumped the counter after
we sampled it.  The second call, done for "is it still blocked
afterwards?", will see a different count and abort.  Am I confused?

Ah, I think it was me who was missing something when I let you talk me into
checking only ->nvcsw.  It really should be ->nivcsw + ->nvcsw as I had it
originally (| LONG_MIN as you've done, a good trick).  That makes what I
just said true in the preemption case.  This bit:

	if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {

will not hit, so switch_count = &prev->nivcsw; remains from before.
This is why it was nivcsw + nvcsw to begin with.

What am I missing here?


Thanks,
Roland
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
