Date:   Mon, 20 Jul 2020 15:26:16 +0100
From:   Valentin Schneider <valentin.schneider@....com>
To:     peterz@...radead.org
Cc:     Oleg Nesterov <oleg@...hat.com>, Jiri Slaby <jirislaby@...nel.org>,
        Christian Brauner <christian.brauner@...ntu.com>,
        christian@...uner.io, "Eric W. Biederman" <ebiederm@...ssion.com>,
        Linux kernel mailing list <linux-kernel@...r.kernel.org>,
        Mel Gorman <mgorman@...e.de>,
        Dave Jones <davej@...emonkey.org.uk>,
        Paul Gortmaker <paul.gortmaker@...driver.com>
Subject: Re: 5.8-rc*: kernel BUG at kernel/signal.c:1917


On 20/07/20 14:17, peterz@...radead.org wrote:
> On Mon, Jul 20, 2020 at 01:20:26PM +0100, Valentin Schneider wrote:
>> On 20/07/20 12:26, peterz@...radead.org wrote:
>
>> > +	/*
>> > +	 * We must re-load prev->state in case ttwu_remote() changed it
>> > +	 * before we acquired rq->lock.
>> > +	 */
>> > +	tmp_state = prev->state;
>> > +	if (unlikely(prev_state != tmp_state)) {
>> > +		/*
>> > +		 * ptrace_{,un}freeze_traced() think it is cool to change
>> > +		 * ->state around behind our backs between TASK_TRACED and
>> > +		 *  __TASK_TRACED.
>> > +		 *
>> > +		 * This is safe because this, as well as any __TASK_TRACED
>> > +		 * wakeups are under siglock.
>> > +		 *
>> > +		 * For any other case, a changed prev_state must be to
>> > +		 * TASK_RUNNING, such that when it blocks, the load has
>> > +		 * happened before the smp_mb().
>> > +		 *
>> > +		 * Also see the comment with deactivate_task().
>> > +		 */
>> > +		SCHED_WARN_ON(tmp_state && (prev_state & __TASK_TRACED &&
>> > +					   !(tmp_state & __TASK_TRACED)));
>> > +
>>
>> IIUC if the state changed and isn't TASK_RUNNING it *has* to have
>> __TASK_TRACED, so can't that be
>>
>>   SCHED_WARN_ON(tmp_state && !(tmp_state & __TASK_TRACED));
>
> Suppose task->state == TASK_UNINTERRUPTIBLE, and task != current, and
> then someone goes and does task->state = __TASK_TRACED.
>
> That is, your statement is correct given the current code, but we also
> want to verify no new code comes along and does something 'creative'.
>

That is what I was trying to go for; AFAICT your approach only warns if
__TASK_TRACED gets removed between the two loads (IOW, only when it was
already set in prev_state). The way I was seeing it is:

- We only get here if prev_state != tmp_state; IOW we know we raced with
  something
- if (tmp_state), then it wasn't with ttwu_remote()
- thus it must only be with ptrace shenanigans, IOW __TASK_TRACED must be
  there.
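
To convince myself, here's a quick user-space sketch of the two checks;
the flag values are stand-ins copied from include/linux/sched.h circa 5.8,
and the warn_*() helpers are just local substitutes for SCHED_WARN_ON(),
not anything from the actual patch:

  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-in state flag values, as in include/linux/sched.h (5.8). */
  #define TASK_RUNNING          0x0000
  #define TASK_INTERRUPTIBLE    0x0001
  #define TASK_UNINTERRUPTIBLE  0x0002
  #define __TASK_TRACED         0x0008
  #define TASK_WAKEKILL         0x0100
  #define TASK_TRACED           (TASK_WAKEKILL | __TASK_TRACED)

  /* Your check: fires only when __TASK_TRACED gets removed. */
  static bool warn_peterz(long prev_state, long tmp_state)
  {
          return tmp_state && (prev_state & __TASK_TRACED &&
                               !(tmp_state & __TASK_TRACED));
  }

  /* My suggestion: any change away from RUNNING must keep __TASK_TRACED. */
  static bool warn_valentin(long prev_state, long tmp_state)
  {
          return tmp_state && !(tmp_state & __TASK_TRACED);
  }

  int main(void)
  {
          /* (prev_state, tmp_state) pairs seen across the two loads. */
          struct { long prev, tmp; const char *what; } c[] = {
                  { TASK_TRACED,          TASK_RUNNING,       "ttwu_remote() won the race" },
                  { TASK_TRACED,          __TASK_TRACED,      "ptrace_freeze_traced()" },
                  { TASK_UNINTERRUPTIBLE, TASK_INTERRUPTIBLE, "someone poking ->state" },
                  { TASK_UNINTERRUPTIBLE, __TASK_TRACED,      "'creative' new code" },
          };

          for (unsigned int i = 0; i < sizeof(c) / sizeof(c[0]); i++)
                  printf("%-26s peterz:%d valentin:%d\n", c[i].what,
                         warn_peterz(c[i].prev, c[i].tmp),
                         warn_valentin(c[i].prev, c[i].tmp));
          return 0;
  }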

Now, what I suggested still doesn't detect what you pointed out, or some
crazier thing that sets __TASK_TRACED *and* some other stuff. IIUC the
ptrace transformation does TASK_TRACED -> __TASK_TRACED, so we could have
it as:

  /* TODO: name me */
  #define foobar (TASK_TRACED ^ __TASK_TRACED)

  ...

  /* not TASK_RUNNING; check against allowed transformations */
  SCHED_WARN_ON(tmp_state && ((prev_state ^ tmp_state) & ~foobar));
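
To sanity-check that mask (again purely user-space, with the same
stand-in flag values as in the sketch above; note that with those
values foobar is just TASK_WAKEKILL):

  #include <stdbool.h>
  #include <stdio.h>

  /* Same stand-in flag values as in the sketch above. */
  #define TASK_UNINTERRUPTIBLE  0x0002
  #define __TASK_TRACED         0x0008
  #define TASK_WAKEKILL         0x0100
  #define TASK_TRACED           (TASK_WAKEKILL | __TASK_TRACED)

  /* TODO: name me */
  #define foobar (TASK_TRACED ^ __TASK_TRACED)   /* == TASK_WAKEKILL */

  /* not TASK_RUNNING; only the WAKEKILL bit may differ across the loads */
  static bool warn_xor(long prev_state, long tmp_state)
  {
          return tmp_state && ((prev_state ^ tmp_state) & ~foobar);
  }

  int main(void)
  {
          /* legitimate ptrace freeze: stays quiet */
          printf("TASK_TRACED -> __TASK_TRACED:          %d\n",
                 warn_xor(TASK_TRACED, __TASK_TRACED));
          /* the 'creative' case you described: fires */
          printf("TASK_UNINTERRUPTIBLE -> __TASK_TRACED: %d\n",
                 warn_xor(TASK_UNINTERRUPTIBLE, __TASK_TRACED));
          return 0;
  }

IOW it stays quiet for the legitimate TASK_TRACED <-> __TASK_TRACED dance
but does fire for the TASK_UNINTERRUPTIBLE -> __TASK_TRACED case, which
neither of the earlier checks catches.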


That said...

> Or is the heat getting to me?

... that may apply to me as well :-)
