Open Source and information security mailing list archives
 
Date:	Fri, 8 Apr 2016 10:03:04 +0200
From:	Petr Mladek <pmladek@...e.com>
To:	Jiri Kosina <jikos@...nel.org>
Cc:	Jessica Yu <jeyu@...hat.com>, Josh Poimboeuf <jpoimboe@...hat.com>,
	Miroslav Benes <mbenes@...e.cz>, linux-kernel@...r.kernel.org,
	live-patching@...r.kernel.org, Vojtech Pavlik <vojtech@...e.com>
Subject: Re: sched: horrible way to detect whether a task has been preempted

On Fri 2016-04-08 09:05:28, Jiri Kosina wrote:
> On Thu, 7 Apr 2016, Jessica Yu wrote:
> 
> > > Alternatively, without eating up a TIF_ space, it'd be possible to push a
> > > magic contents on top of the stack in preempt_schedule_irq() (and pop it
> > > once we are returning from there), and if such magic value is detected, we
> > > just don't bother and claim unreliability.
> > 
> > Ah, but wouldn't we still have to walk through the frames (i.e. enter
> > the loop in patch 7/14) to look for the magic value in this approach?
> 
> The idea was that it'd be located at a place to which saved stack pointer 
> of the sleeping task is pointing to (or at a fixed offset from it).

It is an interesting idea, but it looks even more hacky than checking
the frame pointers and return addresses.

Checking the stack might be overkill, but we already do it for all
the other patched functions.

The big advantage of checking the stack is that it adds no overhead
to the scheduler code and does not consume a TIF flag or extra
memory. The only overhead occurs when we are migrating a task, and it
is charged to a separate process.

Best Regards,
Petr
