Date:	Fri, 20 Feb 2015 11:05:44 -0600
From:	Josh Poimboeuf <jpoimboe@...hat.com>
To:	Jiri Kosina <jkosina@...e.cz>
Cc:	Vojtech Pavlik <vojtech@...e.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...hat.com>,
	Seth Jennings <sjenning@...hat.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] sched: add sched_task_call()

On Fri, Feb 20, 2015 at 09:49:32AM +0100, Jiri Kosina wrote:
> Alright, so to sum it up:
> 
> - current stack dumping (even looking at /proc/<pid>/stack) is not 
>   guaranteed to yield "correct" results in case the task is running at the 
>   time the stack is being examined
> 
> - the only fool-proof way is to send an IPI-NMI to all CPUs, and synchronize
>   the handlers with each other (to make sure that a reschedule doesn't
>   happen in the meantime on some CPU and another task doesn't start
>   running in the interim).
>   The NMI handler dumps its current stack in case it's running in the
>   context of the process whose stack is to be dumped. Otherwise, one of
>   the NMI handlers looks up the required task_struct and dumps its stack
>   if the task is not running on any CPU.
> 
> - For the live patching use-case, the stack has to be analyzed (and the
>   decision on what to do made based on that analysis) in the NMI handler
>   itself, otherwise it gets racy again.
> 
> Converting /proc/<pid>/stack to this mechanism seems like a correct thing
> to do in any case, as it's a slow path anyway.
> 
> The original intent seemed to have been to make this a fast path for the
> live patching case, but that's probably not possible, so it seems the
> price that will have to be paid for being able to finish live patching of
> CPU-bound processes is the cost of an IPI-NMI broadcast.
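
[For reference, a rough sketch of the rendezvous described above might
look like the following. Everything here is hypothetical and untested,
apart from the standard kernel helpers (atomic_inc(), num_online_cpus(),
task_curr(), show_stack(), etc.); it is not a proposed implementation.]

  static struct task_struct *target_task;  /* task whose stack we want */
  static atomic_t nmi_arrived, nmi_done;

  static int stack_dump_nmi_handler(unsigned int cmd, struct pt_regs *regs)
  {
          /*
           * First rendezvous: wait until every CPU is parked in its NMI
           * handler, so no reschedule can happen anywhere in the meantime.
           */
          atomic_inc(&nmi_arrived);
          while (atomic_read(&nmi_arrived) < num_online_cpus())
                  cpu_relax();

          if (current == target_task)
                  show_stack(current, NULL);        /* it runs right here */
          else if (!task_curr(target_task) && smp_processor_id() == 0)
                  show_stack(target_task, NULL);    /* not on any CPU */

          /*
           * Second rendezvous: nobody leaves (and thus nobody can
           * reschedule) until the dump has finished.
           */
          atomic_inc(&nmi_done);
          while (atomic_read(&nmi_done) < num_online_cpus())
                  cpu_relax();

          return NMI_HANDLED;
  }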

Hm, syncing IPIs among CPUs sounds pretty disruptive.

This is really two different issues, so I'll separate them:

1. /proc/pid/stack for running tasks

I haven't heard anybody demanding that /proc/<pid>/stack should actually
print the stack for running tasks.  My suggestion was just that we avoid
the possibility of printing garbage.

Today's behavior for a running task is (usually): 

  # cat /proc/802/stack
  [<ffffffffffffffff>] 0xffffffffffffffff

How about, when we detect a running task, we just always show that?
That would give us today's behavior, except without occasionally
printing garbage, and without the overhead of syncing IPIs.
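
If we went that route, the check could be as simple as the sketch below
(simplified from the real proc_pid_stack() in fs/proc/base.c;
dump_task_stack() is a hypothetical stand-in for the existing unwinding
code, not an actual kernel function):

  static int proc_pid_stack(struct seq_file *m, struct task_struct *task)
  {
          /*
           * Never unwind a stack that may be changing under us; print
           * the same placeholder a running task usually produces today.
           */
          if (task_curr(task)) {
                  seq_printf(m, "[<%016lx>] 0x%lx\n", -1UL, -1UL);
                  return 0;
          }
          return dump_task_stack(m, task);  /* hypothetical helper */
  }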

2. live patching of running tasks

I don't see why we would need to sync IPIs to patch CPU-bound
processes.  Why not use context tracking or the TIF_USERSPACE flag like
I mentioned before?
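
The idea being that a task executing in user space holds no kernel
stack frames, so it can be switched to the patched state without
looking at its stack at all.  As a sketch (TIF_USERSPACE and
klp_update_task() are hypothetical names, not existing kernel API):

  void klp_check_task(struct task_struct *task)
  {
          /*
           * Safe to patch: an empty kernel stack can't be executing
           * inside any function being replaced.
           */
          if (test_tsk_thread_flag(task, TIF_USERSPACE))
                  klp_update_task(task);
  }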

-- 
Josh