Message-ID: <4E9E9255.7090601@parallels.com>
Date:	Wed, 19 Oct 2011 13:03:17 +0400
From:	Pavel Emelyanov <xemul@...allels.com>
To:	Tejun Heo <tj@...nel.org>
CC:	Cyrill Gorcunov <gorcunov@...nvz.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Andrey Vagin <avagin@...allels.com>,
	James Bottomley <jbottomley@...allels.com>,
	Glauber Costa <glommer@...allels.com>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	Dave Hansen <dave@...ux.vnet.ibm.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Daniel Lezcano <dlezcano@...ibm.com>,
	Alexey Dobriyan <adobriyan@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Oleg Nesterov <oleg@...hat.com>
Subject: Re: [patch 5/5] elf: Add support for loading ET_CKPT files

On 10/14/2011 09:33 PM, Tejun Heo wrote:
> Hello, again.

Hi!

Sorry for the delay in responding. I will answer both of your concerns at once.

> I don't think this is a good idea.  We already have most of interface
> necessary for restoring task state and there's no need to put it into
> the kernel as one piece.  If there's something which can't be done
> from userland using existing interfaces, let's please discuss what
> they are and whether they can be resolved differently first.

The rest of the API we would require will look too special-purpose (like the
ability Cyrill mentioned to set the mm's fields such as start_code, etc.).
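
Just to illustrate what I mean by "special-purpose": from userspace such an
interface would end up being one call per mm field, roughly like the sketch
below (the opcode names, values and addresses here are purely for the sake
of example):

#include <sys/prctl.h>
#include <stdio.h>

#ifndef PR_SET_MM
#define PR_SET_MM               35
#define PR_SET_MM_START_CODE    1
#define PR_SET_MM_END_CODE      2
#endif

int main(void)
{
        /* One call per mm field - this is the "too special-purpose"
         * flavour of interface I'm talking about. */
        if (prctl(PR_SET_MM, PR_SET_MM_START_CODE, 0x400000UL, 0, 0))
                perror("PR_SET_MM_START_CODE");
        if (prctl(PR_SET_MM, PR_SET_MM_END_CODE, 0x401000UL, 0, 0))
                perror("PR_SET_MM_END_CODE");
        return 0;
}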

Besides, injecting parasite code to restore the memory state will run us into
certain limitations. E.g. what address range in the target task's address space
should the parasite occupy? What if we fail to find one? How should we act
then?
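
(By "finding one" I mean the restorer doing something like the sketch below:
scan /proc/<pid>/maps for a gap big enough for the parasite. The helper name
and constants are made up, but it shows the failure mode we would have to
handle.)

#include <stdio.h>
#include <sys/types.h>

/* Look for a hole of at least `need' bytes in the target's address space
 * by walking /proc/<pid>/maps.  Returns 0 if no suitable gap is found. */
static unsigned long find_hole(pid_t pid, unsigned long need)
{
        char path[64], line[256];
        unsigned long prev_end = 0x10000;  /* skip the low, unmappable area */
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/maps", pid);
        f = fopen(path, "r");
        if (!f)
                return 0;

        while (fgets(line, sizeof(line), f)) {
                unsigned long start, end;

                if (sscanf(line, "%lx-%lx", &start, &end) != 2)
                        continue;
                if (start - prev_end >= need) {
                        fclose(f);
                        return prev_end;   /* gap big enough for the parasite */
                }
                prev_end = end;
        }

        fclose(f);
        return 0;   /* no suitable hole - the failure case mentioned above */
}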

> The exec path is very intricate as it is and it would be very easy to
> introduce security or other bugs by altering its basic behavior.  

Can you elaborate a bit more on this? "Other" bugs can be introduced by
any piece of code injected into the kernel, but what about security?

> exec presumes destruction of (most of) the current process and all its
> other threads and replacing them w/ fresh states from an executable.

This is *exactly* what is expected from the restoration process.

> The scary part - interaction with process hierarchy and zapping of the
> current state - is handled from the core exec code.

Right.

> I see that you removed zapping to allow restoring multi-threaded
> process, which seems quite scary to me.  It might be okay, I don't
> know, but IMHO it just isn't a very good idea to introduce such
> variation to basic exec behavior for this rather narrow use case.

Why? Can you describe your thoughts on this in more detail? Maybe I'm really
missing something important.

> In addition, leaving it alone doesn't really solve multi-threaded
> restoring, does it?  

So your main concern is about multi-threaded tasks, right? I think we can
prepare an exec handler that demonstrates how this can look.

> Who sets the task states for other threads? 

Which states? Pids and other IDs? The threads themselves. Memory state? The
exec handler itself. Registers? My plan is the same - the exec handler.

> The current code doesn't seem to be doing anything but if you're gonna add
> it later, how are you gonna synchronize other threads?

Synchronize what? If we're providing userspace with a way to put things right,
it's up to userspace to use it correctly. We saw the same with pid namespaces -
there are plenty of ways to kill an app with the CLONE_NEWPID clone flag, but
the intention is not to be 100% fool-proof, it is to provide the ability to do
what userspace needs.

One way to restore a multi-threaded task could be: clone the threads from
userspace, wait for them to prepare themselves the way they need, then have
the threads go to sleep while the leader calls execve. That way the userspace
side of the state stays consistent.
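
Roughly, the userspace side of that could look like the sketch below. It's
only an illustration: a real restorer would use raw clone() instead of
pthreads, and the per-thread restore step and the "app.ckpt" image name are
made up.

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

#define NR_THREADS 4

static pthread_barrier_t ready;

static void *thread_restore(void *arg)
{
        /* ... restore per-thread state here: creds, blocked signals, ... */
        pthread_barrier_wait(&ready);   /* tell the leader we are done */
        pause();                        /* sleep until the new image takes over */
        return NULL;
}

int main(void)
{
        char *argv[] = { "app.ckpt", NULL };
        pthread_t tid[NR_THREADS];
        int i;

        pthread_barrier_init(&ready, NULL, NR_THREADS + 1);

        for (i = 0; i < NR_THREADS; i++)
                pthread_create(&tid[i], NULL, thread_restore, NULL);

        pthread_barrier_wait(&ready);   /* wait until every thread is prepared */

        /* The thread group leader loads the checkpoint image; with the
         * proposed ET_CKPT handler this would restore the whole group. */
        execve("./app.ckpt", argv, NULL);
        perror("execve");
        return 1;
}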

> If not, what's the point of pushing this chunk in when userland task state
> restoration is necessary anyway?
> 
> So, at the moment, I'm rather strongly against this approach.


> On Fri, Oct 14, 2011 at 10:10:33AM -0700, Tejun Heo wrote:
>> I see that you removed zapping to allow restoring multi-threaded
>> process, which seems quite scary to me.  It might be okay, I don't
>> know, but IMHO it just isn't a very good idea to introduce such
>> variation to basic exec behavior for this rather narrow use case.
> 
> * It's still calling exec_mmap(), so the exec'ing thread will be on
>   the new mm while other threads would still be on the old one.

Why do you think it's a problem?

> * There are operations which assume that the calling task is
>   de-threaded and thus has exclusive access to data structures which
>   can be shared among the thread group (e.g. signal struct).  How are
>   accesses to them synchronized?

If we're talking about keeping the userspace state solid, then stopping the
other threads solves this.

> * What about properties which should be reset and then shared by
>   threads by inheriting (credentials, comm, self_exec_id and so on)?
>   e.g. new thread follows exec-time imposed rules but old ones keep
>   their original credentials.

I'm looking at sys_setresuid and see that threads have independent creds, and
thus they should be restored *before* calling exec.
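
(To spell this out: at the syscall level setresuid() changes the creds of the
calling thread only; it's the glibc wrapper that broadcasts the change to the
whole thread group. So in the restorer each thread could reinstate its own
creds with the raw syscall, roughly like this, before the leader calls exec:)

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <stdio.h>

/* Restore this thread's credentials with the raw syscall so the change is
 * NOT propagated to the other threads (the glibc wrapper would broadcast
 * it to the whole thread group). */
static int restore_my_creds(uid_t ruid, uid_t euid, uid_t suid)
{
        if (syscall(SYS_setresuid, ruid, euid, suid) < 0) {
                perror("setresuid");
                return -1;
        }
        return 0;
}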

> So, no, it really doesn't look good.

That's a pity. As I stated above, we'll try to demonstrate an exec handler
capable of restoring a multi-threaded app and send the 2nd RFC. Can you please
answer my questions above so that we keep your concerns in mind while doing
this?

> Thanks.

Thank you for your feedback!