Message-ID: <20081031103722.GQ15171@hawkmoon.kerlabs.com>
Date: Fri, 31 Oct 2008 11:37:22 +0100
From: Louis Rilling <Louis.Rilling@...labs.com>
To: Oren Laadan <orenl@...columbia.edu>
Cc: Andrey Mirkin <major@...nvz.org>,
Dave Hansen <dave@...ux.vnet.ibm.com>,
"Serge E. Hallyn" <serue@...ibm.com>,
Cedric Le Goater <clg@...ibm.com>,
Daniel Lezcano <dlezcano@...ibm.com>,
containers@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [Devel] Re: [PATCH 0/9] OpenVZ kernel based
checkpointing/restart
On Thu, Oct 30, 2008 at 02:32:51PM -0400, Oren Laadan wrote:
>
>
> Louis Rilling wrote:
> > On Thu, Oct 30, 2008 at 01:45:25PM -0400, Oren Laadan wrote:
> >>
> >> Louis Rilling wrote:
> >>> In Kerrighed this is kernel-based, and will remain kernel-based because we
> >>> checkpoint a distributed task tree, and want to restart it as much as possible
> >>> with the same distribution. The distributed protocol used for restart is
> >>> currently too fragile and complex to rely on customized user-space
> >>> implementations. That said, if someone brings very good arguments in favor of
> >>> userspace implementations, we might consider changing this.
> >> Zap also has distributed checkpoint which does not require strict
> >> kernel-side ordering. Do you need that because you do SSI ?
> >
> > Yes. Tasks from different nodes have parent-child, session leader, etc.
> > relationships, and the distributed management of struct pid lifecycle is a bit
> > touchy too. By the way, splitting the checkpoint image into one file per task
> > helps us a lot to make restart parallel, because it is more efficient for the file
> > system to handle parallel reads of different files from different nodes than
> > parallel reads on a single file descriptor from different nodes.
>
> You can also make parallel restart work with the single stream, without
> much effort. Particularly if you store everything on the file system.
Sure we can use a single stream, since we already share file descriptors across
nodes. But the distributed synchronization of the file pointer is costly
compared to having each node access different files. This way we push the
parallelization bottleneck down to the file system rather than into the
distributed VFS layer.
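
To make that concrete, here is a minimal user-space sketch (not the actual
Kerrighed code; the directory layout, file naming and helper names are just
assumptions for illustration): with one image file per task, each node only
opens the files of the tasks it restarts, so there is no shared file position
to synchronize across nodes.

/*
 * Hypothetical per-task checkpoint layout: ckpt_dir/task-<pid>.img
 * Each restarting node opens its own files; reads from different nodes
 * never contend on a single shared file offset.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

/* Open the per-task image for reading during restart. */
static int open_task_image(const char *ckpt_dir, pid_t pid)
{
	char path[256];

	/* e.g. /ckpt/app-42/task-1234.img -- assumed naming scheme */
	snprintf(path, sizeof(path), "%s/task-%d.img", ckpt_dir, (int)pid);
	return open(path, O_RDONLY);
}

int main(int argc, char **argv)
{
	int fd;

	if (argc < 3)
		return 1;

	/* This node restarts only the task whose pid it was given. */
	fd = open_task_image(argv[1], (pid_t)atoi(argv[2]));
	if (fd < 0)
		return 1;
	/* ... read and replay the task state from fd ... */
	close(fd);
	return 0;
}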
>
> In both cases, the limiting factor is shared resources - where one task
> cannot proceed with checkpoint because it waits for another task to first
> (re)create that resource.
We just try to avoid other bottlenecks :) And besides file descriptors, shared
resources mostly appear in multi-threaded programs, which are not the majority
of the workloads we can address.
Louis
--
Dr Louis Rilling                Kerlabs
Skype: louis.rilling            Batiment Germanium
Phone: (+33|0) 6 80 89 08 23    80 avenue des Buttes de Coesmes
http://www.kerlabs.com/         35700 Rennes