Message-ID: <20070730192954.GA7363@localhost>
Date: Mon, 30 Jul 2007 21:35:33 +0200
From: Lars Ellenberg <lars.ellenberg@...bit.com>
To: Pavel Machek <pavel@....cz>
Cc: Jan Engelhardt <jengelh@...putergmbh.de>,
Jens Axboe <axboe@...nel.dk>, Andrew Morton <akpm@...l.org>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [DRIVER SUBMISSION] DRBD wants to go mainline
On Fri, Jul 27, 2007 at 06:46:17PM +0000, Pavel Machek wrote:
> Hi!
>
> > > >We implement shared-disk semantics in a shared-nothing cluster.
> > >
> > > If nothing is shared, the disk is not shared, but got shared-disk
> > > semantics? A little confusing.
> >
> > Think of it as RAID1 over TCP.
> > Typically you have one Node in Primary, the other as Secondary,
> > replication target only.
>
> I guess TCP means people should not swap over it?
people should not swap over DRBD,
because it would not be useful.
DRBD exists to make application data available on more than one node,
without a single point of failure; when the node the app currently runs
on crashes, the data is still there, so some other node can take over.
what would you do with the swap of a crashed node, apart from,
well, crash analysis?
and you don't need swap to be highly available for that.
besides, yes, when you have network io in the block io path,
with linux (and probably most other OSes),
there is the possibility of vm starvation or even deadlock.
the situation is improving, though - iirc, there has been talk of
an emergency memory pool very low in the network stack,
and some special "I am doing block-io" socket flag;
what is the status of that, anyone?
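(purely to illustrate the idea being asked about here, not an existing
interface -- both function names below are hypothetical:)

    #include <linux/net.h>
    #include <net/sock.h>

    /* hypothetical sketch: flag a replication socket as being part of
     * the block-io path, so that allocations on its behalf may be
     * served from an emergency reserve instead of waiting for page
     * writeout -- writeout that cannot complete without this very
     * socket making progress. */
    static void drbd_mark_data_socket(struct socket *sock)
    {
            sk_set_memalloc(sock->sk);  /* hypothetical flag setter */
    }

the hard part is of course the receive side: incoming acks and data
still have to be allocated for while the box is under memory pressure,
which is why such a reserve would have to live that low in the stack.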
I believe DRBD behaves very well even in oom situations;
we considered these things from the very beginning.
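(to give an idea what "considered from the very beginning" means in
practice, here is a minimal, simplified sketch of the usual mempool
pattern a block driver uses so that request allocation can always make
forward progress under memory pressure; the struct and function names
are made up for illustration, while mempool_create_slab_pool(),
mempool_alloc() and GFP_NOIO are the stock kernel primitives:)

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/list.h>
    #include <linux/mempool.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    /* illustrative request object, not actual DRBD code */
    struct demo_request {
            struct list_head list;
            sector_t sector;
    };

    static struct kmem_cache *demo_req_cache;
    static mempool_t *demo_req_pool;

    static int __init demo_create_pools(void)
    {
            demo_req_cache = kmem_cache_create("demo_req",
                            sizeof(struct demo_request), 0, 0, NULL);
            if (!demo_req_cache)
                    return -ENOMEM;

            /* pre-allocate so that at least 128 requests can always be
             * had, no matter how tight memory gets */
            demo_req_pool = mempool_create_slab_pool(128, demo_req_cache);
            if (!demo_req_pool) {
                    kmem_cache_destroy(demo_req_cache);
                    return -ENOMEM;
            }
            return 0;
    }

    static struct demo_request *demo_get_request(void)
    {
            /* GFP_NOIO: reclaim done for this allocation must not
             * recurse into block io, i.e. into ourselves */
            return mempool_alloc(demo_req_pool, GFP_NOIO);
    }

    static void demo_put_request(struct demo_request *req)
    {
            mempool_free(req, demo_req_pool);
    }

mempool_alloc() against a pool like that will sleep and retry from the
reserve rather than fail, which is exactly what you want on the
writeout path.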
that said,
I have not yet seen a DRBD cluster hang hard in OOM,
and we do operate quite a few busy
database/mail/web/file/application/iSCSI/whatever clusters.
Lars