Message-Id: <1190398453.6690.12.camel@heimdal.trondhjem.org>
Date: Fri, 21 Sep 2007 14:14:13 -0400
From: Trond Myklebust <Trond.Myklebust@...app.com>
To: Chakri n <chakriin5@...il.com>
Cc: nfs@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: [NFS] NFS on loopback locks up entire system(2.6.23-rc6)?
No. The requirement for 'hard' mounts is not that the server be up all
the time. The server can go up and down as it pleases: the client can
happily recover from that.
The requirement is rather that nobody remove the server permanently
before the application is done with it and the partition has been
unmounted. That is
hardly unreasonable (it is the only way I know of to ensure data
integrity), and it is much less strict than the requirements for local
disks.
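
To spell out what that guarantee buys you: on a hard mount, the flush
done by fsync() and close() retries indefinitely instead of timing
out, so a successful return really does mean the data reached the
server. A minimal sketch (the path below is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char buf[] = "important record\n";

        /* Hypothetical path; any file on the hard-mounted NFS
         * partition will do. */
        int fd = open("/mnt/nfs/data.log",
                      O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd == -1) { perror("open"); return 1; }

        if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf)) {
                perror("write");
                return 1;
        }

        /* On a hard mount these block until the server acknowledges
         * the data rather than failing, so success can be trusted. */
        if (fsync(fd) == -1) { perror("fsync"); return 1; }
        if (close(fd) == -1) { perror("close"); return 1; }
        return 0;
}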
Trond
On Fri, 2007-09-21 at 11:06 -0700, Chakri n wrote:
> Isn't this a strict requirement on the client side, asking to
> guarantee that the server stays up all the time?
>
> I have seen many cases where people go and directly change the IP of
> their NFS filers or servers, with little regard for the clients using
> them.
>
> Can we get around this with some sort of congestion logic?
>
> Thanks
> --Chakri
>
> On 9/21/07, Trond Myklebust <Trond.Myklebust@...app.com> wrote:
> > On Fri, 2007-09-21 at 09:20 -0700, Chakri n wrote:
> > > Thanks.
> > >
> > > I was using flock (BSD locking) and I think the problem should be
> > > solved if I move my application to use POSIX locks.
> >
> > Yup.
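
For reference, the POSIX equivalent of a whole-file flock() is a
byte-range fcntl() lock covering the entire file; unlike flock(),
these are forwarded to the server through the NLM protocol. A minimal
sketch (the lock file path is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/mnt/nfs/app.lock", O_RDWR | O_CREAT, 0644);
        if (fd == -1) { perror("open"); return 1; }

        struct flock fl = {
                .l_type   = F_WRLCK,    /* exclusive lock */
                .l_whence = SEEK_SET,
                .l_start  = 0,
                .l_len    = 0,          /* 0 == to end of file */
        };

        /* F_SETLKW blocks until the lock is granted. */
        if (fcntl(fd, F_SETLKW, &fl) == -1) {
                perror("fcntl(F_SETLKW)");
                return 1;
        }

        /* ... critical section ... */

        fl.l_type = F_UNLCK;
        if (fcntl(fd, F_SETLK, &fl) == -1)
                perror("fcntl(F_UNLCK)");
        close(fd);
        return 0;
}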
> >
> > > And is there any option to avoid processes waiting indefinitely to
> > > free pages tied up in NFS requests to an unresponsive NFS server?
> >
> > The only solution I know of is to use soft mounts, but that brings
> > another set of problems:
> > 1. Most applications don't know how to recover safely from an EIO
> > error (see the sketch below the quoted thread).
> > 2. You lose data.
> >
> > Cheers
> > Trond
> >
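
As for recovering from EIO on a soft mount (point 1 above): the
application itself has to check every write(), fsync() and close()
and decide what to do with data the server never acknowledged. A
minimal sketch, with an illustrative path and a deliberately
simplistic retry policy:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a whole buffer, retrying EIO a few times before giving up.
 * On a soft mount a major RPC timeout surfaces as EIO; if we give
 * up, the data is lost unless the caller keeps its copy. */
static int write_all(int fd, const char *buf, size_t len)
{
        int retries = 3;

        while (len > 0) {
                ssize_t n = write(fd, buf, len);
                if (n < 0) {
                        if (errno == EIO && retries-- > 0) {
                                sleep(1);       /* crude backoff */
                                continue;
                        }
                        return -1;      /* caller still owns the data */
                }
                buf += n;
                len -= (size_t)n;
        }
        return 0;
}

int main(void)
{
        const char msg[] = "payload\n";
        int fd = open("/mnt/nfs/out.dat",
                      O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) { perror("open"); return 1; }

        if (write_all(fd, msg, strlen(msg)) == -1)
                perror("write_all");    /* data may not be on the server */

        /* fsync() and close() can also return EIO on a soft mount. */
        if (fsync(fd) == -1)
                perror("fsync");
        if (close(fd) == -1)
                perror("close");
        return 0;
}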