Message-ID: <20090212112220.GA29185@fogou.chygwyn.com>
Date: Thu, 12 Feb 2009 11:22:20 +0000
From: steve@...gwyn.com
To: Kirill Kuvaldin <kirill.kuvaldin@...il.com>
Cc: linux-cluster@...hat.com, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: GFS2 file locking issues
Hi,
On Wed, Feb 11, 2009 at 08:57:12PM +0300, Kirill Kuvaldin wrote:
> On Wed, Feb 11, 2009 at 8:18 PM, <steve@...gwyn.com> wrote:
> > cat /proc/mounts
>
> /dev/mapper/gfsc-lvol0 /gfs2 gfs2 rw,hostdata=jid=0:id=720897:first=1 0 0
>
> I also tried mounting with lockproto=lock_dlm specified explicitly, but
> it didn't help.
>
> Is my understanding of locking correct after all? Is it the case that
> if a process takes a flock() lock on a file, no other process, including
> those running on other cluster nodes, can obtain the lock until the
> first writer releases it?
>
Yes, that's how it is supposed to work.
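
For example, something along these lines (an untested sketch, not GFS2
code itself, and /gfs2/testfile is just an example path) should show the
behaviour: run one copy, then start a second copy on any node while the
first is sleeping, and the second one should block in flock() until the
first releases the lock:

#include <sys/file.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/gfs2/testfile", O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Blocks here if any other process, on any node, holds the lock */
        if (flock(fd, LOCK_EX) < 0) {
                perror("flock");
                return 1;
        }
        printf("got exclusive flock, sleeping...\n");
        sleep(30);              /* a second copy should block during this */

        flock(fd, LOCK_UN);     /* waiters may now acquire the lock */
        close(fd);
        return 0;
}
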
> >
> > It should be listed in the options. That will also tell you if localflocks
> > has been set as well. We do know of a bug in the GFS/GFS2 flock code though,
> > it ought to be using an interruptible wait and it doesn't at the moment.
> > Otherwise I don't know of any other issues.
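
To expand on what I mean by an interruptible wait (schematic only, not
the actual GFS2 code; "waitq" and "lock_granted" are made-up names):
the flock path currently waits along the lines of

        /* current: the task sleeps uninterruptibly until the lock is
         * granted, so e.g. Ctrl-C cannot break out of a blocked flock() */
        wait_event(waitq, lock_granted);

whereas it ought to be more like

        /* interruptible: a signal wakes the task and the error can be
         * returned to userspace */
        error = wait_event_interruptible(waitq, lock_granted);
        if (error)
                return error;
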
> >
> > Which kernel version are you using?
>
> 2.6.18-92.el5xen (from CentOS 5.2)
>
>
> Kirill
The CentOS 5.2 kernel is rather old[*]; I'd suggest using a
more recent kernel and gfs2-utils: either something derived
from a recent upstream (Linus) kernel, such as Fedora, or
CentOS 5.3 and upwards. It might not cure this specific
issue, but it will cure a lot of other issues which you might
run across.
Steve.
[*] GFS2 was not an officially supported feature even in Red Hat
Enterprise Linux of the same version number. As a result, any
bugs found have been fixed in 5.3 and up, rather than in 5.2
update releases.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/