Message-ID: <835.1561539007@warthog.procyon.org.uk>
Date: Wed, 26 Jun 2019 09:50:07 +0100
From: David Howells <dhowells@...hat.com>
To: torvalds@...ux-foundation.org
cc: dhowells@...hat.com, iwienand@...hat.com,
linux-afs@...ts.infradead.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [GIT PULL] AFS fixes
Hi Linus,
Could you pull this please?
There are four patches:
(1) Fix the printing of the "vnode modified" warning to exclude checks on
files for which we don't have a callback promise from the server (and
so don't expect the server to tell us when they change).
Without this, for every file or directory for which we still have an
in-core inode that gets changed on the server, we may get a message
logged when we next look at it. This can happen in bulk if, for
instance, someone does "vos release" to update a R/O volume from a R/W
volume and a whole set of files all change together.
We only really want to log a message if the file changed and the server
didn't tell us about it or we failed to track the state internally (the
check is sketched after this list).
(2) Fix accidental corruption of either afs_vlserver struct objects or the
memory locations that follow them (which could hold anything). The
issue is caused by a union in struct afs_call that points to two
different structs (the union saves space in the struct). The call
cleanup code assumes that it can simply call the cleanup for one of
those structs if the pointer is not NULL - when it might actually be
pointing to the other struct. The hazard is sketched after this list.
This means that every Volume Location RPC op is going to corrupt
something.
(3) Fix an uninitialised spinlock. This isn't too bad: it just causes a
one-off warning if lockdep is enabled when "vos release" is called, but
the spinlock still behaves correctly (see the sketch after this list).
(4) Fix the setting of i_blocks in the inode. This causes du, for example,
to produce incorrect results, but otherwise should not be dangerous to
the kernel. A sketch of the calculation follows the list.
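For reference, here is roughly what the check in (1) amounts to. This is an
illustrative sketch, not the real afs code; the structure and field names
here are made up for the example:

    #include <linux/types.h>
    #include <linux/printk.h>

    /* Only warn about an unexpected data-version change if the server had
     * promised (via a callback) to notify us of changes and didn't. */
    struct example_vnode {
            u64     data_version;           /* last data version we recorded */
            bool    have_cb_promise;        /* server promised to notify us */
    };

    static void example_note_remote_version(struct example_vnode *vnode,
                                            u64 new_version)
    {
            if (new_version != vnode->data_version && vnode->have_cb_promise)
                    pr_warn("vnode modified without callback break\n");
            vnode->data_version = new_version;
    }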
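The union hazard in (2) looks roughly like the code below. Again, this is a
sketch with made-up names and trivial refcounting, not the actual struct
afs_call layout or cleanup path:

    struct example_vlserver  { int usage; };
    struct example_addr_list { int usage; };

    static void example_put_vlserver(struct example_vlserver *v)
    {
            if (v)
                    v->usage--;
    }

    static void example_put_addr_list(struct example_addr_list *a)
    {
            if (a)
                    a->usage--;
    }

    struct example_call {
            union {                         /* space-saving union */
                    struct example_vlserver  *vlserver;
                    struct example_addr_list *addrs;
            };
            int op;                         /* records which member is live */
    };

    #define EX_OP_VLSERVER  0
    #define EX_OP_ADDRS     1

    static void example_cleanup(struct example_call *call)
    {
            /* Buggy cleanup: always treated the union as a vlserver pointer,
             * so for an address-list call it scribbled on whatever happened
             * to live at that offset in the other struct:
             *
             *      example_put_vlserver(call->vlserver);
             *
             * Fixed cleanup: release according to which member is live. */
            switch (call->op) {
            case EX_OP_VLSERVER:
                    example_put_vlserver(call->vlserver);
                    break;
            case EX_OP_ADDRS:
                    example_put_addr_list(call->addrs);
                    break;
            }
    }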
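The spinlock in (3) is the usual missing-init case: lockdep complains the
first time an uninitialised lock is taken, even though the lock otherwise
works. A minimal sketch of the kind of fix involved, with illustrative
names:

    #include <linux/spinlock.h>

    struct example_object {
            spinlock_t lock;
            /* ... */
    };

    static void example_object_init(struct example_object *obj)
    {
            /* Without this, lockdep warns when the lock is first taken,
             * although the lock itself still behaves correctly. */
            spin_lock_init(&obj->lock);
    }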
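For (4), i_blocks is counted in 512-byte units, so it is normally derived
from the file size by rounding up. The sketch below shows that relationship;
it is not necessarily the exact expression used in the patch:

    #include <linux/fs.h>

    static void example_set_i_blocks(struct inode *inode, loff_t size)
    {
            /* i_blocks is in 512-byte units; round the size up. */
            inode->i_blocks = (size + 511) >> 9;
    }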
The in-kernel AFS client has been undergoing testing on opendev.org on one
of their mirror machines. They are using AFS to hold data that is then
served via apache, and Ian Wienand reported seeing oopses, spontaneous
machine reboots and updates to volumes going missing. This patch series
appears to have fixed the problem, very probably due to patch (2), though
that's not 100% certain.
Reviewed-by: Jeffrey Altman <jaltman@...istor.com>
Tested-by: Marc Dionne <marc.dionne@...istor.com>
Tested-by: Ian Wienand <iwienand@...hat.com>