Message-Id: <4d012a025304b1f75@agluck-desktop.sc.intel.com>
Date: Thu, 09 Dec 2010 11:12:02 -0800
From: "Luck, Tony" <tony.luck@...el.com>
To: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Cc: tglx@...utronix.de, mingo@...e.hu, greg@...ah.com,
akpm@...ux-foundation.org, ying.huang@...el.com,
"Borislav Petkov" <bp@...en8.de>,
"David Miller" <davem@...emloft.net>,
"Alan Cox" <alan@...rguk.ukuu.org.uk>,
"Jim Keniston" <jkenisto@...ux.vnet.ibm.com>,
"Kyungmin Park" <kmpark@...radead.org>,
"Geert Uytterhoeven" <geert@...ux-m68k.org>
Subject: Re: [RFC] persistent store (version 3) (part 1 of 2)
> This upsets the traditional layout of having the error
> recovery part of the function undo all the things that
> we did leading up to the error. Pity, because your
> version is easier to read.
But most of your "move the error fixups to the tail, so the
normal code path is easier to follow" approach does still hold
and can be used. How about this:
-Tony
---
int pstore_mkfile(char *name, char *data, size_t size, struct timespec time,
		  void *private)
{
	struct dentry *root = pstore_sb->s_root;
	struct dentry *dentry;
	struct inode *inode;
	int rc;

	rc = -ENOMEM;
	inode = pstore_get_inode(pstore_sb, root->d_inode, S_IFREG | 0444, 0);
	if (!inode)
		goto fail;
	inode->i_private = private;

	mutex_lock(&root->d_inode->i_mutex);

	rc = -ENOSPC;
	dentry = d_alloc_name(root, name);
	if (!dentry)		/* d_alloc_name() returns NULL on failure, not an ERR_PTR */
		goto fail_alloc;

	d_add(dentry, inode);

	mutex_unlock(&root->d_inode->i_mutex);

	if (!pstore_writefile(inode, dentry, data, size))
		goto fail_write;

	if (time.tv_sec)
		inode->i_mtime = inode->i_ctime = time;

	return 0;

fail_write:
	/* d_add() succeeded, so drop the dentry (and with it the inode) */
	inode->i_nlink--;
	mutex_lock(&root->d_inode->i_mutex);
	d_delete(dentry);
	dput(dentry);
	mutex_unlock(&root->d_inode->i_mutex);
	goto fail;

fail_alloc:
	mutex_unlock(&root->d_inode->i_mutex);
	iput(inode);

fail:
	return rc;
}
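
For reference, the "traditional layout" in the quoted comment is the usual
cascading-cleanup idiom sketched below. This is a standalone illustration,
not part of the patch; grab_*() and release_*() are hypothetical stand-ins
for the real setup and teardown steps.

/*
 * Sketch of the traditional layout: tail labels that fall through,
 * undoing in reverse order everything acquired before the failure
 * point, so the normal path reads straight down.
 */
#include <stdio.h>

static int grab_first(void)   { return 0; }	/* 0 == success */
static int grab_second(void)  { return 0; }
static int grab_third(void)   { return 0; }
static void release_first(void)  { }
static void release_second(void) { }

static int setup_thing(void)
{
	int rc;

	rc = grab_first();
	if (rc)
		goto fail;

	rc = grab_second();
	if (rc)
		goto fail_first;

	rc = grab_third();
	if (rc)
		goto fail_second;

	return 0;			/* success: keep all three */

fail_second:				/* unwind in reverse order */
	release_second();
fail_first:
	release_first();
fail:
	return rc;
}

int main(void)
{
	printf("setup_thing() = %d\n", setup_thing());
	return 0;
}

In pstore_mkfile() above the fixups cannot be a pure fall-through cascade:
the write-failure path has to re-take i_mutex before dropping the dentry
(which also releases the inode reference taken by d_add()), which is
presumably why fail_write does its own cleanup and then jumps straight
to fail.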