Message-ID: <20091027101534.GA27584@skywalker.linux.vnet.ibm.com>
Date: Tue, 27 Oct 2009 15:45:34 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: Theodore Tso <tytso@....edu>,
	Parag Warudkar <parag.lkml@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-ext4@...r.kernel.org,
	bugzilla-daemon@...zilla.kernel.org
Subject: Re: [Bug 14354] Re: ext4 increased intolerance to unclean shutdown?

Can you try this patch?

commit a8836b1d6f92273e001012c7705ae8f4c3d5fb65
Author: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>
Date:   Tue Oct 27 15:36:38 2009 +0530

    ext4: discard preallocation during truncate

    We need to make sure that when we drop and reacquire the inode's
    i_data_sem we discard the inode preallocation. Otherwise we could
    have blocks marked as free in the bitmap but still belonging to
    prealloc space.

    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 5c5bc5d..a1ef1c3 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -209,6 +209,12 @@ static int try_to_extend_transaction(handle_t *handle, struct inode *inode)
 	up_write(&EXT4_I(inode)->i_data_sem);
 	ret = ext4_journal_restart(handle, blocks_for_truncate(inode));
 	down_write(&EXT4_I(inode)->i_data_sem);
+	/*
+	 * We have dropped i_data_sem, so somebody else could have done
+	 * block allocation. Discard the prealloc space created as part
+	 * of that block allocation.
+	 */
+	ext4_discard_preallocations(inode);
 	return ret;
 }
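For readers following bug 14354 who want to see the race in isolation: below is a minimal, self-contained pthread model of the hazard the patch closes. It is a sketch, not ext4 code; the names (prealloc_valid, restart_transaction, discard_preallocations) are hypothetical stand-ins. The mutex plays the role of i_data_sem, the boolean flag plays the role of the inode's preallocation list, and the unlock/relock pair mirrors the up_write()/down_write() around ext4_journal_restart() in try_to_extend_transaction() above.

/*
 * Toy model of the truncate/preallocation race (hypothetical names,
 * not ext4 code). The truncate path must drop its lock around the
 * journal restart; while the lock is dropped, another task can
 * allocate blocks and change the on-disk picture, so any state cached
 * before the drop -- including the preallocation space -- is stale.
 * The fix is to discard it right after retaking the lock, as the
 * patch does with ext4_discard_preallocations().
 *
 * Build with: cc -pthread race_model.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* ~ i_data_sem */
static bool prealloc_valid = true;       /* ~ the inode's prealloc space */

/* Analogue of ext4_discard_preallocations(inode). */
static void discard_preallocations(void)
{
	prealloc_valid = false;
}

/* Analogue of the patched try_to_extend_transaction(). */
static void restart_transaction(void)
{
	/* The lock cannot be held across the journal restart... */
	pthread_mutex_unlock(&lock);
	/* ...so other threads may allocate blocks right here... */
	pthread_mutex_lock(&lock);
	/*
	 * ...which means the prealloc state cached before the unlock
	 * can no longer be trusted. Drop it, as the patch does.
	 */
	discard_preallocations();
}

int main(void)
{
	pthread_mutex_lock(&lock);
	restart_transaction();
	printf("prealloc valid after restart: %s\n",
	       prealloc_valid ? "yes (unsafe)" : "no (discarded)");
	pthread_mutex_unlock(&lock);
	return 0;
}

The point of the patch, which the model tries to make explicit, is that dropping i_data_sem even briefly invalidates anything cached under it; discarding the preallocation after the down_write() cheaply restores the invariant that blocks marked free in the bitmap are not still claimed by prealloc space.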