Message-ID: <20070601115644.GA30328@lazybastard.org>
Date:	Fri, 1 Jun 2007 13:56:44 +0200
From:	Jörn Engel <joern@...ybastard.org>
To:	Anton Altaparmakov <aia21@....ac.uk>
Cc:	Andrew Morton <akpm@...l.org>, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Dave Kleikamp <shaggy@...ux.vnet.ibm.com>,
	David Chinner <dgc@....com>, Al Viro <viro@....linux.org.uk>,
	Christoph Hellwig <hch@...radead.org>
Subject: Re: [PATCH resend] introduce I_SYNC

On Fri, 1 June 2007 09:59:17 +0100, Anton Altaparmakov wrote:
> 
> I agree that your patch is a good idea.  I reviewed the latest  
> incarnation and it makes sense to me.  And your comment concerning  

Thanks.

> the flags is a very welcome addition.  Probably ought to find its way  
> into Documentation/filesystems/Locking or vfs.txt or somewhere like  
> that also.

Might make sense.  Right now I would be more interested in getting the
questions at the bottom answered.  Possibly the right answer is "here
is a patch to fix it".

> Note that once your patch is applied I think it would make sense to  
> follow up with a second patch to remove the I_LOCK flag completely.   
> The only remaining uses are either together with I_NEW, in which case
> I_LOCK can be removed altogether, or on their own, in which case it
> can be substituted with I_NEW.  This is because no places remain where we
> set I_LOCK by itself any more with your patch.  The only place where  
> we set it is the place where a new inode gets created in memory and  
> in that place we also set I_NEW at the same time as I_LOCK.   
> wait_on_inode() can then be changed to wait on I_NEW instead of on
> I_LOCK.  That way we have one less confusing flag to worry about and
> things are much easier to understand.

True.  Waiting on I_NEW would be equivalent to waiting on I_LOCK.  To
some degree I still prefer the current method.  I_NEW is a state, while
I_LOCK is a lock or completion method.  Having a confusing mix of
state/lock/completion bits is bad enough.  Having such a mix of uses for
a single bit could be even worse.
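
For reference, the mechanical side of Anton's change would be tiny;
roughly this against include/linux/writeback.h (an untested sketch,
assuming __I_NEW is given a wait-bit definition like __I_LOCK has
today and that unlock_new_inode() wakes the I_NEW waiters):

static inline void wait_on_inode(struct inode *inode)
{
	might_sleep();
	/* I_NEW is only ever set together with I_LOCK at inode
	 * creation time, so waiting on it is equivalent to
	 * waiting on I_LOCK. */
	wait_on_bit(&inode->i_state, __I_NEW, inode_wait,
			TASK_UNINTERRUPTIBLE);
}

The diff is not the hard part, though; the flag semantics are.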

> >I still suspect that NTFS has hit the same deadlock and its current  
> >"fix" will cause data corruption instead.
> 
> The NTFS "fix" will not cause data corruption at all.  The usage in  
> NTFS is very different...  I am afraid your patch does not address
> the deadlock with NTFS; rather, it only addresses the inode write
> deadlock and not the get_new_inode() deadlock that exists with
> ilookup5() and is avoided by ilookup5_nowait().  This
> deadlock is inherent to what NTFS does so you don't need to worry  
> about it.  (If you want I am happy to explain it but I would rather  
> not waste my time explaining if no-one except me cares about it...)

Two separate deadlocks exist; we agree on this.  I_SYNC only solves one
of the two.  LogFS solved the second deadlock by implementing its own
destroy_inode() and drop_inode() methods.  Any inode that would cause
the get_new_inode() deadlock gets cached, and logfs implements its own
iget() method to return a cached inode from any deadlock-prone codepath.
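
Schematically, the iget() side of the workaround looks like this (a
condensed sketch of the approach, not the literal logfs code; the
names are illustrative):

/* Schematic only -- details differ in the actual logfs tree. */
struct inode *logfs_safe_iget(struct super_block *sb, ino_t ino)
{
	struct logfs_super *super = logfs_super(sb);

	/*
	 * Inodes that can be needed from inside inode creation (and
	 * would thus recurse into get_new_inode()) are pinned at
	 * mount time; logfs's own ->drop_inode() keeps them off the
	 * normal destruction path.  Deadlock-prone codepaths get the
	 * pinned copy instead of going through iget()/ilookup5().
	 */
	if (ino == LOGFS_INO_MASTER)
		return super->s_master_inode;

	return iget(sb, ino);	/* the normal, deadlock-free path */
}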

It is not a pretty solution, but ilookup5_nowait() would definitely
cause data corruption for logfs, so that was not an option.  Better
ideas are very welcome.
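
For anyone who does care, the lookup deadlock has roughly this shape
(a generic illustration, not actual NTFS or logfs code; other_ino()
and fs_test() are made-up helpers):

/* Called from iget() while I_LOCK|I_NEW are still set on @inode. */
static void fs_read_inode(struct inode *inode)
{
	struct inode *other;

	/*
	 * Looking up a second inode from inside inode creation is
	 * the dangerous part: ilookup5() calls wait_on_inode() on
	 * any match that still has I_LOCK|I_NEW set.  If the owner
	 * of that half-constructed inode is in turn waiting on us,
	 * both threads sleep forever.  ilookup5_nowait() returns
	 * without waiting, which avoids the deadlock at the price
	 * of possibly seeing a not-yet-initialized inode.
	 */
	other = ilookup5(inode->i_sb, other_ino(inode), fs_test, inode);
	/* ... */
}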

Jörn

-- 
The strong give up and move away, while the weak give up and stay.
-- unknown
