Date:	Tue, 12 Aug 2008 20:11:17 -0700
From:	Daniel Walker <dwalker@...sta.com>
To:	Dave Chinner <david@...morbit.com>
Cc:	xfs@....sgi.com, linux-kernel@...r.kernel.org, matthew@....cx
Subject: Re: [PATCH 4/6] Replace inode flush semaphore with a completion

On Fri, 2008-06-27 at 18:44 +1000, Dave Chinner wrote:
> Use the new completion flush code to implement the inode
> flush lock. Removes one of the final users of semaphores
> in the XFS code base.
> 
> Version 2:
> o make flock functions static inlines
> o use new completion interfaces

I was looking over this lock in XFS .. the iflock/ifunlock pair is used
very much like a mutex in most of the calling locations: the lock is
taken at the start, and the unlock happens when the function calls
bottom out. It seems like a better way to go would be to change from,

xfs_ilock(uqp, XFS_ILOCK_EXCL);
xfs_iflock(uqp);
error = xfs_iflush(uqp, XFS_IFLUSH_SYNC);

Where xfs_iflush() eventually does the unlock, to,

xfs_ilock(uqp, XFS_ILOCK_EXCL);
xfs_iflock(uqp);
error = xfs_iflush(uqp, XFS_IFLUSH_SYNC);
xfs_ifunlock(uqp);

And remove the unlocking from inside xfs_iflush(). Then use a flag to
indicate that a flush is in progress, and a completion
(wait_for_completion()) for the case where another thread needs to wait
for an async flush to finish.
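
Roughly, I'm picturing something like the sketch below. This is only an
illustration -- the struct and the names (flush_state, flush_inprogress,
flush_done, and so on) are made up rather than taken from the actual XFS
inode, and it glosses over the re-arming races a real implementation
would have to handle:

#include <linux/completion.h>
#include <linux/spinlock.h>

struct flush_state {
	spinlock_t		lock;		/* protects flush_inprogress */
	int			flush_inprogress;
	struct completion	flush_done;	/* signalled when the flush finishes */
};

static inline void flush_state_init(struct flush_state *fs)
{
	spin_lock_init(&fs->lock);
	fs->flush_inprogress = 0;
	init_completion(&fs->flush_done);
}

/* Caller holds the inode lock; mark a flush as in progress. */
static inline void flush_start(struct flush_state *fs)
{
	spin_lock(&fs->lock);
	fs->flush_inprogress = 1;
	INIT_COMPLETION(fs->flush_done);	/* re-arm for this flush */
	spin_unlock(&fs->lock);
}

/* Called from I/O completion when the (possibly async) flush is done. */
static inline void flush_finish(struct flush_state *fs)
{
	spin_lock(&fs->lock);
	fs->flush_inprogress = 0;
	spin_unlock(&fs->lock);
	complete_all(&fs->flush_done);		/* wake all waiters */
}

/* A thread that needs the flush result blocks here until it completes. */
static inline void flush_wait(struct flush_state *fs)
{
	int busy;

	spin_lock(&fs->lock);
	busy = fs->flush_inprogress;
	spin_unlock(&fs->lock);
	if (busy)
		wait_for_completion(&fs->flush_done);
}

The point being that the lock/unlock pairing stays visible at the call
sites instead of being buried inside xfs_iflush().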

That seems vastly more complex than your current patch, but I think it
will be much cleaner ..

Daniel

