Message-ID: <533C69A9.8090804@hp.com>
Date:	Wed, 02 Apr 2014 13:48:57 -0600
From:	Thavatchai Makphaibulchoke <thavatchai.makpahibulchoke@...com>
To:	Jan Kara <jack@...e.cz>, T Makphaibulchoke <tmac@...com>
CC:	tytso@....edu, adilger.kernel@...ger.ca,
	linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
	aswin@...com
Subject: Re: [PATCH v2] fs/ext4: increase parallelism in updating ext4 orphan
 list

On 04/02/2014 11:41 AM, Jan Kara wrote:
>   Thanks for the patches and measurements! So I agree we contend a lot on
> orphan list changes in ext4. But what you do seems to be unnecessarily
> complicated and somewhat hiding the real substance of the patch. If I
> understand your patch correctly, all it does is that it does the
> preliminary work (ext4_reserve_inode_write(),
> ext4_journal_get_write_access()) without the global orphan mutex (under the
> hashed mutex).
> 

Thanks Jan for the comments.  Yes, doing some of the preliminary work without grabbing the global mutex is part of the patch's strategy.
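
For illustration, the ordering in question looks roughly like the sketch below.  This is not the actual patch: the function name is made up, most error handling is trimmed, and it assumes s_orphan_lock is still the single mutex it was at the time.

#include "ext4.h"
#include "ext4_jbd2.h"

/* Sketch: do the journal preparation before taking the global lock,
 * so that only the orphan-list manipulation itself is serialized
 * on s_orphan_lock. */
static int ext4_orphan_add_sketch(handle_t *handle, struct inode *inode)
{
	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
	struct ext4_iloc iloc;
	int err;

	/* Preliminary work: no global orphan lock needed here. */
	err = ext4_reserve_inode_write(handle, inode, &iloc);
	if (err)
		return err;
	err = ext4_journal_get_write_access(handle, sbi->s_sbh);
	if (err) {
		brelse(iloc.bh);
		return err;
	}

	/* Only the actual orphan-list update runs under the global lock. */
	mutex_lock(&sbi->s_orphan_lock);
	/* ... link the inode into the on-disk chain and in-memory list ... */
	mutex_unlock(&sbi->s_orphan_lock);

	return ext4_mark_iloc_dirty(handle, inode, &iloc);
}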

> However orphan operations on a single inode are already serialized by
> i_mutex so there's no need to introduce any new hashed lock. Just add
> assertion mutex_locked(&inode->i_mutex) to ext4_orphan_add() and
> ext4_orphan_del() - you might need to lock i_mutex around the code in
> fs/ext4/migrate.c and in ext4_tmpfile() but that should be fine.
> 

As you pointed out, it sounds like there may still be some code paths that do not acquire i_mutex.  It would probably be better to acquire i_mutex in those paths if it is not already held.
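
For reference, the assertion you describe would presumably look something like this (a sketch only; i_mutex was the per-inode lock in kernels of that era and has since been replaced by i_rwsem):

#include <linux/fs.h>
#include <linux/bug.h>
#include <linux/mutex.h>

/* Sketch: document the per-inode locking assumption at the top of
 * ext4_orphan_add()/ext4_orphan_del().  Callers that do not already
 * hold i_mutex (e.g. the migrate and tmpfile paths mentioned above)
 * would need to take it first. */
static inline void ext4_assert_orphan_locking(struct inode *inode)
{
	WARN_ON_ONCE(!mutex_is_locked(&inode->i_mutex));
}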

> Also I'm somewhat failing to see what the spinlock s_orphan_lock brings us.
> I'd guess that the mutex could still protect also the in-memory list and we
> have to grab it in all the relevant cases anyway (in some rare cases we
> could avoid taking the mutex and spinlock would be enough but these
> shouldn't be performance relevant). Please correct me if I'm wrong here, I
> didn't look at the code for that long.
> 

Yes, you are correct.  In the error case (or when a previous error has occurred), we only need to update the in-memory orphan list, for which a spinlock seems to be a better serialization mechanism. Using a separate spinlock would also allow simultaneous operation of the error and non-error cases.  As you said, if this is a very rare case, it should not make much difference.  I'll rerun and compare the benchmark results using a single mutex.
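
To make the split concrete, the idea is roughly the following.  The field name s_orphan_list_lock is hypothetical and used here only to separate the two cases; it is not an actual ext4 field.

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>

/* Hypothetical locking split:
 *   s_orphan_lock      - mutex, serializes the on-disk orphan chain update
 *   s_orphan_list_lock - spinlock, protects only the in-memory list
 */
struct orphan_state_sketch {
	struct mutex		s_orphan_lock;
	spinlock_t		s_orphan_list_lock;
	struct list_head	s_orphan;
};

/* Error path: the on-disk update is skipped, so only the in-memory
 * list needs protecting and a spinlock suffices. */
static void orphan_add_when_errored(struct orphan_state_sketch *s,
				    struct list_head *i_orphan)
{
	spin_lock(&s->s_orphan_list_lock);
	list_add(i_orphan, &s->s_orphan);
	spin_unlock(&s->s_orphan_list_lock);
}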

> Finally (and I somewhat miss this in your patch), I'd think we might need
> to use list_empty_careful() when checking inode's orphan list without
> global orphan list lock.
> 
> 								Honza
> 

Since we already serialize orphan operations on a single inode, the only race condition is an orphan operation on another inode moving this inode within the orphan list.  In that case head->next should not equal head.  But yes, it is probably safer to use list_empty_careful().
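
For completeness, the unlocked check would then read roughly as below (a sketch only, wrapped in a made-up helper):

#include <linux/list.h>
#include "ext4.h"

/* Sketch: unlocked test before taking the global orphan lock.
 * list_empty_careful() checks both head->next and head->prev, so a
 * concurrent re-linking elsewhere in the list is less likely to be
 * misread than with plain list_empty(), which looks only at head->next. */
static bool ext4_inode_not_orphaned(struct inode *inode)
{
	return list_empty_careful(&EXT4_I(inode)->i_orphan);
}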

Thanks,
Mak.
