Message-ID: <87eg3tclbd.fsf@x220.int.ebiederm.org>
Date:   Thu, 06 Oct 2016 14:46:30 -0500
From:   ebiederm@...ssion.com (Eric W. Biederman)
To:     Andrei Vagin <avagin@...nvz.org>
Cc:     Alexander Viro <viro@...iv.linux.org.uk>,
        containers@...ts.linux-foundation.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mount: dont execute propagate_umount() many times for same mounts

Andrei Vagin <avagin@...nvz.org> writes:

> The reason for this optimization is that umount() can hold namespace_sem
> for a long time. This semaphore is global, so it affects all users.
> Recently Eric W. Biederman added a per mount namespace limit on the
> number of mounts. The default number of mounts allowed per mount
> namespace is 100,000. Currently this limit still allows constructing a
> tree which takes hours to unmount.

I am going to take a hard look at this, as this problem sounds very
unfortunate.  My memory of going through this code before strongly
suggests that changing the last list_for_each_entry to
list_for_each_entry_reverse is going to affect the correctness of this
change.

The order of traversal is important if there are several things mounted
one on the other that are all being unmounted.
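
To make the ordering concrete, here is a minimal userspace sketch (not
the kernel code; struct fake_mount and the element names are made up for
illustration) showing that list_for_each_entry() walks the list front to
back while list_for_each_entry_reverse() walks it back to front, which is
what the correctness question hinges on:

/*
 * Minimal userspace sketch, not the kernel implementation.  Only the
 * macro semantics mirror <linux/list.h>; struct fake_mount is a
 * hypothetical stand-in for struct mount.
 */
#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Walk the list from the first element to the last. */
#define list_for_each_entry(pos, head, member)				\
	for (pos = container_of((head)->next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = container_of(pos->member.next, typeof(*pos), member))

/* Walk the list from the last element to the first. */
#define list_for_each_entry_reverse(pos, head, member)			\
	for (pos = container_of((head)->prev, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = container_of(pos->member.prev, typeof(*pos), member))

/* Hypothetical stand-in; "name" marks where in the stack a mount sits. */
struct fake_mount {
	const char *name;
	struct list_head mnt_list;
};

int main(void)
{
	struct list_head list = LIST_HEAD_INIT(list);
	struct fake_mount base  = { .name = "base"  };
	struct fake_mount upper = { .name = "upper" };
	struct fake_mount top   = { .name = "top"   };
	struct fake_mount *mnt;

	/* Pretend the umount list was built base -> upper -> top. */
	list_add_tail(&base.mnt_list,  &list);
	list_add_tail(&upper.mnt_list, &list);
	list_add_tail(&top.mnt_list,   &list);

	printf("forward: ");
	list_for_each_entry(mnt, &list, mnt_list)
		printf("%s ", mnt->name);		/* base upper top */

	printf("\nreverse: ");
	list_for_each_entry_reverse(mnt, &list, mnt_list)
		printf("%s ", mnt->name);		/* top upper base */
	printf("\n");
	return 0;
}

Built with gcc (typeof is a GNU extension), this prints "forward: base
upper top" and "reverse: top upper base"; whichever end of mnt_list the
stacked mounts sit at determines whether the loop sees them top-down or
bottom-up.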

Now perhaps your other changes have addressed that, but I haven't looked
closely enough to see yet.


> @@ -454,7 +473,7 @@ int propagate_umount(struct list_head *list)
>  	list_for_each_entry_reverse(mnt, list, mnt_list)
>  		mark_umount_candidates(mnt);
>  
> -	list_for_each_entry(mnt, list, mnt_list)
> +	list_for_each_entry_reverse(mnt, list, mnt_list)
>  		__propagate_umount(mnt);
>  	return 0;
>  }

Eric
