lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  PHC 
Open Source and information security mailing list archives
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Date:	Wed, 19 Mar 2014 21:58:14 -0700
From:	Linus Torvalds <>
To:	Al Viro <>
Cc:	Max Kellermann <>,,
	Linux Kernel Mailing List <>
Subject: Re: [PATCH] fs/namespace: don't clobber while umounting [v2]

On Wed, Mar 19, 2014 at 9:21 PM, Al Viro <> wrote:
> Er...  I have, actually, right in the part you've snipped ;-)

Heh. That's what I get for just reading the patch, and skimming the explanation.

> I would prefer to deal with (1) by turning mnt_hash into hlist; the problem
> with that is __lookup_mnt_last().  That sucker is only called under
> mount_lock, so RCU issues do not play there, but it's there and it
> complicates things.  There might be a way to get rid of that thing for
> good, but that's more invasive than what I'd be happy with for backports.

Yeah. I see what you're saying. That said, if we expect the mnt_hash
queues to be short (and they really should be), that whole
__lookup_mnt_last() could just be

    struct mount *p, *result = NULL;

    hlist_for_each_entry(p, head, mnt_hash)
        if (&p->mnt_parent->mnt == mnt && p->mnt_mountpoint == dentry)
            result = p;

    return result;

which is certainly simple.

Sure, it always walks the whole list, but as far as I can tell the
callers aren't exactly performance-critical, and we're talking about a
hlist that should be just a couple of entries in size..

So if that's the _only_ thing holding back using hlists, I'd say we
should just do the above trivial conversion.

To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to
More majordomo info at
Please read the FAQ at

Powered by blists - more mailing lists