Date:   Mon, 6 Aug 2018 20:23:02 -0700 (PDT)
From:   Hugh Dickins <hughd@...gle.com>
To:     "zhaowuyun@...gtech.com" <zhaowuyun@...gtech.com>
cc:     Hugh Dickins <hughd@...gle.com>, akpm <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...nel.org>,
        mgorman <mgorman@...hsingularity.net>,
        minchan <minchan@...nel.org>, vinmenon <vinmenon@...eaurora.org>,
        hannes <hannes@...xchg.org>,
        "hillf.zj" <hillf.zj@...baba-inc.com>,
        linux-mm <linux-mm@...ck.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: disable preemption before swapcache_free

On Tue, 7 Aug 2018, zhaowuyun@...gtech.com wrote:
> 
> Thanks for affirming the modification of disabling preemption, and for
> pointing out its incompleteness: delete_from_swap_cache() needs the
> same protection.
> I'm curious: why not put swapcache_free(swap) under the protection of
> mapping->tree_lock?
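
(For context: the patch itself is not quoted in this message. Judging by
the subject line and the discussion above, it brackets the
swapcache_free() call in __remove_mapping() with a
preempt_disable()/preempt_enable() pair; a rough reconstruction of that
shape, not the actual patch, follows.)

	__delete_from_swap_cache(page);
	preempt_disable();	/* reconstruction: leave no preemption
				 * window between removing the page from
				 * swap cache and freeing its swap entry */
	spin_unlock_irqrestore(&mapping->tree_lock, flags);
	swapcache_free(swap);
	preempt_enable();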

That would violate the long-established lock ordering (see not-always-
kept-up-to-date comments at the head of mm/rmap.c). In particular,
swap_lock (and its more recent descendants, such as swap_info->lock)
can be held with interrupts enabled, whereas taking tree_lock (later
called i_pages lock) involves disabling interrupts. So: there would
be quite a lot of modifications required to do swapcache_free(swap)
under mapping->tree_lock.
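
To make the inversion concrete, a minimal sketch (abbreviated, not
actual kernel code) of what freeing the swap entry under tree_lock
would look like:

	/*
	 * Established order (see mm/rmap.c): swap locks are taken
	 * first, with interrupts possibly still enabled;
	 * mapping->tree_lock is taken last, with interrupts disabled.
	 * This inverts it:
	 */
	spin_lock_irq(&mapping->tree_lock);	/* irqs now off */
	__delete_from_swap_cache(page);
	swapcache_free(swap);	/* takes swap_info->lock *inside*
				 * tree_lock: the reverse of every other
				 * path, so every swap-lock taker would
				 * need an irq-safe variant to avoid
				 * deadlock */
	spin_unlock_irq(&mapping->tree_lock);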

Generally easier would be to take tree_lock under swap lock: that fits
the established lock ordering, and is already done in just a few places
- or am I thinking of free_swap_and_cache() in the old days before
find_get_page() did lockless lookup? But you didn't suggest that way,
because it's more awkward in the __remove_mapping() case: I expect
that could be worked around with an initial PageSwapCache check,
taking swap locks there first (not inside swapcache_free()) -
__remove_mapping()'s BUG_ON(!PageLocked) implies that won't be racy.
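
A rough sketch of that alternative in __remove_mapping() (hypothetical
and untested; lock_swap_info_for() is a made-up helper standing in for
whichever swap lock would actually have to be taken):

	struct swap_info_struct *si = NULL;

	BUG_ON(!PageLocked(page));	/* PG_swapcache stable under
					 * page lock */
	if (PageSwapCache(page))
		si = lock_swap_info_for(page);	/* take the swap lock
						 * *before* tree_lock,
						 * matching the
						 * established order */
	spin_lock_irq(&mapping->tree_lock);
	...
	/* __delete_from_swap_cache() and swapcache_free() can then run
	 * with both locks held, tree_lock nested inside the swap lock */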

But either way round, why? What would be the advantage in doing so?
A more conventional nesting of locks, easier to describe and understand,
yes. But from a performance point of view, thinking of lock contention,
nothing but disadvantage. And don't forget the get_swap_page() end:
there it would be harder to deal with both locks together (at least
in the shmem case).

Hugh
