Message-ID: <33f2f836-98a0-b593-1d43-b289d645db5@google.com>
Date:   Wed, 30 Nov 2022 16:13:23 -0800 (PST)
From:   Hugh Dickins <hughd@...gle.com>
To:     Johannes Weiner <hannes@...xchg.org>
cc:     Hugh Dickins <hughd@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Michal Hocko <mhocko@...e.com>, linux-mm@...ck.org,
        cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: remove lock_page_memcg() from rmap

On Wed, 30 Nov 2022, Johannes Weiner wrote:
> 
> Hm, I think the below should work for swap pages. Do you see anything
> obviously wrong with it, or scenarios I haven't considered?
> 

I think you're overcomplicating it, with the __swap_count(ent) business,
and consequent unnecessarily detailed comments on the serialization.

The page/folio lock prevents a !page_mapped(page) page from becoming
page_mapped(page), whether it's in swap cache or in file cache; it does not
stop the sharing count going further up, or even down to 0, but we just
don't need to worry about that sharing count - the MC_TARGET_PAGE case does
not reject pages with mapcount > 1, so why complicate the swap or file case
in that way?

(Yes, it can be argued that all such sharing should be rejected; but we
didn't come here to argue improvements to memcg charge moving semantics:
just to minimize its effect on rmap, before it is fully deprecated.)

Or am I missing the point of why you add that complication?

> @@ -5637,6 +5645,46 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,

Don't forget to trylock the page in the device_private case before this.

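Something like this in that branch, I presume (untested sketch, from memory
of the current device_private handling there):

	if (is_device_private_entry(ent)) {
		page = pfn_swap_entry_to_page(ent);
		if (!get_page_unless_zero(page))
			return NULL;
		/* Lock now, so page_mapped() stays stable for the move */
		if (!trylock_page(page)) {
			put_page(page);
			return NULL;
		}
		return page;
	}
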
>          * we call find_get_page() with swapper_space directly.
>          */
>         page = find_get_page(swap_address_space(ent), swp_offset(ent));
> +
> +       /*
> +        * Don't move shared charges. This isn't just for saner move
> +        * semantics, it also ensures that page_mapped() is stable for
> +        * the accounting in mem_cgroup_mapcount().

mem_cgroup_mapcount()??

> +        *
> +        * We have to serialize against the following paths: fork
> +        * (which may copy a page map or a swap pte), fault (which may
> +        * change a swap pte into a page map), unmap (which may cause
> +        * a page map or a swap pte to disappear), and reclaim (which
> +        * may change a page map into a swap pte).
> +        *
> +        * - Without swapcache, we only want to move the charge if
> +        *   there are no other swap ptes. With the pte lock, the
> +        *   swapcount is stable against all of the above scenarios
> +        *   when it's 1 (our pte), which is the case we care about.
> +        *
> +        * - When there is a page in swapcache, we only want to move
> +        *   charges when neither the page nor the swap entry are
> +        *   mapped elsewhere. The pte lock prevents our pte from
> +        *   being forked or unmapped. The page lock will stop faults
> +        *   against, and reclaim of, the swapcache page. So if the
> +        *   page isn't mapped, and the swap count is 1 (our pte), the
> +        *   test results are stable and the charge is exclusive.
> +        */
> +       if (!page && __swap_count(ent) != 1)
> +               return NULL;
> +
> +       if (page) {
> +               if (!trylock_page(page)) {
> +                       put_page(page);
> +                       return NULL;
> +               }
> +               if (page_mapped(page) || __swap_count(ent) != 1) {
> +                       unlock_page(page);
> +                       put_page(page);
> +                       return NULL;
> +               }
> +       }
> +
>         entry->val = ent.val;
>  
>         return page;

Looks right, without the __swap_count() additions and swap count comments.
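
That is, something like this (untested sketch, reusing your names):

	page = find_get_page(swap_address_space(ent), swp_offset(ent));

	if (page) {
		/*
		 * Page lock stops a !page_mapped() page from becoming
		 * page_mapped(), which the move accounting relies on.
		 */
		if (!trylock_page(page)) {
			put_page(page);
			return NULL;
		}
		if (page_mapped(page)) {
			unlock_page(page);
			put_page(page);
			return NULL;
		}
	}

	entry->val = ent.val;

	return page;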

And similar code in mc_handle_file_pte() - or are you saying that only
swap should be handled this way?  I would disagree.
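
For the file case it would just be the same shape (untested sketch), once
mc_handle_file_pte() holds its reference on the page:

	if (page) {
		if (!trylock_page(page)) {
			put_page(page);
			return NULL;
		}
		if (page_mapped(page)) {
			unlock_page(page);
			put_page(page);
			return NULL;
		}
		/* hand the locked page back to get_mctgt_type() */
	}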

And matching trylock in mc_handle_present_pte() (and get_mctgt_type_thp()),
instead of in mem_cgroup_move_account().
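
e.g. at the tail of mc_handle_present_pte(), roughly (untested sketch;
get_mctgt_type_thp() would mirror it for the pmd-mapped huge page):

	if (!get_page_unless_zero(page))
		return NULL;
	/* caller will need to unlock once the charge move is done */
	if (!trylock_page(page)) {
		put_page(page);
		return NULL;
	}
	return page;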

I haven't checked to see where the page then needs to be unlocked -
probably in some new places.

And I don't know what will be best for the preliminary precharge pass:
it doesn't really want the page lock at all, but it may be unnecessary
complication to avoid taking and then unlocking it in that pass.

Hugh
