Date:   Fri, 11 Feb 2022 19:07:15 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Hugh Dickins <hughd@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Michal Hocko <mhocko@...e.com>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Matthew Wilcox <willy@...radead.org>,
        David Hildenbrand <david@...hat.com>,
        Alistair Popple <apopple@...dia.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Rik van Riel <riel@...riel.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Yu Zhao <yuzhao@...gle.com>, Greg Thelen <gthelen@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 06/13] mm/munlock: maintain page->mlock_count while
 unevictable

On 2/6/22 22:40, Hugh Dickins wrote:
> @@ -72,19 +91,40 @@ void mlock_page(struct page *page)
>   */
>  void munlock_page(struct page *page)
>  {
> +	struct lruvec *lruvec;
> +	int nr_pages = thp_nr_pages(page);
> +
>  	VM_BUG_ON_PAGE(PageTail(page), page);
>  
> +	lock_page_memcg(page);

Hm, this (and the unlock_page_memcg() below) didn't catch my attention until
I saw that patch 10/13 removes it again. AFAICS it also wasn't present in the
code removed by patch 1. Am I missing something, or was it not necessary to
add it in the first place?

> +	lruvec = folio_lruvec_lock_irq(page_folio(page));
> +	if (PageLRU(page) && PageUnevictable(page)) {
> +		/* Then mlock_count is maintained, but might undercount */
> +		if (page->mlock_count)
> +			page->mlock_count--;
> +		if (page->mlock_count)
> +			goto out;
> +	}
> +	/* else assume that was the last mlock: reclaim will fix it if not */
> +
>  	if (TestClearPageMlocked(page)) {
> -		int nr_pages = thp_nr_pages(page);
> -
> -		mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> -		if (!isolate_lru_page(page)) {
> -			putback_lru_page(page);
> -			count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
> -		} else if (PageUnevictable(page)) {
> -			count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
> -		}
> +		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> +		if (PageLRU(page) || !PageUnevictable(page))
> +			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
> +		else
> +			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
> +	}
> +
> +	/* page_evictable() has to be checked *after* clearing Mlocked */
> +	if (PageLRU(page) && PageUnevictable(page) && page_evictable(page)) {
> +		del_page_from_lru_list(page, lruvec);
> +		ClearPageUnevictable(page);
> +		add_page_to_lru_list(page, lruvec);
> +		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
>  	}
> +out:
> +	unlock_page_lruvec_irq(lruvec);
> +	unlock_page_memcg(page);
>  }
>  
>  /*
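
To check my own reading of the mlock_count accounting above, here is a minimal
userspace model (my sketch, not code from the patch; fake_page and
fake_munlock are made-up names, and the LRU/memcg locking is omitted): while
the page sits on the unevictable LRU each munlock decrements the count, and
only when it reaches zero do we fall through to clear Mlocked and "rescue" the
page.

/* Minimal userspace model of the mlock_count logic above (not kernel code). */
#include <assert.h>
#include <stdbool.h>

struct fake_page {			/* stand-in for the struct page fields used here */
	bool lru, unevictable, mlocked;
	int mlock_count;		/* meaningful only while lru && unevictable */
};

/* Returns true when this call cleared the Mlocked flag (the last munlock). */
static bool fake_munlock(struct fake_page *p)
{
	if (p->lru && p->unevictable) {
		/* mlock_count is maintained, but might undercount */
		if (p->mlock_count)
			p->mlock_count--;
		if (p->mlock_count)
			return false;	/* still mlocked by another VMA */
	}
	/* else assume that was the last mlock: clear the flag and rescue */
	if (p->mlocked) {
		p->mlocked = false;
		p->unevictable = false;
	}
	return true;
}

int main(void)
{
	struct fake_page p = { .lru = true, .unevictable = true,
			       .mlocked = true, .mlock_count = 2 };

	assert(!fake_munlock(&p));	/* 2 -> 1: page stays Mlocked */
	assert(fake_munlock(&p));	/* 1 -> 0: Mlocked cleared, page rescued */
	assert(!p.mlocked && !p.unevictable);
	return 0;
}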
