Open Source and information security mailing list archives
Date:   Mon, 11 May 2020 14:58:17 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Shakeel Butt <shakeelb@...gle.com>
Cc:     Mel Gorman <mgorman@...e.de>, Johannes Weiner <hannes@...xchg.org>,
        Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...nel.org>,
        Minchan Kim <minchan@...nel.org>,
        Rik van Riel <riel@...riel.com>,
        Linux MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: fix LRU balancing effect of new transparent huge
 pages

On Mon, 11 May 2020 14:38:23 -0700 Shakeel Butt <shakeelb@...gle.com> wrote:

> On Mon, May 11, 2020 at 2:11 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
> >
> > On Sat,  9 May 2020 07:19:46 -0700 Shakeel Butt <shakeelb@...gle.com> wrote:
> >
> > > Currently, THP are counted as single pages until they are split right
> > > before being swapped out. However, at that point the VM is already in
> > > the middle of reclaim, and adjusting the LRU balance then is useless.
> > >
> > > Always account THP by the number of basepages, and remove the fixup
> > > from the splitting path.
> >
> > Confused.  What kernel is this applicable to?
> 
> It is still applicable to the latest Linux kernel.

The patch has

> @@ -288,7 +288,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
>  
>  		__count_vm_events(PGACTIVATE, nr_pages);
>  		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
> -		update_page_reclaim_stat(lruvec, file, 1);
> +		update_page_reclaim_stat(lruvec, file, 1, nr_pages);
>  	}
>  }

but current mainline is quite different:

static void __activate_page(struct page *page, struct lruvec *lruvec,
			    void *arg)
{
	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
		int file = page_is_file_lru(page);
		int lru = page_lru_base_type(page);

		del_page_from_lru_list(page, lruvec, lru);
		SetPageActive(page);
		lru += LRU_ACTIVE;
		add_page_to_lru_list(page, lruvec, lru);
		trace_mm_lru_activate(page);

		__count_vm_event(PGACTIVATE);
		update_page_reclaim_stat(lruvec, file, 1);
	}
}

q:/usr/src/linux-5.7-rc5> patch -p1 --dry-run < ~/x.txt
checking file mm/swap.c
Hunk #2 FAILED at 288.
Hunk #3 FAILED at 546.
Hunk #4 FAILED at 564.
Hunk #5 FAILED at 590.
Hunk #6 succeeded at 890 (offset -9 lines).
Hunk #7 succeeded at 915 (offset -9 lines).
Hunk #8 succeeded at 958 with fuzz 2 (offset -10 lines).
4 out of 8 hunks FAILED
