Message-ID: <20200415091948.GH810380@xps-13>
Date:   Wed, 15 Apr 2020 11:19:48 +0200
From:   Andrea Righi <andrea.righi@...onical.com>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Minchan Kim <minchan@...nel.org>,
        Anchal Agarwal <anchalag@...zon.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: swap: use fixed-size readahead during swapoff

On Wed, Apr 15, 2020 at 03:44:08PM +0800, Huang, Ying wrote:
> Andrea Righi <andrea.righi@...onical.com> writes:
> 
> >  mm/swapfile.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 9fd47e6f7a86..cb9eb517178d 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -1944,7 +1944,9 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> >  		vmf.pmd = pmd;
> >  		last_ra = atomic_read(&last_readahead_pages);
> >  		atomic_set(&swapin_readahead_hits, last_ra);
> 
> You need to remove the above 2 lines firstly.

Meh... too much enthusiasm, and I definitely need more coffee this
morning. Here's the correct patch applied:

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5871a2aa86a5..8b38441b66fa 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1940,7 +1940,9 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		vmf.vma = vma;
 		vmf.address = addr;
 		vmf.pmd = pmd;
-		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, &vmf);
+		page = lookup_swap_cache(entry, vma, addr);
+		if (!page)
+			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, &vmf);
 		if (!page) {
 			if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
 				goto try_next;

And here are the correct results:

r::swapin_nr_pages(unsigned long offset):unsigned long:$retval
	COUNT      EVENT
	1618       $retval = 1
	4960       $retval = 2
	41315      $retval = 4
	103521     $retval = 8

swapoff time: 12.19s

So, not as good as the fixed-size readahead, but it's definitely an
improvement, considering that the swapoff time is ~22s without this
patch applied.

I think this change can be a simple and reasonable compromise.

Thanks again and sorry for the noise,
-Andrea
