Message-Id: <20170111215242.53cb8fab64beec599dcea847@gmail.com>
Date:   Wed, 11 Jan 2017 21:52:42 +0100
From:   Vitaly Wool <vitalywool@...il.com>
To:     Vitaly Wool <vitalywool@...il.com>
Cc:     Dan Streetman <ddstreet@...e.org>, Linux-MM <linux-mm@...ck.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH/RESEND v2 3/5] z3fold: extend compaction function

On Wed, 11 Jan 2017 17:43:13 +0100
Vitaly Wool <vitalywool@...il.com> wrote:

> On Wed, Jan 11, 2017 at 5:28 PM, Dan Streetman <ddstreet@...e.org> wrote:
> > On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool <vitalywool@...il.com> wrote:
> >> z3fold_compact_page() currently only handles the situation when
> >> there's a single middle chunk within the z3fold page. However it
> >> may be worth it to move middle chunk closer to either first or
> >> last chunk, whichever is there, if the gap between them is big
> >> enough.
> >>
> >> This patch adds the relevant code, using BIG_CHUNK_GAP define as
> >> a threshold for middle chunk to be worth moving.
> >>
> >> Signed-off-by: Vitaly Wool <vitalywool@...il.com>
> >> ---
> >>  mm/z3fold.c | 26 +++++++++++++++++++++++++-
> >>  1 file changed, 25 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/mm/z3fold.c b/mm/z3fold.c
> >> index 98ab01f..fca3310 100644
> >> --- a/mm/z3fold.c
> >> +++ b/mm/z3fold.c
> >> @@ -268,6 +268,7 @@ static inline void *mchunk_memmove(struct z3fold_header *zhdr,
> >>                        zhdr->middle_chunks << CHUNK_SHIFT);
> >>  }
> >>
> >> +#define BIG_CHUNK_GAP  3
> >>  /* Has to be called with lock held */
> >>  static int z3fold_compact_page(struct z3fold_header *zhdr)
> >>  {
> >> @@ -286,8 +287,31 @@ static int z3fold_compact_page(struct z3fold_header *zhdr)
> >>                 zhdr->middle_chunks = 0;
> >>                 zhdr->start_middle = 0;
> >>                 zhdr->first_num++;
> >> +               return 1;
> >>         }
> >> -       return 1;
> >> +
> >> +       /*
> >> +        * moving data is expensive, so let's only do that if
> >> +        * there's substantial gain (at least BIG_CHUNK_GAP chunks)
> >> +        */
> >> +       if (zhdr->first_chunks != 0 && zhdr->last_chunks == 0 &&
> >> +           zhdr->start_middle - (zhdr->first_chunks + ZHDR_CHUNKS) >=
> >> +                       BIG_CHUNK_GAP) {
> >> +               mchunk_memmove(zhdr, zhdr->first_chunks + 1);
> >> +               zhdr->start_middle = zhdr->first_chunks + 1;
> >
> > this should be first_chunks + ZHDR_CHUNKS, not + 1.
> >
> >> +               return 1;
> >> +       } else if (zhdr->last_chunks != 0 && zhdr->first_chunks == 0 &&
> >> +                  TOTAL_CHUNKS - (zhdr->last_chunks + zhdr->start_middle
> >> +                                       + zhdr->middle_chunks) >=
> >> +                       BIG_CHUNK_GAP) {
> >> +               unsigned short new_start = NCHUNKS - zhdr->last_chunks -
> >
> > this should be TOTAL_CHUNKS, not NCHUNKS.
> 
> Right :/

So here we go:


z3fold_compact_page() currently only handles the situation when
there's a single middle chunk within the z3fold page. However, it
may be worth moving the middle chunk closer to either the first or
the last chunk, whichever is present, if the gap between them is
big enough.

This patch adds the relevant code, using the BIG_CHUNK_GAP define
as the threshold for the middle chunk to be worth moving.

Signed-off-by: Vitaly Wool <vitalywool@...il.com>
---
 mm/z3fold.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 98ab01f..fca3310 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -268,6 +268,7 @@ static inline void *mchunk_memmove(struct z3fold_header *zhdr,
 		       zhdr->middle_chunks << CHUNK_SHIFT);
 }
 
+#define BIG_CHUNK_GAP	3
 /* Has to be called with lock held */
 static int z3fold_compact_page(struct z3fold_header *zhdr)
 {
@@ -286,8 +287,31 @@ static int z3fold_compact_page(struct z3fold_header *zhdr)
 		zhdr->middle_chunks = 0;
 		zhdr->start_middle = 0;
 		zhdr->first_num++;
+		return 1;
 	}
-	return 1;
+
+	/*
+	 * moving data is expensive, so let's only do that if
+	 * there's substantial gain (at least BIG_CHUNK_GAP chunks)
+	 */
+	if (zhdr->first_chunks != 0 && zhdr->last_chunks == 0 &&
+	    zhdr->start_middle - (zhdr->first_chunks + ZHDR_CHUNKS) >=
+			BIG_CHUNK_GAP) {
+		mchunk_memmove(zhdr, zhdr->first_chunks + ZHDR_CHUNKS);
+		zhdr->start_middle = zhdr->first_chunks + ZHDR_CHUNKS;
+		return 1;
+	} else if (zhdr->last_chunks != 0 && zhdr->first_chunks == 0 &&
+		   TOTAL_CHUNKS - (zhdr->last_chunks + zhdr->start_middle
+					+ zhdr->middle_chunks) >=
+			BIG_CHUNK_GAP) {
+		unsigned short new_start = TOTAL_CHUNKS - zhdr->last_chunks -
+			zhdr->middle_chunks;
+		mchunk_memmove(zhdr, new_start);
+		zhdr->start_middle = new_start;
+		return 1;
+	}
+
+	return 0;
 }
 
 /**
-- 
2.4.2
