Date:   Mon, 23 Jan 2017 13:47:20 +0900
From:   Minchan Kim <minchan@...nel.org>
To:     zhouxianrong <zhouxianrong@...wei.com>
CC:     Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        <linux-kernel@...r.kernel.org>, <Mi.Sophia.Wang@...wei.com>,
        <zhouxiyu@...wei.com>, <weidu.du@...wei.com>,
        <zhangshiming5@...wei.com>, <won.ho.park@...wei.com>
Subject: Re: [PATCH v3] zram: extend zero pages to same element pages

Hello,

When I looked at the first patch, I wanted to use an incrementing loop
but didn't mention it to you because that small piece of code doesn't
hurt readability for me, so I preferred to keep the author's code rather
than point out something trivial that is just a matter of preference.

Beyond readability, I suspected it might hurt performance and talked
with Namhyung, but we couldn't find anything showing that a decrementing
loop is worse than an incrementing one. On the contrary, many articles
say a decrementing loop is faster, as zhouxianrong mentioned, although I
don't think it makes a meaningful difference.
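For concreteness, the two loop shapes being compared look roughly like
this (a plain C sketch with hypothetical fill_up/fill_down helpers, not
the patch code itself):

static void fill_up(unsigned long *buf, unsigned long val, int n)
{
	int i;

	/* incrementing: the exit test compares i against a bound */
	for (i = 0; i < n; i++)
		buf[i] = val;
}

static void fill_down(unsigned long *buf, unsigned long val, int n)
{
	int i;

	/* decrementing: the exit test compares i against zero; on many
	 * architectures the decrement already sets the flags for that
	 * test, which is the usual argument for this form */
	for (i = n - 1; i >= 0; i--)
		buf[i] = val;
}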

Joonsoo, why do you think the incrementing loop is faster?
zhouxianrong, why do you think a decrementing loop causes a cache problem?

I'm okay either way. I just want to know why each of you thinks so.

Thanks.

On Mon, Jan 23, 2017 at 11:46:27AM +0800, zhouxianrong wrote:
> The purpose of the reverse iteration is that I want the compiler to
> fuse the decrement and the compare-against-zero into a single
> instruction that updates the CPU condition flags.
> 
> Yes, this may cause a cache problem, so it needs to be reviewed. Thanks.
> 
> On 2017/1/23 11:02, Joonsoo Kim wrote:
> >On Mon, Jan 23, 2017 at 10:55:23AM +0900, Minchan Kim wrote:
> >>From: zhouxianrong <zhouxianrong@...wei.com>
> >>
> >>The idea is to extend zram's zero pages to same-element pages
> >>without doing extra calculations. A zero page is a special case of a
> >>same-element page whose element is zero.
> >>
> >>1. The test was done under Android 7.0.
> >>2. Many applications were started repeatedly in a loop.
> >>3. Zero pages, same pages (non-zero element), and total pages were
> >>   sampled in the function page_zero_filled.
> >>
> >>The results are listed below:
> >>
> >>ZERO   SAME   TOTAL
> >>36214  17842  598196
> >>
> >>         ZERO/TOTAL   SAME/TOTAL   (ZERO+SAME)/TOTAL  ZERO/SAME
> >>AVERAGE  0.060631909  0.024990816  0.085622726        2.663825038
> >>STDEV    0.00674612   0.005887625  0.009707034        2.115881328
> >>MAX      0.069698422  0.030046087  0.094975336        7.56043956
> >>MIN      0.03959586   0.007332205  0.056055193        1.928985507
> >>
> >>From the above data, the benefit is about 2.5%, and up to 3%, of the
> >>total swapped-out pages.
> >>
> >>The drawback of this patch is that when we recover a page from a
> >>non-zero element, the operation is inefficient for partial reads.
> >>
> >>This patch extends zero_page to same_page, so any user who has been
> >>monitoring zero_pages may be surprised that the number increases, but
> >>I believe it is harmless.
> >>
> >>Signed-off-by: zhouxianrong <zhouxianrong@...wei.com>
> >>Signed-off-by: Minchan Kim <minchan@...nel.org>
> >>---
> >>I removed zram_set_page_partial because I think the block layer
> >>works with an IO size unit that is aligned to at least
> >>sizeof(unsigned long), maybe SECTOR or PAGE size. Then we can merge
> >>zram_set_page and zram_set_page_partial.
> >>
> >> Documentation/blockdev/zram.txt |  6 ++--
> >> drivers/block/zram/zram_drv.c   | 77 ++++++++++++++++++++++++++++-------------
> >> drivers/block/zram/zram_drv.h   |  9 +++--
> >> 3 files changed, 61 insertions(+), 31 deletions(-)
> >>
> >>diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> >>index 1c0c08d..4fced8a 100644
> >>--- a/Documentation/blockdev/zram.txt
> >>+++ b/Documentation/blockdev/zram.txt
> >>@@ -201,8 +201,8 @@ File /sys/block/zram<id>/mm_stat
> >> The stat file represents device's mm statistics. It consists of a single
> >> line of text and contains the following stats separated by whitespace:
> >>  orig_data_size   uncompressed size of data stored in this disk.
> >>-                  This excludes zero-filled pages (zero_pages) since no
> >>-                  memory is allocated for them.
> >>+		  This excludes same-element-filled pages (same_pages) since
> >>+		  no memory is allocated for them.
> >>                   Unit: bytes
> >>  compr_data_size  compressed size of data stored in this disk
> >>  mem_used_total   the amount of memory allocated for this disk. This
> >>@@ -214,7 +214,7 @@ The stat file represents device's mm statistics. It consists of a single
> >>                   the compressed data
> >>  mem_used_max     the maximum amount of memory zram have consumed to
> >>                   store the data
> >>- zero_pages       the number of zero filled pages written to this disk.
> >>+ same_pages       the number of same element filled pages written to this disk.
> >>                   No memory is allocated for such pages.
> >>  pages_compacted  the number of pages freed during compaction
> >>
> >>diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> >>index 85737b6..46da1c4 100644
> >>--- a/drivers/block/zram/zram_drv.c
> >>+++ b/drivers/block/zram/zram_drv.c
> >>@@ -74,6 +74,17 @@ static void zram_clear_flag(struct zram_meta *meta, u32 index,
> >> 	meta->table[index].value &= ~BIT(flag);
> >> }
> >>
> >>+static inline void zram_set_element(struct zram_meta *meta, u32 index,
> >>+			unsigned long element)
> >>+{
> >>+	meta->table[index].element = element;
> >>+}
> >>+
> >>+static inline void zram_clear_element(struct zram_meta *meta, u32 index)
> >>+{
> >>+	meta->table[index].element = 0;
> >>+}
> >>+
> >> static size_t zram_get_obj_size(struct zram_meta *meta, u32 index)
> >> {
> >> 	return meta->table[index].value & (BIT(ZRAM_FLAG_SHIFT) - 1);
> >>@@ -146,31 +157,43 @@ static inline void update_used_max(struct zram *zram,
> >> 	} while (old_max != cur_max);
> >> }
> >>
> >>-static bool page_zero_filled(void *ptr)
> >>+static inline void zram_fill_page(char *ptr, unsigned long value)
> >>+{
> >>+	int i;
> >>+	unsigned long *page = (unsigned long *)ptr;
> >>+
> >>+	if (likely(value == 0)) {
> >>+		clear_page(ptr);
> >>+	} else {
> >>+		for (i = PAGE_SIZE / sizeof(unsigned long) - 1; i >= 0; i--)
> >>+			page[i] = value;
> >>+	}
> >>+}
> >
> >Hello,
> >
> >We don't need to iterate in reverse. It makes the code less
> >understandable, and it may have a negative impact on performance.
> >
> >>+
> >>+static bool page_same_filled(void *ptr, unsigned long *element)
> >> {
> >> 	unsigned int pos;
> >> 	unsigned long *page;
> >>
> >> 	page = (unsigned long *)ptr;
> >>
> >>-	for (pos = 0; pos != PAGE_SIZE / sizeof(*page); pos++) {
> >>-		if (page[pos])
> >>+	for (pos = PAGE_SIZE / sizeof(unsigned long) - 1; pos > 0; pos--) {
> >>+		if (page[pos] != page[pos - 1])
> >> 			return false;
> >> 	}
> >>
> >>+	*element = page[pos];
> >>+
> >
> >Ditto.
> >
> >Thanks.
> >
> >
> >.
> >
> 
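For readers following the thread, here is a minimal standalone sketch
(userspace C with hypothetical names, not the zram code itself) of the
same-element idea under discussion: a page is "same-filled" when every
unsigned long in it holds one value, so it can be stored as just that
value and reconstructed by refilling.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define BUF_SIZE 4096UL	/* stands in for PAGE_SIZE */

static bool buf_same_filled(const void *ptr, unsigned long *element)
{
	const unsigned long *buf = ptr;
	size_t i;

	/* forward walk, comparing every word with the first one */
	for (i = 1; i < BUF_SIZE / sizeof(*buf); i++) {
		if (buf[i] != buf[0])
			return false;
	}

	*element = buf[0];
	return true;
}

static void buf_fill(void *ptr, unsigned long element)
{
	unsigned long *buf = ptr;
	size_t i;

	if (element == 0) {
		memset(ptr, 0, BUF_SIZE);	/* the old zero-page path */
		return;
	}

	for (i = 0; i < BUF_SIZE / sizeof(*buf); i++)
		buf[i] = element;
}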
