Date:	Fri, 18 May 2012 08:50:51 +0200
From:	Michal Hocko <mhocko@...e.cz>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Johannes Weiner <hannes@...xchg.org>,
	Mel Gorman <mel@....ul.ie>, Minchan Kim <minchan@...nel.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH] mm: consider all swapped back pages in used-once logic

On Thu 17-05-12 13:23:24, Andrew Morton wrote:
> On Thu, 17 May 2012 14:10:49 +0200
> Michal Hocko <mhocko@...e.cz> wrote:
> 
> > > > This patch fixes a regression introduced by this commit for heavy shmem
> > > 
> > > A performance regression, specifically.
> > > 
> > > Are you able to quantify it?
> > 
> > The customer's workload is shmem backed database (80% of RAM) and
> > they are measuring transactions/s with an IO in the background (20%).
> > Transactions touch more or less random rows in the table.
> > The rate goes down drastically when we start swapping out memory.
> > 
> > Numbers are more descriptive (without the patch is 100%, with 5
> > representative runs)
> > Average rate	315.83%
> > Best rate	131.76%
> > Worst rate	641.25%
> > 
> > Standard deviation (calibrated to average) is ~4% while without the
> > patch we are at 62.82%. 
> > The big variance without the patch is caused by the excessive swapping
> > which doesn't occur with the patch applied.
> > 
> > * Worst run (100%) compared to a random run with the patch
> > pgpgin	pswpin	pswpout	pgmajfault
> > 1.58%	0.00%	0.01%	0.22%
> > 
> > Average size of the LRU lists:
> > nr_inactive_anon nr_active_anon nr_inactive_file nr_active_file
> > 52.91%           7234.72%       249.39%          126.64%
> > 
> > * Best run
> > pgpgin	pswpin	pswpout	pgmajfault
> > 3.37%	0.00%	0.11%	0.39%
> > 
> > nr_inactive_anon nr_active_anon nr_inactive_file nr_active_file
> > 49.85%           3868.74%       175.03%          121.27%
> 
> I turned the above into this soundbite:
> 
> : The customer's workload is shmem backed database (80% of RAM) and they are
> : measuring transactions/s with an IO in the background (20%).  Transactions
> : touch more or less random rows in the table.  Total runtime was
> : approximately tripled by commit 64574746 and this patch restores the
> : previous throughput levels.
> 
> Was that truthful?

Total runtime was the same for all the runs. It is the number of executed
transactions that was measured. I guess what you wrote should be more or
less equivalent, but it is not what I have numbers for.
How about:
"
The total number of transactions went down by a factor of 3 (in the worst
case) because of commit 64574746. This patch restores the previous numbers.
"

Thanks
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic
