Message-ID: <28c262360912101712g1c78396die769fe6a5cc3df82@mail.gmail.com>
Date:	Fri, 11 Dec 2009 10:12:57 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	cl@...ux-foundation.org,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	mingo@...e.hu
Subject: Re: [RFC mm][PATCH 5/5] counting lowmem rss per mm

On Thu, Dec 10, 2009 at 5:01 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@...fujitsu.com> wrote:
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
> Some cases of OOM kill are caused by memory shortage in the lowmem area; for
> example, ZONE_NORMAL can be exhausted on an x86-32/HIGHMEM kernel.
>
> Currently, the OOM killer has no lowmem usage information for processes and
> selects victim processes based on global memory usage information.
> In a bad case, this can cause a chain of kills of innocent processes without
> making progress: an oom-serial-killer.
>
> To make the OOM killer lowmem-aware, this patch adds counters for accounting
> lowmem usage per process. (Patches for the OOM killer itself are not
> included here.)
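>
> A minimal sketch of the idea (not the patch text itself; the field and
> helper names here are illustrative assumptions and may differ from the
> actual patch):
>
>   /* illustrative: a new per-mm counter, e.g. in struct mm_struct */
>   atomic_long_t lowmem_rss;  /* pages mapped from below ZONE_HIGHMEM */
>
>   /* true if the page lives in a lowmem zone (x86-32/HIGHMEM config) */
>   static inline bool page_is_lowmem(struct page *page)
>   {
>           return page_zonenum(page) < ZONE_HIGHMEM;
>   }
>
>   /* hooked at the places where anon/file rss is already accounted */
>   static inline void add_lowmem_rss(struct mm_struct *mm,
>                                     struct page *page, int val)
>   {
>           if (page_is_lowmem(page))
>                   atomic_long_add(val, &mm->lowmem_rss);
>   }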
>
> Adding the counter is easy, but one concern is the cost of the new counter.
>
> The following are the results of a micro-benchmark doing parallel page
> faults; a bigger page-fault count indicates better scalability. A sketch of
> what such a benchmark might look like is given below, after the results.
> (Measured in a USE_SPLIT_PTLOCKS environment.)
> [Before lowmem counter]
>  Performance counter stats for './multi-fault 2' (5 runs):
>
>       46997471  page-faults                ( +-   0.720% )
>     1004100076  cache-references           ( +-   0.734% )
>      180959964  cache-misses               ( +-   0.374% )
>  29263437363580464  bus-cycles                 ( +-   0.002% )
>
>   60.003315683  seconds time elapsed   ( +-   0.004% )
>
> 3.85 cache-misses per fault (180959964 cache-misses / 46997471 page-faults)
> [After lowmem counter]
>  Performance counter stats for './multi-fault 2' (5 runs):
>
>       45976947  page-faults                ( +-   0.405% )
>      992296954  cache-references           ( +-   0.860% )
>      183961537  cache-misses               ( +-   0.473% )
>  29261902069414016  bus-cycles                 ( +-   0.002% )
>
>   60.001403261  seconds time elapsed   ( +-   0.000% )
>
> 4.00 cache-misses per fault (183961537 cache-misses / 45976947 page-faults)
>
> So a small cost is added, but I think it is within a reasonable range.
>
> If you have a good idea for improving this number, it is welcome.
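>
> The multi-fault source is not included in this mail. Purely for
> illustration, a minimal parallel page-fault micro-benchmark in the same
> spirit might look like the sketch below (an assumption, not the actual
> ./multi-fault):
>
>   /* illustrative parallel page-fault micro-benchmark.
>    * build: gcc -O2 -pthread multi-fault-sketch.c
>    * usage: ./a.out <nr-threads>
>    */
>   #include <pthread.h>
>   #include <stdio.h>
>   #include <stdlib.h>
>   #include <sys/mman.h>
>   #include <unistd.h>
>
>   static volatile int stop;
>
>   static void *worker(void *arg)
>   {
>           long faults = 0;
>           long pagesize = sysconf(_SC_PAGESIZE);
>
>           while (!stop) {
>                   char *p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
>                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>                   if (p == MAP_FAILED)
>                           break;
>                   p[0] = 1;        /* first touch faults the page in */
>                   munmap(p, pagesize);
>                   faults++;
>           }
>           return (void *)faults;
>   }
>
>   int main(int argc, char **argv)
>   {
>           int i, nthreads = argc > 1 ? atoi(argv[1]) : 2;
>           pthread_t tids[nthreads];
>           long total = 0;
>           void *ret;
>
>           for (i = 0; i < nthreads; i++)
>                   pthread_create(&tids[i], NULL, worker, NULL);
>           sleep(60);               /* matches the 60s runs above */
>           stop = 1;
>           for (i = 0; i < nthreads; i++) {
>                   pthread_join(tids[i], &ret);
>                   total += (long)ret;
>           }
>           printf("page faults: %ld\n", total);
>           return 0;
>   }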
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@...il.com>

-- 
Kind regards,
Minchan Kim