Message-Id: <20091028091321.b136d9d9.kamezawa.hiroyu@jp.fujitsu.com>
Date: Wed, 28 Oct 2009 09:13:21 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: vedran.furac@...il.com
Cc: Minchan Kim <minchan.kim@...il.com>, KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, "hugh.dickins@...cali.co.uk" <hugh.dickins@...cali.co.uk>, "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>, rientjes@...gle.com
Subject: Re: [RFC][PATCH] oom_kill: avoid depends on total_vm and use real RSS/swap value for oom_score (Re: Memory overcommit)

On Tue, 27 Oct 2009 18:41:22 +0100
Vedran Furač <vedran.furac@...il.com> wrote:

> KAMEZAWA Hiroyuki wrote:
>
> > On Tue, 27 Oct 2009 15:55:26 +0900
> > Minchan Kim <minchan.kim@...il.com> wrote:
> >
> >>>> Hmm.
> >>>> I wonder why we consider VM size for OOM killing.
> >>>> How about RSS size?
> >>>>
> >>> Maybe the current code assumes "tons of swap have been generated already" if
> >>> oom-kill is invoked. Then, just using mm->anon_rss will not be correct.
> >>>
> >>> Hm, should we count # of swap entries referenced from the mm?...
> >> In Vedran's case, he didn't use swap. So considering only VM size is the problem.
> >> I think it would be better to consider RSS + # of swap entries, as
> >> Kosaki mentioned.
> >>
> > Then, maybe this kind of patch is necessary.
> > This is on 2.6.31... then I may have to rebase this to mmotm.
> > Added more CCs.
> >
> > Vedran, I'm glad if you can test this patch.
>
> Thanks for the patch! I'll test it during this week and report after that.
>
> > Instead of total_vm, we should use the anon/file/swap usage of a process, I think.
> > This patch adds mm->swap_usage and calculates oom_score based on
> > anon_rss + file_rss + swap_usage.
>
> Isn't file_rss shared between processes? Sorry, I'm a newbie. :)

It's shared. But in the typical case, file_rss will be very small at OOM.
> % pmap $(pidof test)
> 29049:   ./test
> 0000000000400000       4K r-x--  /home/vedranf/dev/tmp/test
> 0000000000600000       4K rw---  /home/vedranf/dev/tmp/test
> 00002ba362a80000     116K r-x--  /lib/ld-2.10.1.so
> 00002ba362a9d000      12K rw---    [ anon ]
> 00002ba362c9c000       4K r----  /lib/ld-2.10.1.so
> 00002ba362c9d000       4K rw---  /lib/ld-2.10.1.so
> 00002ba362c9e000    1320K r-x--  /lib/libc-2.10.1.so
> 00002ba362de8000    2044K -----  /lib/libc-2.10.1.so
> 00002ba362fe7000      16K r----  /lib/libc-2.10.1.so
> 00002ba362feb000       4K rw---  /lib/libc-2.10.1.so
> 00002ba362fec000 1024028K rw---    [ anon ]   // <-- This
> 00007ffff4618000      84K rw---    [ stack ]
> 00007ffff47b7000       4K r-x--    [ anon ]
> ffffffffff600000       4K r-x--    [ anon ]
> total            1027648K
>
> I would just look at anon if that's OK (or possible).
>
> > Considering usual applications, this will be much better information than
> > total_vm.
>
> Agreed.
>
> >  score     PID  name
> >   4033    3176  gnome-panel
> >   4077    3113  xinit
> >   4526    3190  python
> >   4820    3161  gnome-settings-
> >   4989    3289  gnome-terminal
> >   7105    3271  tomboy
> >   8427    3177  nautilus
> >  17549    3140  gnome-session
> > 128501    3299  bash
> > 256106    3383  mmap
> >
> > This order is not bad, I think.
>
> Yes, this looks much better now. Only bash has a somewhat strangely
> high score.

It gets half the score of mmap... If mmap goes, bash's score will go down
dramatically. I'll read the others' comments and tweak this patch more.

Thanks,
-Kame

> Regards,
>
> Vedran
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@...ck.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@...ck.org

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/