Message-Id: <1411344191-2842-1-git-send-email-minchan@kernel.org>
Date: Mon, 22 Sep 2014 09:03:06 +0900
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Hugh Dickins <hughd@...gle.com>, Shaohua Li <shli@...nel.org>,
Jerome Marchand <jmarchan@...hat.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Dan Streetman <ddstreet@...e.org>,
Nitin Gupta <ngupta@...are.org>,
Luigi Semenzato <semenzato@...gle.com>, juno.choi@....com,
Minchan Kim <minchan@...nel.org>
Subject: [PATCH v1 0/5] stop anon reclaim when zram is full
For zram-swap, there is a size gap between the virtual disksize
and the physical memory actually available to zram, so the VM
can keep trying to reclaim anonymous pages even though zram is full.
This easily makes the system almost hang (i.e., become unresponsive)
in my kernel build test (1G DRAM, 12 CPUs, 4G zram swap,
50M zram limit). The VM should have killed someone instead.
This patchset adds a new hint, SWAP_FULL, so the VM can ask zram
about its fullness; if it finds zram is full, the VM stops reclaiming
anonymous pages until zram-swap gains free space again.
With this patchset, I see an OOM kill when zram-swap is full instead
of a hang with no response.
Minchan Kim (5):
zram: generalize swap_slot_free_notify
mm: add full variable in swap_info_struct
mm: VM can be aware of zram fullness
zram: add swap full hint
zram: add fullness knob to control swap full
Documentation/ABI/testing/sysfs-block-zram | 10 +++
Documentation/filesystems/Locking | 4 +-
drivers/block/zram/zram_drv.c | 114 +++++++++++++++++++++++++++--
drivers/block/zram/zram_drv.h | 2 +
include/linux/blkdev.h | 8 +-
include/linux/swap.h | 1 +
mm/page_io.c | 6 +-
mm/swapfile.c | 77 ++++++++++++++-----
8 files changed, 189 insertions(+), 33 deletions(-)
--
2.0.0
--