Message-ID: <20170125101517.GG32377@dhcp22.suse.cz>
Date: Wed, 25 Jan 2017 11:15:17 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Christoph Hellwig <hch@....de>
Cc: mgorman@...e.de, viro@...IV.linux.org.uk, linux-mm@...ck.org,
hannes@...xchg.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/2] mm, vmscan: account the number of isolated pages
per zone
[Let's add Christoph]
The below insane^Wstress test should exercise the OOM killer behavior.
On Sat 21-01-17 16:42:42, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > And I think that there is a different problem if I tune the reproducer
> > like below (i.e. increase the buffer size passed to write()/fsync() from 4096).
> >
> > ----------
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <string.h>
> > #include <unistd.h>
> > #include <sys/types.h>
> > #include <sys/stat.h>
> > #include <fcntl.h>
> >
> > int main(int argc, char *argv[])
> > {
> > 	static char buffer[10485760] = { }; /* or 1048576 */
> > 	char *buf = NULL;
> > 	unsigned long size;
> > 	unsigned long i;
> > 	/* Fork OOM-killable children which keep appending to files on /tmp. */
> > 	for (i = 0; i < 1024; i++) {
> > 		if (fork() == 0) {
> > 			int fd = open("/proc/self/oom_score_adj", O_WRONLY);
> > 			write(fd, "1000", 4);
> > 			close(fd);
> > 			sleep(1);
> > 			snprintf(buffer, sizeof(buffer), "/tmp/file.%u", getpid());
> > 			fd = open(buffer, O_WRONLY | O_CREAT | O_APPEND, 0600);
> > 			while (write(fd, buffer, sizeof(buffer)) == sizeof(buffer))
> > 				fsync(fd);
> > 			_exit(0);
> > 		}
> > 	}
> > 	/* Grab as much anonymous memory as overcommit will hand out. */
> > 	for (size = 1048576; size < 512UL * (1 << 30); size <<= 1) {
> > 		char *cp = realloc(buf, size);
> > 		if (!cp)
> > 			break;
> > 		buf = cp;
> > 	}
> > 	size >>= 1; /* size of the last successful realloc() */
> > 	sleep(2);
> > 	/* Will cause OOM due to overcommit */
> > 	for (i = 0; i < size; i += 4096)
> > 		buf[i] = 0;
> > 	pause();
> > 	return 0;
> > }
> > ----------
> >
> > The above reproducer sometimes kills all OOM-killable processes, and the
> > system finally panics. I guess that somebody is abusing TIF_MEMDIE for
> > needless allocations, to the level where GFP_ATOMIC allocations start
> > failing.
[...]
> And I got a flood of traces like the one shown below. Even after being
> OOM-killed, the task seems to keep consuming memory reserves until the full
> size passed to the write() request has been stored in the page cache.
>
> Complete log is at http://I-love.SAKURA.ne.jp/tmp/serial-20170121.txt.xz .
> ----------------------------------------
> [ 202.306077] a.out(9789): TIF_MEMDIE allocation: order=0 mode=0x1c2004a(GFP_NOFS|__GFP_HIGHMEM|__GFP_HARDWALL|__GFP_MOVABLE|__GFP_WRITE)
> [ 202.309832] CPU: 0 PID: 9789 Comm: a.out Not tainted 4.10.0-rc4-next-20170120+ #492
> [ 202.312323] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/02/2015
> [ 202.315429] Call Trace:
> [ 202.316902] dump_stack+0x85/0xc9
> [ 202.318810] __alloc_pages_slowpath+0xa99/0xd7c
> [ 202.320697] ? node_dirty_ok+0xef/0x130
> [ 202.322454] __alloc_pages_nodemask+0x436/0x4d0
> [ 202.324506] alloc_pages_current+0x97/0x1b0
> [ 202.326397] __page_cache_alloc+0x15d/0x1a0 mm/filemap.c:728
> [ 202.328209] pagecache_get_page+0x5a/0x2b0 mm/filemap.c:1331
> [ 202.329989] grab_cache_page_write_begin+0x23/0x40 mm/filemap.c:2773
> [ 202.331905] iomap_write_begin+0x50/0xd0 fs/iomap.c:118
> [ 202.333641] iomap_write_actor+0xb5/0x1a0 fs/iomap.c:190
> [ 202.335377] ? iomap_write_end+0x80/0x80 fs/iomap.c:150
> [ 202.337090] iomap_apply+0xb3/0x130 fs/iomap.c:79
> [ 202.338721] iomap_file_buffered_write+0x68/0xa0 fs/iomap.c:243
> [ 202.340613] ? iomap_write_end+0x80/0x80
> [ 202.342471] xfs_file_buffered_aio_write+0x132/0x390 [xfs]
> [ 202.344501] ? remove_wait_queue+0x59/0x60
> [ 202.346261] xfs_file_write_iter+0x90/0x130 [xfs]
> [ 202.348082] __vfs_write+0xe5/0x140
> [ 202.349743] vfs_write+0xc7/0x1f0
> [ 202.351214] ? syscall_trace_enter+0x1d0/0x380
> [ 202.353155] SyS_write+0x58/0xc0
> [ 202.354628] do_syscall_64+0x6c/0x200
> [ 202.356100] entry_SYSCALL64_slow_path+0x25/0x25
> ----------------------------------------
>
> Do we need to allow access to memory reserves for this allocation?
> Or should the caller check for SIGKILL rather than keep iterating the loop?
I think we are missing a check for fatal_signal_pending in
iomap_file_buffered_write. This means that an OOM victim can consume the
memory reserves entirely. What do you think about the following? I haven't
tested it, but it mimics generic_perform_write, so I guess it should
work.
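
For reference, generic_perform_write bails out of its copy loop with exactly
this pattern; the fragment below is a lightly paraphrased excerpt from
mm/filemap.c (quoted from memory to illustrate what the patch mimics, so
double check the exact wording there):

	/* at the top of generic_perform_write()'s copy loop */
	if (fatal_signal_pending(current)) {
		status = -EINTR;
		break;
	}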
---
From d56b54b708d403d1bf39fccb89750bab31c19032 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@...e.com>
Date: Wed, 25 Jan 2017 11:06:37 +0100
Subject: [PATCH] fs: break out of iomap_file_buffered_write on fatal signals
Tetsuo has noticed that an OOM stress test which performs large write
requests can cause complete depletion of the memory reserves. He has
tracked this down to the following path:
__alloc_pages_nodemask+0x436/0x4d0
alloc_pages_current+0x97/0x1b0
__page_cache_alloc+0x15d/0x1a0 mm/filemap.c:728
pagecache_get_page+0x5a/0x2b0 mm/filemap.c:1331
grab_cache_page_write_begin+0x23/0x40 mm/filemap.c:2773
iomap_write_begin+0x50/0xd0 fs/iomap.c:118
iomap_write_actor+0xb5/0x1a0 fs/iomap.c:190
? iomap_write_end+0x80/0x80 fs/iomap.c:150
iomap_apply+0xb3/0x130 fs/iomap.c:79
iomap_file_buffered_write+0x68/0xa0 fs/iomap.c:243
? iomap_write_end+0x80/0x80
xfs_file_buffered_aio_write+0x132/0x390 [xfs]
? remove_wait_queue+0x59/0x60
xfs_file_write_iter+0x90/0x130 [xfs]
__vfs_write+0xe5/0x140
vfs_write+0xc7/0x1f0
? syscall_trace_enter+0x1d0/0x380
SyS_write+0x58/0xc0
do_syscall_64+0x6c/0x200
entry_SYSCALL64_slow_path+0x25/0x25
An OOM victim has access to the full memory reserves in order to make
forward progress towards exiting more easily, but iomap_file_buffered_write
loops until the whole request is completed. We need to check for fatal
signals and back off with a short write instead.
Fixes: 68a9f5e7007c ("xfs: implement iomap based buffered write path")
Cc: stable # 4.8+
Reported-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Signed-off-by: Michal Hocko <mhocko@...e.com>
---
fs/iomap.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/fs/iomap.c b/fs/iomap.c
index e57b90b5ff37..a22672387549 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -238,6 +238,10 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *iter,
 	loff_t pos = iocb->ki_pos, ret = 0, written = 0;
 
 	while (iov_iter_count(iter)) {
+		if (fatal_signal_pending(current)) {
+			ret = -EINTR;
+			break;
+		}
 		ret = iomap_apply(inode, pos, iov_iter_count(iter),
 				IOMAP_WRITE, ops, iter, iomap_write_actor);
 		if (ret <= 0)
--
2.11.0
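
As a side note, fatal_signal_pending reduces to a check for a pending
SIGKILL, which is also what the kernel queues internally when an unhandled
fatal signal kills the whole thread group. The helpers look like this in
include/linux/sched.h (quoted from memory, so double check me):

	/* only a pending SIGKILL counts as fatal here */
	static inline int __fatal_signal_pending(struct task_struct *p)
	{
		return unlikely(sigismember(&p->pending.signal, SIGKILL));
	}

	static inline int fatal_signal_pending(struct task_struct *p)
	{
		return signal_pending(p) && __fatal_signal_pending(p);
	}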
--
Michal Hocko
SUSE Labs