Message-ID: <20160817105711.GA6656@quack2.suse.cz>
Date: Wed, 17 Aug 2016 12:57:11 +0200
From: Jan Kara <jack@...e.cz>
To: arekm@...en.pl
Cc: Michal Hocko <mhocko@...nel.org>, linux-ext4@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH] mm, oom: report compaction/migration stats for higher
 order requests

On Tue 16-08-16 13:18:25, Arkadiusz Miskiewicz wrote:
> On Monday 15 of August 2016, Michal Hocko wrote:
> > [Fixing up linux-mm]
> >
> > Oops, I had a copy&paste error in the previous patch. Here is an updated
> > patch.
>
> Going to apply this patch now and report again. In the meantime, what I
> have is the following loop gathering some data while a few OOM conditions
> happened:
>
> while true; do
>     echo "XX date"; date; echo "XX SLAB"; cat /proc/slabinfo
>     echo "XX VMSTAT"; cat /proc/vmstat; echo "XX free"; free
>     echo "XX DMESG"; dmesg -T | tail -n 50
>     /bin/sleep 60
> done 2>&1 | tee log
>
> I was doing "rm -rf copyX; cp -al original copyX" 10x in parallel.
>
> https://ixion.pld-linux.org/~arekm/p2/ext4/log-20160816.txt
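
IIUC that means something like the following, running ten instances in
parallel (just my reconstruction; the exact invocation isn't shown above):

  for i in $(seq 1 10); do
      (rm -rf "copy$i"; cp -al original "copy$i") &
  done
  wait
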
Just one more debugging idea on top of what Michal said: can you enable the
mm_shrink_slab_start and mm_shrink_slab_end tracepoints (via
/sys/kernel/debug/tracing/events/vmscan/mm_shrink_slab_{start,end}/enable)
and gather the output from /sys/kernel/debug/tracing/trace_pipe while the
copy workload is running?
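
Something like this should do it (just a sketch; it assumes debugfs is
mounted at /sys/kernel/debug, and the output file name is arbitrary):

  # Enable the two shrinker tracepoints
  echo 1 >/sys/kernel/debug/tracing/events/vmscan/mm_shrink_slab_start/enable
  echo 1 >/sys/kernel/debug/tracing/events/vmscan/mm_shrink_slab_end/enable
  # Stream the trace to a file while the workload runs
  cat /sys/kernel/debug/tracing/trace_pipe > shrink_slab.trace &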

I ask because your slab caches seem to contain a lot of dentries as well
(even more than inodes in terms of object counts), so it may be that OOM is
declared too early, before the slab shrinkers can actually catch up...
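
To eyeball that ratio, something like the following should do (this assumes
the inode slab in question is named ext4_inode_cache, as it usually is):

  grep -E '^(dentry|ext4_inode_cache) ' /proc/slabinfo
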
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR