Message-ID: <20161018122921.GF12092@dhcp22.suse.cz>
Date: Tue, 18 Oct 2016 14:29:22 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Dave Chinner <david@...morbit.com>
Cc: Mel Gorman <mgorman@...e.de>, Vlastimil Babka <vbabka@...e.cz>,
Joonsoo Kim <js1304@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] mm, compaction: allow compaction for GFP_NOFS requests

On Tue 18-10-16 17:24:46, Dave Chinner wrote:
> On Mon, Oct 17, 2016 at 10:22:56AM +0200, Michal Hocko wrote:
> > On Mon 17-10-16 07:49:59, Dave Chinner wrote:
> > > On Thu, Oct 13, 2016 at 01:04:56PM +0200, Michal Hocko wrote:
> > > > On Thu 13-10-16 09:39:47, Michal Hocko wrote:
> > > > > On Thu 13-10-16 11:29:24, Dave Chinner wrote:
> > > > > > On Fri, Oct 07, 2016 at 03:18:14PM +0200, Michal Hocko wrote:
> > > > > [...]
> > > > > > > Unpatched kernel:
> > > > > > > # Version 3.3, 16 thread(s) starting at Fri Oct 7 09:55:05 2016
> > > > > > > # Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
> > > > > > > # Directories: Time based hash between directories across 10000 subdirectories with 180 seconds per subdirectory.
> > > > > > > # File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
> > > > > > > # Files info: size 0 bytes, written with an IO size of 16384 bytes per write
> > > > > > > # App overhead is time in microseconds spent in the test not doing file writing related system calls.
> > > > > > > #
> > > > > > > FSUse%        Count         Size    Files/sec     App Overhead
> > > > > > >      1      1600000            0       4300.1         20745838
> > > > > > >      3      3200000            0       4239.9         23849857
> > > > > > >      5      4800000            0       4243.4         25939543
> > > > > > >      6      6400000            0       4248.4         19514050
> > > > > > >      8      8000000            0       4262.1         20796169
> > > > > > >      9      9600000            0       4257.6         21288675
> > > > > > >     11     11200000            0       4259.7         19375120
> > > > > > >     13     12800000            0       4220.7         22734141
> > > > > > >     14     14400000            0       4238.5         31936458
> > > > > > >     16     16000000            0       4231.5         23409901
> > > > > > >     18     17600000            0       4045.3         23577700
> > > > > > >     19     19200000            0       2783.4         58299526
> > > > > > >     21     20800000            0       2678.2         40616302
> > > > > > >     23     22400000            0       2693.5         83973996
> > > > > > > Ctrl+C because it just took too long.
> > > > > >
> > > > > > Try running it on a larger filesystem, or configure the fs with more
> > > > > > AGs and a larger log (i.e. mkfs.xfs -f -dagcount=24 -l size=512m
> > > > > > <dev>). That will speed up modifications and increase concurrency.
> > > > > > This test should be able to run 5-10x faster than this (it
> > > > > > sustains 55,000 files/s @ 300MB/s write on my test fs on a cheap
> > > > > > SSD).
> > > > >
> > > > > Will add more memory to the machine. Will report back on that.
> > > >
> > > > Increasing the memory to 1G didn't help. So I've tried adding
> > > > -dagcount=24 -l size=512m, and that didn't help much either. I am at 5k
> > > > files/s, so nowhere close to your 55k. I thought this was more about CPU
> > > > count than about the amount of memory, so I've tried a larger machine
> > > > with 24 CPUs (no dagcount etc...); this one doesn't have fast storage,
> > > > so I've backed the fs image with a ramdisk, but even then I am getting
> > > > very similar results. No idea what is wrong with my kvm setup.
> > >
> > > What's the backing storage? I use an image file in an XFS
> > > filesystem, configured with virtio,cache=none so its concurrency
> > > model matches that of a real disk...
> >
> > I am using a qcow qemu image, exported to qemu with the
> > -drive file=storage.img,if=ide,index=1,cache=none
> > parameter.
>
> storage.img is on what type of filesystem?
ext3 on the host system.
> Only XFS will give you
> proper IO concurrency with direct IO, and you really need to use a
> raw image file rather than qcow2. If you're not using the special
> capabilities of qcow2 (e.g. snapshots), there's no reason to use
> it...
OK, I will try with the raw image as soon as I have some more time
(hopefully this week).
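If I follow you correctly, that would be something like the following
(untested; the raw image name is just an example):

  # convert the existing qcow2 image to a raw file
  qemu-img convert -O raw storage.img storage-raw.img

  # then attach it via virtio with the host page cache bypassed
  -drive file=storage-raw.img,if=virtio,cache=none,format=raw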
Thanks
--
Michal Hocko
SUSE Labs