Message-ID: <4E4C395E.20000@gmail.com>
Date: Wed, 17 Aug 2011 23:57:50 +0200
From: Nebojsa Trpkovic <trx.lists@...il.com>
To: linux-kernel@...r.kernel.org
Subject: cleancache can lead to serious performance degradation
Hello.
I've tried using cleancache on my file server and came to the conclusion
that my Core2 Duo CPU (2.33 GHz, 4 MB L2 cache) cannot cope with the
amount of data it needs to compress during heavy sequential IO when
cleancache/zcache are enabled.
For example, with cleancache enabled I get 60-70 MB/s from my RAID
arrays and both CPU cores are saturated with system (kernel) time.
Without cleancache, each RAID array gives me more than 300 MB/s of useful
read throughput.
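For reference, here is a minimal sketch (not my exact test, just the same
kind of load) of such a sequential read: plain buffered 1 MiB read() calls
over a big file or block device, with the resulting throughput printed at
the end. "/dev/md0" below is only a placeholder for one of the RAID arrays.

/*
 * seqread.c - minimal sequential read test (sketch only, not my exact
 * test setup): buffered 1 MiB reads, capped at 4 GiB, throughput printed
 * at the end.
 *
 * Build: gcc -O2 -o seqread seqread.c   (add -lrt on older glibc)
 * Run:   ./seqread /dev/md0             (or any big file on the array)
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define CHUNK   (1 << 20)		/* 1 MiB per read() */
#define MAXREAD (4LL << 30)		/* stop after 4 GiB */

int main(int argc, char **argv)
{
	static char buf[CHUNK];
	struct timespec t0, t1;
	long long total = 0;
	double secs;
	ssize_t n;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	while (total < MAXREAD && (n = read(fd, buf, sizeof(buf))) > 0)
		total += n;
	clock_gettime(CLOCK_MONOTONIC, &t1);
	close(fd);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%lld bytes in %.1f s => %.1f MB/s\n",
	       total, secs, total / secs / 1e6);
	return 0;
}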
In a sequential-read scenario, this drop in throughput seems completely
expected:
- a lot of data gets pulled in from the disks
- the data is processed in some non-CPU-intensive way
- the page cache fills up quickly and cleancache starts compressing pages
(a lot of "puts" in /sys/kernel/mm/cleancache/)
- these compressed cleancache pages never get read, because a whole lot of
new pages come in every second and replace the old ones (practically no
"succ_gets" in /sys/kernel/mm/cleancache/; see the small sketch after
this list)
- the CPU saturates doing useless compression, and even worse:
- new disk read operations end up waiting for the CPU to finish
compressing and make some room in memory
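Here is the small sketch mentioned above - a quick way to watch this
happening on a running system. It only samples the "puts" and "succ_gets"
counters twice, ten seconds apart, and prints how many of the puts ever
came back as successful gets; during the sequential read the ratio stays
practically at zero. The 10 second interval is arbitrary.

/*
 * ccstat.c - sample the cleancache counters twice and show how many puts
 * ever turn into successful gets.
 *
 * Build: gcc -O2 -o ccstat ccstat.c
 */
#include <stdio.h>
#include <unistd.h>

static long long read_counter(const char *name)
{
	char path[128];
	long long val;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/mm/cleancache/%s", name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%lld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	long long nputs0, ngets0, nputs, ngets;

	nputs0 = read_counter("puts");
	ngets0 = read_counter("succ_gets");
	if (nputs0 < 0 || ngets0 < 0) {
		fprintf(stderr, "cleancache/zcache counters not found\n");
		return 1;
	}

	sleep(10);

	nputs = read_counter("puts") - nputs0;
	ngets = read_counter("succ_gets") - ngets0;

	printf("last 10s: %lld puts, %lld successful gets", nputs, ngets);
	if (nputs > 0)
		printf(" (hit ratio %.2f%%)", 100.0 * ngets / nputs);
	printf("\n");
	return 0;
}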
So, using cleancache in scenarios with a lot of non-random data
throughput can lead to severe performance degradation.
I guess a possible workaround could be to implement some kind of
compression throttling valve for cleancache/zcache:
- if there's available CPU time (idle cycles or so), then compress
(maybe even with a low CPU scheduler priority);
- if there's no available CPU time, just store (or throw away) the page
to avoid IO waits (a rough sketch of this decision follows below).
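To make the idea a bit more concrete, here is a rough user-space sketch of
just the valve decision: estimate how busy the CPUs were over the last
second from /proc/stat and decide whether a page would be compressed or
simply dropped. In zcache itself this check would of course have to live
in the cleancache put path, and the 90% threshold plus the /proc/stat
metric are only illustrative assumptions, not a proposed implementation.

/*
 * ccvalve.c - user-space illustration of the "throttling valve" decision.
 *
 * Build: gcc -O2 -o ccvalve ccvalve.c
 */
#include <stdio.h>
#include <unistd.h>

/* Read total and idle jiffies from the aggregate "cpu" line of /proc/stat. */
static int read_cpu_times(unsigned long long *total, unsigned long long *idle)
{
	unsigned long long user, nice, system, idl, iowait, irq, softirq;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return -1;
	if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
		   &user, &nice, &system, &idl, &iowait, &irq, &softirq) != 7) {
		fclose(f);
		return -1;
	}
	fclose(f);
	*idle = idl;
	*total = user + nice + system + idl + iowait + irq + softirq;
	return 0;
}

int main(void)
{
	unsigned long long t0, i0, t1, i1;
	double busy;

	if (read_cpu_times(&t0, &i0))
		return 1;
	sleep(1);
	if (read_cpu_times(&t1, &i1))
		return 1;

	busy = 1.0 - (double)(i1 - i0) / (double)(t1 - t0);

	/* The valve: only spend cycles on compression when some are free. */
	if (busy < 0.90)
		printf("CPU %.0f%% busy -> compress the page\n", busy * 100);
	else
		printf("CPU %.0f%% busy -> skip compression, drop the page\n",
		       busy * 100);
	return 0;
}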
At the very least, there should be a warning in the kernel config help
text about this kind of situation.
Regards,
Nebojsa Trpkovic