Message-id: <1378889944-23192-1-git-send-email-k.kozlowski@samsung.com>
Date: Wed, 11 Sep 2013 10:58:59 +0200
From: Krzysztof Kozlowski <k.kozlowski@...sung.com>
To: Seth Jennings <sjenning@...ux.vnet.ibm.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Bob Liu <bob.liu@...cle.com>, Mel Gorman <mgorman@...e.de>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Kyungmin Park <kyungmin.park@...sung.com>,
Dave Hansen <dave.hansen@...el.com>,
Minchan Kim <minchan@...nel.org>,
Krzysztof Kozlowski <k.kozlowski@...sung.com>
Subject: [PATCH v2 0/5] mm: migrate zbud pages
Hi,
Currently zbud pages are not movable and cannot be allocated from a CMA
(Contiguous Memory Allocator) region. These patches add migration of zbud pages.

The zbud migration code relies on the page's mapping, so many special cases had
to be added to the migration code. This could be replaced, for example, with a
pin page control subsystem:
http://article.gmane.org/gmane.linux.kernel.mm/105308
In that case the zbud migration code (zbud_migrate_page()) could be safely
re-used.
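
For illustration only, here is a hedged sketch of how such a migration
callback could be wired up through address_space_operations; the name
zbud_aops and the body below are assumptions, not the actual patch:

#include <linux/fs.h>
#include <linux/migrate.h>

static int zbud_migrate_page(struct address_space *mapping,
			     struct page *newpage, struct page *page,
			     enum migrate_mode mode)
{
	/* Copy page contents and state over to the new page. */
	migrate_page_copy(newpage, page);

	/*
	 * The real code would now re-point the zbud handles at the
	 * new page and release the old one (details omitted here).
	 */
	return MIGRATEPAGE_SUCCESS;
}

static const struct address_space_operations zbud_aops = {
	.migratepage	= zbud_migrate_page,
};
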
Patch "[PATCH 3/5] mm: use mapcount for identifying zbud pages" introduces
PageZbud() function which identifies zbud pages by page->_mapcount.
Dave Hansen proposed aliasing PG_zbud=PG_slab but in such case patch
would be more intrusive.
Any ideas for a better solution are welcome.
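
As a rough illustration, a minimal sketch of the _mapcount trick, modeled
on the existing PageBuddy() idiom; the magic value below is an assumption,
not necessarily the one used in the patch:

#include <linux/mm.h>

#define PAGE_ZBUD_MAPCOUNT_VALUE (-256)	/* assumed magic marker */

static inline int PageZbud(struct page *page)
{
	return atomic_read(&page->_mapcount) == PAGE_ZBUD_MAPCOUNT_VALUE;
}

static inline void SetPageZbud(struct page *page)
{
	VM_BUG_ON(atomic_read(&page->_mapcount) != -1);
	atomic_set(&page->_mapcount, PAGE_ZBUD_MAPCOUNT_VALUE);
}

static inline void ClearPageZbud(struct page *page)
{
	VM_BUG_ON(!PageZbud(page));
	atomic_set(&page->_mapcount, -1);
}

This works because _mapcount stays at -1 for pages that are never mapped
into user space, so a reserved negative value can serve as a type marker.
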
Patch "[PATCH 4/5] mm: use indirect zbud handle and radix tree" changes zbud
handle to support migration. Now the handle is an index in radix tree and
zbud_map() maps it to a proper virtual address. This exposes race conditions,
some of them are discussed already here:
http://article.gmane.org/gmane.linux.kernel.mm/105988
Races are fixed by adding internal map count for each zbud handle.
The map count is increased on each zbud_map() call.
Some races between writeback and invalidate still exist. In such case a message
can be seen in logs:
zbud: error: could not lookup handle 13810 in tree
Patches from discussion above may resolve this.
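
To make the indirection concrete, here is a hedged sketch (the struct and
function names are assumptions, and locking of the tree, e.g. via the pool
spinlock, is omitted):

#include <linux/radix-tree.h>
#include <linux/atomic.h>

struct zbud_handle {
	void *addr;		/* current virtual address of the buddy */
	atomic_t map_count;	/* > 0 while mapped via zbud_map() */
};

static void *zbud_map_sketch(struct radix_tree_root *tree,
			     unsigned long handle)
{
	struct zbud_handle *h = radix_tree_lookup(tree, handle);

	if (!h)
		return NULL;	/* invalidated; matches the log above */
	atomic_inc(&h->map_count);	/* pin against free/migration */
	return h->addr;
}

static void zbud_unmap_sketch(struct radix_tree_root *tree,
			      unsigned long handle)
{
	struct zbud_handle *h = radix_tree_lookup(tree, handle);

	if (h)
		atomic_dec(&h->map_count);
}
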
I considered using "pgoff_t offset" as the handle, but it would prevent storing
duplicate pages in zswap.
This patch set is based on v3.11.
Changes since v1:
-----------------
1. Rebased against v3.11.
2. Updated the documentation of zbud_reclaim_page() to match the usage of zbud
   page reference counters.
3. Split the trivial change of scope of the freechunks variable out of patch
   2/4 into a separate patch (3/5) (suggested by Seth Jennings).
Changes and relation to the patches "reclaiming zbud pages on migration and
compaction":
-----------------
This is a continuation of my previous work: reclaiming zbud pages on migration
and compaction. However, the current solution is completely different, so I am
not attaching the previous changelog.
Previous patches can be found here:
* [RFC PATCH v2 0/4] mm: reclaim zbud pages on migration and compaction
http://article.gmane.org/gmane.linux.kernel.mm/105153
* [RFC PATCH 0/4] mm: reclaim zbud pages on migration and compaction
http://article.gmane.org/gmane.linux.kernel.mm/104801
One patch from the previous work is re-used with minor changes:
"[PATCH 1/4] zbud: use page ref counter for zbud pages"
* Add a missing spin_unlock in zbud_reclaim_page().
* Decrease pool->pages_nr in zbud_free(), not when putting the page. This also
  removes the need to hold the lock while calling put_zbud_page() (a sketch of
  these helpers follows below).
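
For reference, a minimal sketch of these ref-count helpers (assumed shape;
the real helpers may operate on zbud's internal header rather than on
struct page directly):

#include <linux/mm.h>

static inline void get_zbud_page(struct page *page)
{
	get_page(page);
}

static inline void put_zbud_page(struct page *page)
{
	/*
	 * The last reference returns the page to the page allocator,
	 * so no pool lock needs to be held around this call.
	 */
	put_page(page);
}
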
Best regards,
Krzysztof Kozlowski
Krzysztof Kozlowski (5):
zbud: use page ref counter for zbud pages
zbud: make freechunks a block local variable
mm: use mapcount for identifying zbud pages
mm: use indirect zbud handle and radix tree
mm: migrate zbud pages
include/linux/mm.h | 23 ++
include/linux/zbud.h | 3 +-
mm/compaction.c | 7 +
mm/migrate.c | 17 +-
mm/zbud.c | 573 ++++++++++++++++++++++++++++++++++++++++----------
mm/zswap.c | 28 ++-
6 files changed, 537 insertions(+), 114 deletions(-)
--
1.7.9.5