Message-ID: <5200BB18.9010105@oracle.com>
Date: Tue, 06 Aug 2013 17:00:08 +0800
From: Bob Liu <bob.liu@...cle.com>
To: Krzysztof Kozlowski <k.kozlowski@...sung.com>
CC: Seth Jennings <sjenning@...ux.vnet.ibm.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Kyungmin Park <kyungmin.park@...sung.com>,
Tomasz Stanislawski <t.stanislaws@...sung.com>
Subject: Re: [RFC PATCH 1/4] zbud: use page ref counter for zbud pages
Hi Krzysztof,
On 08/06/2013 02:42 PM, Krzysztof Kozlowski wrote:
> Use page reference counter for zbud pages. The ref counter replaces
> zbud_header.under_reclaim flag and ensures that zbud page won't be freed
> when zbud_free() is called during reclaim. It allows implementation of
> additional reclaim paths.
>
> The page count is incremented when:
> - a handle is created and passed to zswap (in zbud_alloc()),
> - user-supplied eviction callback is called (in zbud_reclaim_page()).
>
> Signed-off-by: Krzysztof Kozlowski <k.kozlowski@...sung.com>
> Signed-off-by: Tomasz Stanislawski <t.stanislaws@...sung.com>
Looks good to me.
Reviewed-by: Bob Liu <bob.liu@...cle.com>
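
One note for other readers: the lifetime rules above boil down to a plain
get_page()/put_page() pairing on the zbud page. A rough sketch of how I read
it (my own illustration with made-up helper names, not the hunks below):

#include <linux/mm.h>

/* Take a reference for a live handle or for a reclaim cycle. */
static void get_zbud_page(struct zbud_header *zhdr)
{
	get_page(virt_to_page(zhdr));
}

/* Drop a reference; the page is freed when the last one goes away. */
static void put_zbud_page(struct zbud_header *zhdr)
{
	/*
	 * With the refcount covering both the handle owner (zswap) and
	 * the reclaim path, a racing zbud_free() during reclaim can no
	 * longer free the page under us, so under_reclaim and the
	 * explicit free_zbud_page() become unnecessary.
	 */
	put_page(virt_to_page(zhdr));
}

So zbud_alloc() takes a reference that zbud_free() eventually drops, and
zbud_reclaim_page() takes an extra one around the eviction callback.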
> ---
> mm/zbud.c | 150 +++++++++++++++++++++++++++++++++++--------------------------
> 1 file changed, 86 insertions(+), 64 deletions(-)
>
> diff --git a/mm/zbud.c b/mm/zbud.c
> index ad1e781..a8e986f 100644
> --- a/mm/zbud.c
> +++ b/mm/zbud.c
> @@ -109,7 +109,6 @@ struct zbud_header {
> struct list_head lru;
> unsigned int first_chunks;
> unsigned int last_chunks;
> - bool under_reclaim;
> };
>
> /*****************
> @@ -138,16 +137,9 @@ static struct zbud_header *init_zbud_page(struct page *page)
> zhdr->last_chunks = 0;
> INIT_LIST_HEAD(&zhdr->buddy);
> INIT_LIST_HEAD(&zhdr->lru);
> - zhdr->under_reclaim = 0;
> return zhdr;
> }
>
> -/* Resets the struct page fields and frees the page */
> -static void free_zbud_page(struct zbud_header *zhdr)
> -{
> - __free_page(virt_to_page(zhdr));
> -}
> -
> /*
> * Encodes the handle of a particular buddy within a zbud page
> * Pool lock should be held as this function accesses first|last_chunks
> @@ -188,6 +180,65 @@ static int num_free_chunks(struct zbud_header *zhdr)
> return NCHUNKS - zhdr->first_chunks - zhdr->last_chunks - 1;
> }
>
> +/*
> + * Called after zbud_free() or zbud_alloc().
> + * Checks whether given zbud page has to be:
> + * - removed from buddied/unbuddied/LRU lists completely (zbud_free),
> + * - moved from buddied to unbuddied list
> + * and to beginning of LRU (zbud_alloc, zbud_free),
> + * - added to buddied list and LRU (zbud_alloc).
> + *
> + * The page must be already removed from buddied/unbuddied lists.
> + * Must be called under pool->lock.
> + */
> +static void rebalance_lists(struct zbud_pool *pool, struct zbud_header *zhdr)
> +{
Nitpick: how about renaming it to adjust_lists() or something similar, since
it doesn't actually do any rebalancing?
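
Whatever it ends up being called, from the comment above I'd expect the
helper to look roughly like this (just my reading of the description, not
the actual hunk later in your patch):

static void adjust_lists(struct zbud_pool *pool, struct zbud_header *zhdr)
{
	/* Caller has already taken zhdr off the buddied/unbuddied lists. */
	if (zhdr->first_chunks == 0 && zhdr->last_chunks == 0) {
		/* Page is empty after zbud_free(): drop it from the LRU too. */
		if (!list_empty(&zhdr->lru))
			list_del(&zhdr->lru);
		return;
	}

	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0) {
		/* Exactly one buddy in use: file under the free-chunk size. */
		list_add(&zhdr->buddy, &pool->unbuddied[num_free_chunks(zhdr)]);
	} else {
		/* Both buddies in use. */
		list_add(&zhdr->buddy, &pool->buddied);
	}

	/* Most recently touched page moves to the head of the LRU. */
	if (!list_empty(&zhdr->lru))
		list_del(&zhdr->lru);
	list_add(&zhdr->lru, &pool->lru);
}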
--
Regards,
-Bob