Message-ID: <CALvZod414sVwwKg0KAsHC2vhqdkrzLeQ+nV3wiAKvOoFyu8NAQ@mail.gmail.com>
Date: Fri, 2 Aug 2019 07:58:00 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Henry Burns <henryburns@...gle.com>
Cc: Minchan Kim <minchan@...nel.org>, Nitin Gupta <ngupta@...are.org>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Jonathan Adams <jwadams@...gle.com>,
Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mm/zsmalloc.c: Migration can leave pages in ZS_EMPTY indefinitely
On Thu, Aug 1, 2019 at 6:53 PM Henry Burns <henryburns@...gle.com> wrote:
>
> In zs_page_migrate() we call putback_zspage() after we have finished
> migrating all pages in this zspage. However, the return value is ignored.
> If a zs_free() races in between zs_page_isolate() and zs_page_migrate(),
> freeing the last object in the zspage, putback_zspage() will leave the zspage
> in ZS_EMPTY for a potentially unbounded amount of time.
>
> To fix this, we need to do the same thing as zs_page_putback() does:
> schedule free_work to occur. To avoid duplicated code, move the
> sequence to a new putback_zspage_deferred() function which both
> zs_page_migrate() and zs_page_putback() call.
>
> Signed-off-by: Henry Burns <henryburns@...gle.com>
Reviewed-by: Shakeel Butt <shakeelb@...gle.com>
> ---
> mm/zsmalloc.c | 30 ++++++++++++++++++++----------
> 1 file changed, 20 insertions(+), 10 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 1cda3fe0c2d9..efa660a87787 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1901,6 +1901,22 @@ static void dec_zspage_isolation(struct zspage *zspage)
> zspage->isolated--;
> }
>
> +static void putback_zspage_deferred(struct zs_pool *pool,
> + struct size_class *class,
> + struct zspage *zspage)
> +{
> + enum fullness_group fg;
> +
> + fg = putback_zspage(class, zspage);
> + /*
> + * Due to page_lock, we cannot free zspage immediately
> + * so let's defer.
> + */
> + if (fg == ZS_EMPTY)
> + schedule_work(&pool->free_work);
> +
> +}
> +
> static void replace_sub_page(struct size_class *class, struct zspage *zspage,
> struct page *newpage, struct page *oldpage)
> {
> @@ -2070,7 +2086,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
> * the list if @page is final isolated subpage in the zspage.
> */
> if (!is_zspage_isolated(zspage))
> - putback_zspage(class, zspage);
> + putback_zspage_deferred(pool, class, zspage);
>
> reset_page(page);
> put_page(page);
> @@ -2115,15 +2131,9 @@ static void zs_page_putback(struct page *page)
>
> spin_lock(&class->lock);
> dec_zspage_isolation(zspage);
> - if (!is_zspage_isolated(zspage)) {
> - fg = putback_zspage(class, zspage);
> - /*
> - * Due to page_lock, we cannot free zspage immediately
> - * so let's defer.
> - */
> - if (fg == ZS_EMPTY)
> - schedule_work(&pool->free_work);
> - }
> + if (!is_zspage_isolated(zspage))
> + putback_zspage_deferred(pool, class, zspage);
> +
> spin_unlock(&class->lock);
> }
>
> --
> 2.22.0.770.g0f2c4a37fd-goog
>