Message-ID: <5472BF95.2000904@redhat.com>
Date: Mon, 24 Nov 2014 13:18:13 +0800
From: Jason Wang <jasowang@...hat.com>
To: Dexuan Cui <decui@...rosoft.com>, gregkh@...uxfoundation.org,
linux-kernel@...r.kernel.org,
driverdev-devel@...uxdriverproject.org, olaf@...fle.de,
apw@...onical.com, kys@...rosoft.com
CC: haiyangz@...rosoft.com
Subject: Re: [PATCH] hv: hv_balloon: avoid memory leak on alloc_error of 2MB memory block
On 11/24/2014 01:56 PM, Dexuan Cui wrote:
> If num_ballooned is not 0, we shouldn't neglect the already-allocated 2MB
> memory block(s).
>
> Cc: K. Y. Srinivasan <kys@...rosoft.com>
> Cc: <stable@...r.kernel.org>
> Signed-off-by: Dexuan Cui <decui@...rosoft.com>
> ---
> drivers/hv/hv_balloon.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index 5e90c5d..cba2d3b 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -1091,6 +1091,8 @@ static void balloon_up(struct work_struct *dummy)
> bool done = false;
> int i;
>
> + /* The host does balloon_up in 2MB. */
> + WARN_ON(num_pages % PAGES_IN_2M != 0);
>
> /*
> * We will attempt 2M allocations. However, if we fail to
> @@ -1111,7 +1113,7 @@ static void balloon_up(struct work_struct *dummy)
> bl_resp, alloc_unit,
> &alloc_error);
>
> - if ((alloc_error) && (alloc_unit != 1)) {
> + if (alloc_error && (alloc_unit != 1) && num_ballooned == 0) {
> alloc_unit = 1;
> continue;
> }
Before the change, we may retry the 4K allocation when part or all of
the 2M allocations fail. This makes sense when memory is fragmented.
But after the change, if only part of the 2M allocations fail, we won't
retry the 4K allocation. Is this expected?
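
To make the behavior change concrete, here is a small userspace model
of the balloon_up() loop (fake_alloc() is a hypothetical stand-in for
alloc_balloon_pages(), and the loop is simplified; this is a sketch,
not the actual driver code):

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Hypothetical stand-in for alloc_balloon_pages(): one 2MB
	 * chunk succeeds, then fragmentation makes further 2MB
	 * allocations fail; 4K allocations always succeed.
	 */
	static unsigned int fake_alloc(unsigned int want,
				       unsigned int alloc_unit,
				       bool *alloc_error)
	{
		if (alloc_unit > 1) {
			*alloc_error = true;
			return alloc_unit <= want ? alloc_unit : 0;
		}
		return want;
	}

	int main(void)
	{
		unsigned int num_pages = 1024;	/* 4MB requested */
		unsigned int num_ballooned = 0;
		unsigned int alloc_unit = 512;	/* 2MB in 4K pages */
		bool alloc_error;

		for (;;) {
			alloc_error = false;
			num_ballooned += fake_alloc(num_pages - num_ballooned,
						    alloc_unit, &alloc_error);

			/*
			 * Post-patch condition: fall back to 4K
			 * allocations only when *nothing* was
			 * ballooned in 2MB chunks.  Drop the
			 * num_ballooned check for the old behavior.
			 */
			if (alloc_error && alloc_unit != 1 &&
			    num_ballooned == 0) {
				alloc_unit = 1;
				continue;
			}
			break;
		}
		/* prints 512 of 1024 post-patch, 1024 of 1024 pre-patch */
		printf("ballooned %u of %u pages\n", num_ballooned, num_pages);
		return 0;
	}

With the partial 2M success, the post-patch condition never falls back
to 4K, so half the host's request is left unsatisfied.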
Btw, can the host request just 1M? If so, should alloc_balloon_pages()
set alloc_error when num_pages < alloc_unit, so that the caller can
catch this and retry the 4K allocation?
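
I.e., something along these lines at the top of alloc_balloon_pages()
(a sketch of the suggestion only, not the actual function body):

	/*
	 * Sketch: when the host asks for less than one allocation
	 * unit (e.g. 1MB with a 2MB alloc_unit), report an error so
	 * the caller can fall back to 4K allocations instead of
	 * silently ballooning nothing.
	 */
	if (num_pages < alloc_unit) {
		*alloc_error = true;
		return 0;
	}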
Thanks