Date:	Tue, 27 Apr 2010 14:18:30 +0300
From:	Tomas Winkler <tomasw@...il.com>
To:	Kay Sievers <kay.sievers@...y.org>
Cc:	Greg KH <greg@...ah.com>,
	Johannes Berg <johannes@...solutions.net>,
	David Woodhouse <dwmw2@...radead.org>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Emmanuel Grumbach <egrumbach@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: request_firmware API exhaust memory

On Mon, Apr 26, 2010 at 6:19 PM, Kay Sievers <kay.sievers@...y.org> wrote:
> On Mon, 2010-04-26 at 12:38 +0200, Kay Sievers wrote:
>> On Sun, Apr 25, 2010 at 22:09, Tomas Winkler <tomasw@...il.com> wrote:
>> > The sad thing is that I don't see where the memory goes. Anyhow, I will
>> > try to run valgrind on udev just to be sure.
>>
>> Nah, that memory would be freed if you killed all the udev processes,
>> but it isn't.
>>
>> The many udev worker processes you see for a few seconds were caused by
>> udevd handling events with TIMEOUT= set as special. We need to make sure
>> that firmware events run immediately and don't wait for other
>> processes to finish. The logic that does that was always creating a new
>> worker. I changed this now, but it will not affect the underlying
>> problem you are seeing; it will just keep the udev workers from growing
>> within a timeframe of less than 10 seconds. The change is here:
>>   http://git.kernel.org/?p=linux/hotplug/udev.git;a=commit;h=665ee17def2caa6811ae032ae68ebf8239a18cf8
>> but as mentioned, this change is unrelated to the memory leak you are seeing.
>>
>> > I'll be glad if someone can run the simple driver I posted and confirm
>> > that they see the same problem.
>>
>> I can confirm that memory gets lost. I suspect for some reason the
>> firmware does not get properly cleaned up. If you increase the size of
>> the firmware image, it will leak memory much faster.
>
> I guess the assumption that vfree() will free pages that were allocated
> by custom code, and not by vmalloc(), is not true.
>
> The attached changes seem to fix the issue for me.
>
> The custom page-array mangling was introduced by David as an optimization
> in commit 6e03a201bbe8137487f340d26aa662110e324b20; this should be
> reviewed and, if needed, fixed.
>
> Cheers,
> Kay
>
>
> diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
> index 985da11..fe4e872 100644
> --- a/drivers/base/firmware_class.c
> +++ b/drivers/base/firmware_class.c
> @@ -162,7 +162,7 @@ static ssize_t firmware_loading_store(struct device *dev,
>                        mutex_unlock(&fw_lock);
>                        break;
>                }
> -               vfree(fw_priv->fw->data);
> +               vunmap(fw_priv->fw->data);
>                fw_priv->fw->data = NULL;
>                for (i = 0; i < fw_priv->nr_pages; i++)
>                        __free_page(fw_priv->pages[i]);
> @@ -176,7 +176,7 @@ static ssize_t firmware_loading_store(struct device *dev,
>                break;
>        case 0:
>                if (test_bit(FW_STATUS_LOADING, &fw_priv->status)) {
> -                       vfree(fw_priv->fw->data);
> +                       vunmap(fw_priv->fw->data);
>                        fw_priv->fw->data = vmap(fw_priv->pages,
>                                                 fw_priv->nr_pages,
>                                                 0, PAGE_KERNEL_RO);
> @@ -184,9 +184,6 @@ static ssize_t firmware_loading_store(struct device *dev,
>                                dev_err(dev, "%s: vmap() failed\n", __func__);
>                                goto err;
>                        }
> -                       /* Pages will be freed by vfree() */
> -                       fw_priv->page_array_size = 0;
> -                       fw_priv->nr_pages = 0;
>                        complete(&fw_priv->completion);
>                        clear_bit(FW_STATUS_LOADING, &fw_priv->status);
>                        break;
> @@ -578,7 +575,7 @@ release_firmware(const struct firmware *fw)
>                        if (fw->data == builtin->data)
>                                goto free_fw;
>                }
> -               vfree(fw->data);
> +               vunmap(fw->data);
>        free_fw:
>                kfree(fw);
>        }
>

The difference between vfree() and vunmap() is that vfree() also
deallocates the underlying pages, while vunmap() leaves the pages
allocated. I don't think you can replace vfree() with vunmap() the way
you did. The transition from vmalloc() to alloc_pages() was done by the
patch below. I think this needs some more review.

commit 6e03a201bbe8137487f340d26aa662110e324b20
Author: David Woodhouse <dwmw2@...radead.org>
Date:   Thu Apr 9 22:04:07 2009 -0700

    firmware: speed up request_firmware(), v3

    Rather than calling vmalloc() repeatedly to grow the firmware image as
    we receive data from userspace, just allocate and fill individual pages.
    Then vmap() the whole lot in one go when we're done.

    A quick test with a 337KiB iwlagn firmware shows the time taken for
    request_firmware() going from ~32ms to ~5ms after I apply this patch.

    [v2: define PAGE_KERNEL_RO as PAGE_KERNEL where necessary, use min_t()]
    [v3: kunmap() takes the struct page *, not the virtual address]

    Signed-off-by: David Woodhouse <David.Woodhouse@...el.com>
    Tested-by: Sachin Sant <sachinp@...ibm.com>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
