Message-ID: <MN2PR21MB1375C1C333928779BC4978ECCA350@MN2PR21MB1375.namprd21.prod.outlook.com>
Date: Mon, 13 Jan 2020 19:55:45 +0000
From: Haiyang Zhang <haiyangz@...rosoft.com>
To: Mohammed Gamal <mgamal@...hat.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: KY Srinivasan <kys@...rosoft.com>,
"sashal@...nel.org" <sashal@...nel.org>,
vkuznets <vkuznets@...hat.com>, cavery <cavery@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] hv_netvsc: Fix memory leak when removing rndis device
> -----Original Message-----
> From: Mohammed Gamal <mgamal@...hat.com>
> Sent: Monday, January 13, 2020 2:28 PM
> To: linux-hyperv@...r.kernel.org; Stephen Hemminger
> <sthemmin@...rosoft.com>; Haiyang Zhang <haiyangz@...rosoft.com>;
> netdev@...r.kernel.org
> Cc: KY Srinivasan <kys@...rosoft.com>; sashal@...nel.org; vkuznets
> <vkuznets@...hat.com>; cavery <cavery@...hat.com>; linux-
> kernel@...r.kernel.org; Mohammed Gamal <mgamal@...hat.com>
> Subject: [PATCH] hv_netvsc: Fix memory leak when removing rndis device
>
> kmemleak detects the following memory leak when hot removing a network
> device:
>
> unreferenced object 0xffff888083f63600 (size 256):
>   comm "kworker/0:1", pid 12, jiffies 4294831717 (age 1113.676s)
>   hex dump (first 32 bytes):
>     00 40 c7 33 80 88 ff ff 00 00 00 00 10 00 00 00  .@..............
>     00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00  .....N..........
>   backtrace:
>     [<00000000d4a8f5be>] rndis_filter_device_add+0x117/0x11c0 [hv_netvsc]
>     [<000000009c02d75b>] netvsc_probe+0x5e7/0xbf0 [hv_netvsc]
>     [<00000000ddafce23>] vmbus_probe+0x74/0x170 [hv_vmbus]
>     [<00000000046e64f1>] really_probe+0x22f/0xb50
>     [<000000005cc35eb7>] driver_probe_device+0x25e/0x370
>     [<0000000043c642b2>] bus_for_each_drv+0x11f/0x1b0
>     [<000000005e3d09f0>] __device_attach+0x1c6/0x2f0
>     [<00000000a72c362f>] bus_probe_device+0x1a6/0x260
>     [<0000000008478399>] device_add+0x10a3/0x18e0
>     [<00000000cf07b48c>] vmbus_device_register+0xe7/0x1e0 [hv_vmbus]
>     [<00000000d46cf032>] vmbus_add_channel_work+0x8ab/0x1770 [hv_vmbus]
>     [<000000002c94bb64>] process_one_work+0x919/0x17d0
>     [<0000000096de6781>] worker_thread+0x87/0xb40
>     [<00000000fbe7397e>] kthread+0x333/0x3f0
>     [<000000004f844269>] ret_from_fork+0x3a/0x50
>
> rndis_filter_device_add() allocates an instance of struct rndis_device
> which never gets deallocated; rndis_filter_device_remove() sets
> net_device->extension, which points to the rndis_device struct, to NULL
> without ever freeing the structure first, leaving it dangling.
>
> This patch fixes this by freeing the structure before setting
> net_device->extension to NULL.
>
> Signed-off-by: Mohammed Gamal <mgamal@...hat.com>
> ---
> drivers/net/hyperv/rndis_filter.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
> index 857c4bea451c..d2e094f521a4 100644
> --- a/drivers/net/hyperv/rndis_filter.c
> +++ b/drivers/net/hyperv/rndis_filter.c
> @@ -1443,6 +1443,7 @@ void rndis_filter_device_remove(struct hv_device *dev,
>  	/* Halt and release the rndis device */
>  	rndis_filter_halt_device(net_dev, rndis_dev);
>
> +	kfree(rndis_dev);
>  	net_dev->extension = NULL;

The struct rndis_device *should* be freed in free_netvsc_device_rcu()
==> free_netvsc_device():
static void free_netvsc_device(struct rcu_head *head)
{
	struct netvsc_device *nvdev
		= container_of(head, struct netvsc_device, rcu);
	int i;

	/* nvdev->extension is the rndis_device allocated in
	 * rndis_filter_device_add()
	 */
	kfree(nvdev->extension);

So we no longer free it in rndis_filter_device_remove().
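
For context, the RCU free path is reached via free_netvsc_device_rcu();
roughly the following, paraphrased from drivers/net/hyperv/netvsc.c
rather than quoted exactly:

/* Defer the actual kfree of nvdev (and of nvdev->extension, the
 * rndis_device) until after an RCU grace period has elapsed.
 */
static void free_netvsc_device_rcu(struct netvsc_device *nvdev)
{
	call_rcu(&nvdev->rcu, free_netvsc_device);
}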

However, commit 02400fcee2542ee334a2394e0d9f6efd969fe782 did have a bug:

Date: Tue, 20 Mar 2018 15:03:03 -0700
[PATCH] hv_netvsc: use RCU to fix concurrent rx and queue changes

When it moved the free() into free_netvsc_device(), it should also have
removed the following line from rndis_filter_device_remove():

>	net_dev->extension = NULL;

Because that line runs before the RCU callback, free_netvsc_device()
later calls kfree() on a NULL nvdev->extension, so the rndis_device is
never freed. Removing the line will fix the leak of rndis_dev. I suggest
you fix it this way.
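
For illustration, that alternative fix would be a one-line removal along
these lines (an untested sketch; the exact hunk context is elided):

--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ ... @@ void rndis_filter_device_remove(struct hv_device *dev,
 	/* Halt and release the rndis device */
 	rndis_filter_halt_device(net_dev, rndis_dev);
 
-	net_dev->extension = NULL;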
Thanks,
- Haiyang