Message-ID: <47B7036B26F82C43B9F7EAE984D54CB459DB3C@ALA-MBD.corp.ad.wrs.com>
Date: Mon, 9 Dec 2019 07:43:18 +0000
From: "Yin, Xiaotao" <Xiaotao.Yin@...driver.com>
To: "Yin, Xiaotao" <Xiaotao.Yin@...driver.com>,
"joro@...tes.org" <joro@...tes.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Hao, Kexin" <Kexin.Hao@...driver.com>
Subject: RE: [PATCH] iommu/iova: kmemleak when pfn_lo equals IOVA_ANCHOR
Please ignore this one; I'll send a v2.
Br.
-----Original Message-----
From: Xiaotao Yin <xiaotao.yin@...driver.com>
Sent: Monday, December 9, 2019 3:16 PM
To: joro@...tes.org; iommu@...ts.linux-foundation.org
Cc: linux-kernel@...r.kernel.org; Hao, Kexin <Kexin.Hao@...driver.com>; Yin, Xiaotao <Xiaotao.Yin@...driver.com>
Subject: [PATCH] iommu/iova: kmemleak when pfn_lo equals IOVA_ANCHOR
During an Ethernet (Marvell octeontx2) ring buffer resize test:

ethtool -G eth1 rx <rx ring size> tx <tx ring size>

the following kmemleak is sometimes reported:
unreferenced object 0xffff000b85421340 (size 64):
comm "ethtool", pid 867, jiffies 4295323539 (age 550.500s)
hex dump (first 64 bytes):
80 13 42 85 0b 00 ff ff ff ff ff ff ff ff ff ff ..B.............
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
ff ff ff ff ff ff ff ff 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<000000001b204ddf>] kmem_cache_alloc+0x1b0/0x350
[<00000000d9ef2e50>] alloc_iova+0x3c/0x168
[<00000000ea30f99d>] alloc_iova_fast+0x7c/0x2d8
[<00000000b8bb2f1f>] iommu_dma_alloc_iova.isra.0+0x12c/0x138
[<000000002f1a43b5>] __iommu_dma_map+0x8c/0xf8
[<00000000ecde7899>] iommu_dma_map_page+0x98/0xf8
[<0000000082004e59>] otx2_alloc_rbuf+0xf4/0x158
[<000000002b107f6b>] otx2_rq_aura_pool_init+0x110/0x270
[<00000000c3d563c7>] otx2_open+0x15c/0x734
[<00000000a2f5f3a8>] otx2_dev_open+0x3c/0x68
[<00000000456a98b5>] otx2_set_ringparam+0x1ac/0x1d4
[<00000000f2fbb819>] dev_ethtool+0xb84/0x2028
[<0000000069b67c5a>] dev_ioctl+0x248/0x3a0
[<00000000af38663a>] sock_ioctl+0x280/0x638
[<000000002582384c>] do_vfs_ioctl+0x8b0/0xa80
[<000000004e1a2c02>] ksys_ioctl+0x84/0xb8
The reason:
alloc_iova_mem() does not zero-initialize the allocation, so new_iova->pfn_lo can happen to equal IOVA_ANCHOR by chance. When __alloc_and_insert_iova_range() then fails with -ENOMEM (iova32_full), the error path calls free_iova_mem(), which deliberately skips any iova whose pfn_lo equals IOVA_ANCHOR (to protect the rbtree anchor node), and the new_iova is leaked, as sketched below.
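As a minimal sketch of the interaction (paraphrasing the mainline
drivers/iommu/iova.c paths as of commit bb68b2fbfbd6; not verbatim):

/* IOVA_ANCHOR marks the static rbtree anchor node. */
#define IOVA_ANCHOR	~0UL

struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
			unsigned long limit_pfn, bool size_aligned)
{
	struct iova *new_iova;
	int ret;

	/* Slab memory is not zeroed: pfn_lo may hold any stale value. */
	new_iova = alloc_iova_mem();
	if (!new_iova)
		return NULL;

	ret = __alloc_and_insert_iova_range(iovad, size, limit_pfn + 1,
					    new_iova, size_aligned);
	if (ret) {
		/* Error path: expected to release new_iova... */
		free_iova_mem(new_iova);
		return NULL;
	}

	return new_iova;
}

void free_iova_mem(struct iova *iova)
{
	/*
	 * ...but this guard, meant to protect the anchor node, also
	 * skips any allocation whose stale pfn_lo happens to equal
	 * IOVA_ANCHOR, so that memory is never returned to the cache.
	 */
	if (iova->pfn_lo != IOVA_ANCHOR)
		kmem_cache_free(iova_cache, iova);
}

Zero-initializing the allocation guarantees pfn_lo == 0 for a fresh
iova, so the error path can always free it.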
Fixes: bb68b2fbfbd6 ("iommu/iova: Add rbtree anchor node")
Signed-off-by: Xiaotao Yin <xiaotao.yin@...driver.com>
---
drivers/iommu/iova.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 41c605b0058f..2c27a661632c 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -233,7 +233,7 @@ static DEFINE_MUTEX(iova_cache_mutex);
 
 struct iova *alloc_iova_mem(void)
 {
-	return kmem_cache_alloc(iova_cache, GFP_ATOMIC);
+	return kmem_cache_alloc(iova_cache, GFP_ATOMIC | __GFP_ZERO);
 }
 EXPORT_SYMBOL(alloc_iova_mem);
 
--
2.17.1