Message-ID: <0047a398-3246-9363-5092-adeab6a8b2b9@deltatee.com>
Date: Tue, 22 Sep 2020 09:38:37 -0600
From: Logan Gunthorpe <logang@...tatee.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>,
Joerg Roedel <joro@...tes.org>, Tom Murphy <murphyt7@....ie>,
David Woodhouse <dwmw2@...radead.org>,
Christoph Hellwig <hch@...radead.org>
Cc: Ashok Raj <ashok.raj@...el.com>, Intel-gfx@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org
Subject: Re: [Intel-gfx] [PATCH v3 0/6] Convert the intel iommu driver to the dma-iommu api
On 2020-09-21 6:24 p.m., Lu Baolu wrote:
>>>> I'm trying to test this on an old Sandy Bridge, but found that I get
>>>> spammed with warnings on boot. I've put a sample of a few of them below.
>>>> They all seem to be related to ioat.
>>>>
>>>> I had the same issue with Tom's v2 but never saw this on his v1.
>>>
>>> Have you verified whether this could be reproduced with the latest
>>> upstream kernel (without this patch series)?
>>
>> Yes.
>
> I am sorry. Just want to make sure I understand you correctly. :-) When
> you say "yes", do you mean you could reproduce this with pure upstream
> kernel (5.9-rc6)?
I mean I've verified the bug does *not* exist without this patch set.
>
>> Also, it's hitting a warning in the dma-iommu code which would not
>> be hit without this patch set.
>
> Without this series, DMA APIs don't go through dma-iommu. Do you mind
> posting the warning message?
It was in my original email, but here it is again:
[ 44.057877] ------------[ cut here ]------------
[ 44.063167] WARNING: CPU: 4 PID: 84 at drivers/iommu/dma-iommu.c:496 __iommu_dma_unmap+0x115/0x130
[ 44.073351] Modules linked in:
[ 44.076882] CPU: 4 PID: 84 Comm: kworker/4:1 Tainted: G W 5.9.0-rc4-00006-g110da1e703a2 #216
[ 44.087935] Hardware name: Supermicro SYS-7047GR-TRF/X9DRG-QF, BIOS 3.0a 12/05/2013
[ 44.096650] Workqueue: events work_for_cpu_fn
[ 44.101644] RIP: 0010:__iommu_dma_unmap+0x115/0x130
[ 44.107225] Code: 48 8b 0c 24 48 c7 44 24 10 00 00 00 00 48 c7 44 24 18 00 00 00 00 48 c7 44 24 20 00 00 00 00 48 c7 44 24 08 ff ff ff ff eb 8d <0f> 0b e9 74 ff ff ff e8 1f 36 66 00 66 66 2e 0f 1f 84 00 00 00 00
[ 44.128443] RSP: 0000:ffffc90002397c38 EFLAGS: 00010206
[ 44.134413] RAX: 0000000000200000 RBX: 0000000000080000 RCX: 0000000000000000
[ 44.144487] RDX: 0000000000000403 RSI: ffffffff82a7fb20 RDI: ffff888477804f30
[ 44.152613] RBP: 00000000fec00000 R08: 00000000000fec00 R09: 00000000000fedff
[ 44.160733] R10: 0000000000000002 R11: 0000000000000004 R12: ffff888270c39000
[ 44.168861] R13: ffff888472d85ee8 R14: 0000000000080000 R15: 0000000000000000
[ 44.176980] FS: 0000000000000000(0000) GS:ffff888479800000(0000) knlGS:0000000000000000
[ 44.186198] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 44.192761] CR2: 0000000000000000 CR3: 0000000002a20001 CR4: 00000000000606e0
[ 44.200871] Call Trace:
[ 44.203716] ? lockdep_hardirqs_on_prepare+0xad/0x180
[ 44.209509] iommu_dma_free+0x18/0x30
[ 44.213734] ioat_free_chan_resources+0x19e/0x300
[ 44.219133] ioat_dma_self_test+0x2a0/0x3d0
[ 44.223964] ioat_pci_probe+0x60e/0x903
[ 44.228387] local_pci_probe+0x42/0x80
[ 44.232721] work_for_cpu_fn+0x16/0x20
[ 44.237037] process_one_work+0x292/0x630
[ 44.241644] ? __schedule+0x344/0x970
[ 44.245868] worker_thread+0x227/0x3e0
[ 44.250185] ? process_one_work+0x630/0x630
[ 44.254989] kthread+0x136/0x150
[ 44.258718] ? kthread_use_mm+0x100/0x100
[ 44.263334] ret_from_fork+0x22/0x30
[ 44.267474] irq event stamp: 1881262
[ 44.271602] hardirqs last enabled at (1881272): [<ffffffff811b8465>] console_unlock+0x435/0x570
[ 44.281593] hardirqs last disabled at (1881281): [<ffffffff811b845b>] console_unlock+0x42b/0x570
[ 44.291588] softirqs last enabled at (1747140): [<ffffffff817da3e5>] ioat_cleanup+0x65/0x470
[ 44.301285] softirqs last disabled at (1747144): [<ffffffff817d797a>] ioat_free_chan_resources+0x6a/0x300
[ 44.312153] ---[ end trace ed0f88fd959a5a39 ]---