Message-ID: <1571237707.5937.58.camel@lca.pw>
Date: Wed, 16 Oct 2019 10:55:07 -0400
From: Qian Cai <cai@....pw>
To: Tom Murphy <murphyt7@....ie>
Cc: Robin Murphy <robin.murphy@....com>,
Christoph Hellwig <hch@....de>, Joerg Roedel <jroedel@...e.de>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: "Convert the AMD iommu driver to the dma-iommu api" is buggy
Today's linux-next generates a lot of warnings on multiple servers during boot
due to the series "iommu/amd: Convert the AMD iommu driver to the dma-iommu api"
[1]. Reverting the whole series fixed them.
[1] https://lore.kernel.org/lkml/20190908165642.22253-1-murphyt7@tcd.ie/
[ 257.785749][ T6184] BUG: sleeping function called from invalid context at mm/page_alloc.c:4692
[ 257.794886][ T6184] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6184, name: readelf
[ 257.803574][ T6184] INFO: lockdep is turned off.
[ 257.808233][ T6184] CPU: 86 PID: 6184 Comm: readelf Tainted: G        W         5.4.0-rc3-next-20191016+ #7
[ 257.818035][ T6184] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
[ 257.827313][ T6184] Call Trace:
[ 257.830487][ T6184] dump_stack+0x86/0xca
[ 257.834530][ T6184] ___might_sleep.cold.92+0xd2/0x122
[ 257.839708][ T6184] __might_sleep+0x73/0xe0
[ 257.844011][ T6184] __alloc_pages_nodemask+0x442/0x720
[ 257.849274][ T6184] ? __alloc_pages_slowpath+0x18d0/0x18d0
[ 257.854886][ T6184] ? debug_lockdep_rcu_enabled+0x27/0x60
[ 257.860415][ T6184] ? lock_downgrade+0x3c0/0x3c0
[ 257.865156][ T6184] alloc_pages_current+0x9c/0x110
[ 257.870071][ T6184] __get_free_pages+0x12/0x60
[ 257.874634][ T6184] get_zeroed_page+0x16/0x20
[ 257.879112][ T6184] amd_iommu_map+0x504/0x850
[ 257.883588][ T6184] ? amd_iommu_domain_direct_map+0x60/0x60
[ 257.889286][ T6184] ? lockdep_hardirqs_on+0x16/0x2a0
[ 257.894373][ T6184] ? alloc_iova+0x189/0x210
[ 257.898765][ T6184] ? trace_hardirqs_on+0x3a/0x160
[ 257.903677][ T6184] iommu_map+0x1b3/0x4d0
[ 257.907802][ T6184] ? iommu_unmap+0xf0/0xf0
[ 257.912104][ T6184] ? alloc_iova_fast+0x258/0x3d1
[ 257.916929][ T6184] ? create_object+0x4a2/0x540
[ 257.921579][ T6184] iommu_map_sg+0x9d/0x120
[ 257.925882][ T6184] iommu_dma_map_sg+0x2c3/0x450
[ 257.930627][ T6184] scsi_dma_map+0xd7/0x160
[ 257.934936][ T6184] pqi_raid_submit_scsi_cmd_with_io_request+0x392/0x420 [smartpqi]
[ 257.942735][ T6184] ? pqi_alloc_io_request+0x127/0x140 [smartpqi]
[ 257.948962][ T6184] pqi_scsi_queue_command+0x8ab/0xe00 [smartpqi]
[ 257.955192][ T6184] ? pqi_eh_device_reset_handler+0x9c0/0x9c0 [smartpqi]
[ 257.962029][ T6184] ? sd_init_command+0xa25/0x1346 [sd_mod]
[ 257.967730][ T6184] scsi_queue_rq+0xd19/0x1360
[ 257.972298][ T6184] __blk_mq_try_issue_directly+0x295/0x3f0
[ 257.977999][ T6184] ? blk_mq_request_bypass_insert+0xd0/0xd0
[ 257.983787][ T6184] ? debug_lockdep_rcu_enabled+0x27/0x60
[ 257.989312][ T6184] blk_mq_request_issue_directly+0xb5/0x100
[ 257.995098][ T6184] ? blk_mq_flush_plug_list+0x7e0/0x7e0
[ 258.000537][ T6184] ? blk_mq_sched_insert_requests+0xd6/0x380
[ 258.006409][ T6184] ? lock_downgrade+0x3c0/0x3c0
[ 258.011147][ T6184] blk_mq_try_issue_list_directly+0xa9/0x160
[ 258.017023][ T6184] blk_mq_sched_insert_requests+0x228/0x380
[ 258.022810][ T6184] blk_mq_flush_plug_list+0x448/0x7e0
[ 258.028073][ T6184] ? blk_mq_insert_requests+0x380/0x380
[ 258.033516][ T6184] blk_flush_plug_list+0x1eb/0x230
[ 258.038515][ T6184] ? blk_insert_cloned_request+0x1b0/0x1b0
[ 258.044215][ T6184] blk_finish_plug+0x43/0x5d
[ 258.048695][ T6184] read_pages+0xf6/0x3b0
[ 258.052823][ T6184] ? read_cache_pages+0x350/0x350
[ 258.057737][ T6184] ? __page_cache_alloc+0x12c/0x230
[ 258.062826][ T6184] __do_page_cache_readahead+0x346/0x3a0
[ 258.068348][ T6184] ? read_pages+0x3b0/0x3b0
[ 258.072738][ T6184] ? lockdep_hardirqs_on+0x16/0x2a0
[ 258.077928][ T6184] ? __xfs_filemap_fault+0x167/0x4a0 [xfs]
[ 258.083625][ T6184] filemap_fault+0xa13/0xe70
[ 258.088201][ T6184] __xfs_filemap_fault+0x167/0x4a0 [xfs]
[ 258.093731][ T6184] ? kmemleak_alloc+0x57/0x90
[ 258.098397][ T6184] ? xfs_file_read_iter+0x3c0/0x3c0 [xfs]
[ 258.104009][ T6184] ? debug_check_no_locks_freed+0x2c/0xe0
[ 258.109618][ T6184] ? lockdep_init_map+0x8b/0x2b0
[ 258.114543][ T6184] xfs_filemap_fault+0x68/0x70 [xfs]
[ 258.119720][ T6184] __do_fault+0x83/0x220
[ 258.123847][ T6184] __handle_mm_fault+0xd76/0x1f40
[ 258.128757][ T6184] ? __pmd_alloc+0x280/0x280
[ 258.133231][ T6184] ? debug_lockdep_rcu_enabled+0x27/0x60
[ 258.138755][ T6184] ? handle_mm_fault+0x178/0x4c0
[ 258.143581][ T6184] ? lockdep_hardirqs_on+0x16/0x2a0
[ 258.148674][ T6184] ? __do_page_fault+0x29c/0x640
[ 258.153501][ T6184] handle_mm_fault+0x205/0x4c0
[ 258.158151][ T6184] __do_page_fault+0x29c/0x640
[ 258.162800][ T6184] do_page_fault+0x50/0x37f
[ 258.167189][ T6184] page_fault+0x34/0x40
[ 258.171228][ T6184] RIP: 0010:__clear_user+0x3b/0x70
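
The first splat boils down to a sleeping allocation in atomic context: the
request is issued from the block layer's plug flush with preemption disabled,
but amd_iommu_map() ends up in get_zeroed_page(GFP_KERNEL) for its page
tables. A minimal sketch of the offending pattern (illustrative names only,
assuming CONFIG_DEBUG_ATOMIC_SLEEP=y; this is not the actual driver code):

#include <linux/gfp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static void demo_map_in_atomic_context(void)
{
	unsigned long page;

	spin_lock(&demo_lock);	/* enter atomic context */

	/*
	 * GFP_KERNEL may sleep for reclaim, so __might_sleep() in
	 * __alloc_pages_nodemask() fires exactly the "BUG: sleeping
	 * function called from invalid context at mm/page_alloc.c"
	 * warning seen above.
	 */
	page = get_zeroed_page(GFP_KERNEL);

	free_page(page);
	spin_unlock(&demo_lock);
}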
[ 183.553150] BUG: sleeping function called from invalid context at drivers/iommu/iommu.c:1950
[ 183.562306] in_atomic(): 1, irqs_disabled(): 128, non_block: 0, pid: 1486, name: kworker/0:3
[ 183.571450] 5 locks held by kworker/0:3/1486:
[ 183.576510] #0: 44ff0008000ce128 ((wq_completion)events){+.+.}, at: process_one_work+0x25c/0x948
[ 183.586110] #1: 43ff00081fb2fcf8 ((work_completion)(&wfc.work)){+.+.}, at: process_one_work+0x280/0x948
[ 183.596310] #2: ffff000a2c661a08 (&dev->intf_state_mutex){+.+.}, at: mlx5_load_one+0x68/0x12e0 [mlx5_core]
[ 183.606916] #3: ffff9000127e4560 (irq_domain_mutex){+.+.}, at: __irq_domain_alloc_irqs+0x1f8/0x430
[ 183.616683] #4: 02ff0095ca0ed8f0 (&(&cookie->msi_lock)->rlock){....}, at: iommu_dma_prepare_msi+0x70/0x210
[ 183.627146] irq event stamp: 378872
[ 183.631345] hardirqs last enabled at (378871): [<ffff9000109d0230>] _raw_write_unlock_irqrestore+0x4c/0x84
[ 183.641791] hardirqs last disabled at (378872): [<ffff9000109cf7a0>] _raw_spin_lock_irqsave+0x38/0x9c
[ 183.651717] softirqs last enabled at (377854): [<ffff9000100824f4>] __do_softirq+0x864/0x900
[ 183.660951] softirqs last disabled at (377841): [<ffff900010118768>] irq_exit+0x1c8/0x238
[ 183.669836] CPU: 0 PID: 1486 Comm: kworker/0:3 Tainted: G        W L    5.4.0-rc3-next-20191016+ #8
[ 183.679845] Hardware name: HPE Apollo 70 /C01_APACHE_MB, BIOS L50_5.13_1.11 06/18/2019
[ 183.690292] Workqueue: events work_for_cpu_fn
[ 183.695357] Call trace:
[ 183.698510] dump_backtrace+0x0/0x248
[ 183.702878] show_stack+0x20/0x2c
[ 183.706900] dump_stack+0xc8/0x130
[ 183.711009] ___might_sleep+0x314/0x328
[ 183.715551] __might_sleep+0x7c/0xe0
[ 183.719832] iommu_map+0x40/0x70
[ 183.723766] iommu_dma_prepare_msi+0x16c/0x210
[ 183.728916] its_irq_domain_alloc+0x100/0x254
[ 183.733979] irq_domain_alloc_irqs_parent+0x74/0x90
[ 183.739562] msi_domain_alloc+0xa0/0x170
[ 183.744190] __irq_domain_alloc_irqs+0x228/0x430
[ 183.749512] msi_domain_alloc_irqs+0x130/0x548
[ 183.754663] pci_msi_setup_msi_irqs+0x64/0x74
[ 183.759726] __pci_enable_msix_range+0x52c/0x878
[ 183.765049] pci_alloc_irq_vectors_affinity+0x94/0x168
[ 183.771028] mlx5_irq_table_create+0x178/0x748 [mlx5_core]
[ 183.777353] mlx5_load_one+0x710/0x12e0 [mlx5_core]
[ 183.783069] init_one+0x514/0x898 [mlx5_core]
[ 183.788134] local_pci_probe+0x74/0xcc
[ 183.792589] work_for_cpu_fn+0x30/0x4c
[ 183.797045] process_one_work+0x4f4/0x948
[ 183.801760] process_scheduled_works+0x34/0x54
[ 183.806909] worker_thread+0x348/0x4bc
[ 183.811364] kthread+0x1cc/0x1e8
[ 183.815299] ret_from_fork+0x10/0x18
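
The second splat is the same class of problem on the MSI path:
iommu_dma_prepare_msi() takes cookie->msi_lock with spin_lock_irqsave()
(lock #4 above, hence irqs_disabled(): 128) and then calls iommu_map(),
which now starts with a might_sleep() check (drivers/iommu/iommu.c:1950).
A sketch of that shape (names illustrative, not the actual dma-iommu code):

#include <linux/kernel.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_msi_lock);

static int demo_iommu_map(void)
{
	might_sleep();	/* asserts a sleepable context */
	/* ... mapping work that may allocate ... */
	return 0;
}

static void demo_prepare_msi(void)
{
	unsigned long flags;

	/* the irqsave here is why the splat shows irqs_disabled(): 128 */
	spin_lock_irqsave(&demo_msi_lock, flags);
	demo_iommu_map();	/* might_sleep() fires here */
	spin_unlock_irqrestore(&demo_msi_lock, flags);
}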
[ 184.621631] mlx5_core 0000:0b:00.1: Port module event: module 1, Cable unplugged
[ 184.867367] mlx5_core 0000:0b:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[ 186.181802] mlx5_core 0000:0b:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)