Message-ID: <76b12bf9-3eff-c9da-2c8b-2cc31fb937a4@huaweicloud.com>
Date: Tue, 19 Nov 2024 19:49:09 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Mikulas Patocka <mpatocka@...hat.com>, Song Liu <song@...nel.org>
Cc: Genes Lists <lists@...ience.com>, dm-devel@...ts.linux.dev,
Alasdair Kergon <agk@...hat.com>, Mike Snitzer <snitzer@...nel.org>,
linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org,
linux@...mhuis.info, "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: md-raid 6.11.8 page fault oops
Hi,
On 2024/11/18 18:34, Mikulas Patocka wrote:
>
>
> On Fri, 15 Nov 2024, Song Liu wrote:
>
>> + dm folks
>>
>> It appears the crash happens in dm.c:clone_endio. Commit
>> aaa53168cbcc486ca1927faac00bd99e81d4ff04 made some
>> changes to clone_endio, but I haven't looked into it.
>>
>> Thanks,
>> Song
>
> Hi
>
> That commit just adds a test for tio->ti being NULL, so I doubt that it
> caused this error.
The reported faulting address is 0000000000000050, so ti is definitely
not NULL: ti->type sits at offset 8, and if ti were NULL the fault
address would have been 0x8. However, target_type->end_io is at exactly
offset 0x50, so the problem is that ti->type is NULL.
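
For illustration, here is a minimal userspace sketch of that offset
reasoning. These are not the real dm structures - the pad array below
just stands in for whatever members precede end_io in this build - but
it shows why a fault at 0x8 would have meant ti == NULL, while a fault
at 0x50 means ti->type == NULL:

#include <stdio.h>
#include <stddef.h>

struct target_type {
	char pad[0x50];			/* stand-in for the members before end_io */
	int (*end_io)(void);		/* lands at offset 0x50 */
};

struct dm_target {
	void *table;			/* offset 0 */
	struct target_type *type;	/* offset 8 */
};

int main(void)
{
	/*
	 * If ti were NULL, reading ti->type would fault at address 0x8.
	 * If ti->type is NULL, reading ti->type->end_io faults at
	 * address 0x50 - exactly the reported CR2 value.
	 */
	printf("ti->type at offset %zu, type->end_io at offset 0x%zx\n",
	       offsetof(struct dm_target, type),
	       offsetof(struct target_type, end_io));
	return 0;
}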
Thanks,
Kuai
>
> Mikulas
>
>
>> On Fri, Nov 15, 2024 at 4:12 AM Genes Lists <lists@...ience.com> wrote:
>>>
>>> md-raid crashed with kernel NULL pointer deref on stable 6.11.8.
>>>
>>> Happened with raid6 while rsync was writing (data was pulled over
>>> network).
>>>
>>> This rsync happens twice every day without a problem. This was the
>>> second run after booting 6.11.8, so I will see if/when it happens again -
>>> and whether it is frequent enough to make a bisect possible.
>>>
>>> Nonetheless, reporting now in case it's helpful.
>>>
>>> Full dmesg attached but the interesting part is:
>>>
>>> [33827.216164] BUG: kernel NULL pointer dereference, address:
>>> 0000000000000050
>>> [33827.216183] #PF: supervisor read access in kernel mode
>>> [33827.216193] #PF: error_code(0x0000) - not-present page
>>> [33827.216203] PGD 0 P4D 0
>>> [33827.216211] Oops: Oops: 0000 [#1] PREEMPT SMP PTI
>>> [33827.216221] CPU: 4 UID: 0 PID: 793 Comm: md127_raid6 Not tainted
>>> 6.11.8-stable-1 #21 1400000003000000474e5500ae13c727d476f9ab
>>> [33827.216240] Hardware name: To Be Filled By O.E.M. To Be Filled By
>>> O.E.M./Z370 Extreme4, BIOS P4.20 10/31/2019
>>> [33827.216254] RIP: 0010:clone_endio+0x43/0x1f0 [dm_mod]
>>> [33827.216279] Code: 4c 8b 77 e8 65 48 8b 1c 25 28 00 00 00 48 89 5c 24
>>> 08 48 89 fb 88 44 24 07 4d 85 f6 0f 84 11 01 00 00 49 8b 56 08 4c 8b 6b
>>> e0 <48> 8b 6a 50 4d 8b 65 38 3c 05 0f 84 0b 01 00 00 66 90 48 85 ed 74
>>> [33827.216304] RSP: 0018:ffffb9610101bb40 EFLAGS: 00010282
>>> [33827.216315] RAX: 0000000000000000 RBX: ffff9b15b8c5c598 RCX:
>>> 000000000015000c
>>> [33827.216326] RDX: 0000000000000000 RSI: ffffec17e1944200 RDI:
>>> ffff9b15b8c5c598
>>> [33827.216338] RBP: 0000000000000000 R08: ffff9b1825108c00 R09:
>>> 000000000015000c
>>> [33827.216349] R10: 000000000015000c R11: 00000000ffffffff R12:
>>> ffff9b10da026000
>>> [33827.216360] R13: ffff9b15b8c5c520 R14: ffff9b10ca024440 R15:
>>> ffff9b1474cb33c0
>>> [33827.216372] FS: 0000000000000000(0000) GS:ffff9b185ee00000(0000)
>>> knlGS:0000000000000000
>>> [33827.216385] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> [33827.216394] CR2: 0000000000000050 CR3: 00000001f4e22005 CR4:
>>> 00000000003706f0
>>> [33827.216406] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
>>> 0000000000000000
>>> [33827.216417] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
>>> 0000000000000400
>>> [33827.216429] Call Trace:
>>> [33827.216435] <TASK>
>>> [33827.216442] ? __die_body.cold+0x19/0x27
>>> [33827.216453] ? page_fault_oops+0x15a/0x2d0
>>> [33827.216465] ? exc_page_fault+0x7e/0x180
>>> [33827.216475] ? asm_exc_page_fault+0x26/0x30
>>> [33827.216486] ? clone_endio+0x43/0x1f0 [dm_mod
>>> 1400000003000000474e5500e90ca42f094c5280]
>>> [33827.216510] clone_endio+0x120/0x1f0 [dm_mod
>>> 1400000003000000474e5500e90ca42f094c5280]
>>> [33827.216533] md_end_clone_io+0x42/0xa0 [md_mod
>>> 1400000003000000474e55004ac7ec7b1ac1c22c]
>>> [33827.216559] handle_stripe_clean_event+0x1e6/0x430 [raid456
>>> 1400000003000000474e550080acde909728c7a9]
>>> [33827.216583] handle_stripe+0x9a3/0x1c00 [raid456
>>> 1400000003000000474e550080acde909728c7a9]
>>> [33827.216606] handle_active_stripes.isra.0+0x381/0x5b0 [raid456
>>> 1400000003000000474e550080acde909728c7a9]
>>> [33827.216625] ? psi_task_switch+0xb7/0x200
>>> [33827.216637] raid5d+0x450/0x670 [raid456
>>> 1400000003000000474e550080acde909728c7a9]
>>> [33827.216655] ? lock_timer_base+0x76/0xa0
>>> [33827.216666] md_thread+0xa2/0x190 [md_mod
>>> 1400000003000000474e55004ac7ec7b1ac1c22c]
>>> [33827.216689] ? __pfx_autoremove_wake_function+0x10/0x10
>>> [33827.216701] ? __pfx_md_thread+0x10/0x10 [md_mod
>>> 1400000003000000474e55004ac7ec7b1ac1c22c]
>>> [33827.216723] kthread+0xcf/0x100
>>> [33827.216731] ? __pfx_kthread+0x10/0x10
>>> [33827.216740] ret_from_fork+0x31/0x50
>>> [33827.216749] ? __pfx_kthread+0x10/0x10
>>> [33827.216757] ret_from_fork_asm+0x1a/0x30
>>> [33827.216769] </TASK>
>>>
>>> --
>>> Gene
>>>