Message-ID: <2CD72098-08E2-4CAA-B74D-D8C44D318117@vmware.com>
Date: Wed, 19 Jul 2023 10:25:28 +0000
From: Ajay Kaher <akaher@...are.com>
To: Steven Rostedt <rostedt@...dmis.org>
CC: "shuah@...nel.org" <shuah@...nel.org>,
"mhiramat@...nel.org" <mhiramat@...nel.org>,
Ching-lin Yu <chinglinyu@...gle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"linux-trace-kernel@...r.kernel.org"
<linux-trace-kernel@...r.kernel.org>,
"lkp@...el.com" <lkp@...el.com>, Nadav Amit <namit@...are.com>,
"oe-lkp@...ts.linux.dev" <oe-lkp@...ts.linux.dev>,
Alexey Makhalov <amakhalov@...are.com>,
"er.ajay.kaher@...il.com" <er.ajay.kaher@...il.com>,
"srivatsa@...il.mit.edu" <srivatsa@...il.mit.edu>,
Tapas Kundu <tkundu@...are.com>,
Vasavi Sirnapalli <vsirnapalli@...are.com>
Subject: Re: [PATCH v4 00/10] tracing: introducing eventfs
> On 18-Jul-2023, at 7:10 PM, Steven Rostedt <rostedt@...dmis.org> wrote:
>
>
> On Sun, 16 Jul 2023 17:32:35 +0000
> Ajay Kaher <akaher@...are.com> wrote:
>
>> Thanks Steve, hopefully I will fix all the pending nits in v5.
>> Here is the checkpatch.pl report:
>
> Hold off on v5. I hit the following on v4:
OK.
>
> [ 220.170527] BUG: unable to handle page fault for address: fffffffffffffff0
> [ 220.172792] #PF: supervisor read access in kernel mode
> [ 220.174618] #PF: error_code(0x0000) - not-present page
> [ 220.176516] PGD 13104d067 P4D 13104d067 PUD 13104f067 PMD 0
> [ 220.178559] Oops: 0000 [#1] PREEMPT SMP PTI
> [ 220.180087] CPU: 3 PID: 35 Comm: kworker/u8:1 Not tainted 6.5.0-rc1-test-00021-gdd6e7af33766-dirty #15
> [ 220.183441] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> [ 220.186629] Workqueue: events_unbound eventfs_workfn
> [ 220.188286] RIP: 0010:eventfs_set_ef_status_free+0x17/0x40
> [ 220.190091] Code: 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 0f 1f 00 0f 1f 44 00 00 48 8b 47 18 48 8b 40 30 48 83 f8 10 74 1b <f6> 40 f0 02 74 15 48 8b 47 78 48 85 c0 74 0c c6 40 5a 00 48 c7 40
> [ 220.195360] RSP: 0018:ffffa731c0147e20 EFLAGS: 00010287
> [ 220.196802] RAX: 0000000000000000 RBX: ffff97ca512ca000 RCX: 0000000000000000
> [ 220.198703] RDX: 0000000000000001 RSI: ffff97ca52d18010 RDI: ffff97ca512ca000
> [ 220.200540] RBP: ffff97ca52cb3780 R08: 0000000000000064 R09: 00000000802a0022
> [ 220.202324] R10: 0000000000039e80 R11: ffff97cabffd5000 R12: ffff97ca512ca058
> [ 220.204012] R13: ffff97ca52cb3780 R14: ffff97ca40153705 R15: ffffffffad5c1848
> [ 220.205685] FS: 0000000000000000(0000) GS:ffff97cabbd80000(0000) knlGS:0000000000000000
> [ 220.207476] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 220.208764] CR2: fffffffffffffff0 CR3: 000000010a01a001 CR4: 0000000000170ee0
> [ 220.210342] Call Trace:
> [ 220.210879] <TASK>
> [ 220.211359] ? __die+0x23/0x70
> [ 220.212036] ? page_fault_oops+0xa4/0x180
> [ 220.212904] ? exc_page_fault+0xf6/0x190
> [ 220.213738] ? asm_exc_page_fault+0x26/0x30
> [ 220.214586] ? eventfs_set_ef_status_free+0x17/0x40
> [ 220.216081] tracefs_dentry_iput+0x39/0x50
> [ 220.217370] __dentry_kill+0xdc/0x170
> [ 220.218581] dput+0x142/0x310
> [ 220.219647] eventfs_workfn+0x42/0x70
> [ 220.220805] process_one_work+0x1e2/0x3e0
> [ 220.222031] worker_thread+0x1da/0x390
> [ 220.223204] ? __pfx_worker_thread+0x10/0x10
> [ 220.224476] kthread+0xf7/0x130
> [ 220.225543] ? __pfx_kthread+0x10/0x10
> [ 220.226735] ret_from_fork+0x2c/0x50
> [ 220.227898] </TASK>
> [ 220.228792] Modules linked in:
> [ 220.229860] CR2: fffffffffffffff0
> [ 220.230960] ---[ end trace 0000000000000000 ]---
>
>
> I think I know the issue, and looking to see if I can fix it.
- Is this also reproducible on v3?
- Is it reproducible manually, or only with a specific script?
Let me know if I can help.
-Ajay