Message-ID: <c2bfc960-d86c-b20a-e3eb-7995200a5dd8@gmail.com>
Date: Tue, 26 Jan 2021 08:50:02 +0800
From: brookxu <brookxu.cn@...il.com>
To: Theodore Ts'o <tytso@....edu>
Cc: adilger.kernel@...ger.ca, jack@...e.com,
harshadshirwadkar@...il.com, linux-ext4@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 0/4] make jbd2 debug switch per device

Theodore Ts'o wrote on 2021/1/26 5:50:
> On Sat, Jan 23, 2021 at 08:00:42PM +0800, Chunguang Xu wrote:
>> On a multi-disk machine, because the jbd2 debugging switch is global, the
>> logs of multiple disks get mixed together. It is not easy to distinguish
>> the logs of each disk, and the volume of generated logs is very large. A
>> separate debugging switch for each disk would be better, so that we can
>> easily isolate the logs of a particular disk.
>>
>> We can enable jbd2 debugging for a given device in the following way:
>> echo X > /proc/fs/jbd2/sdX/jbd2_debug
>>
>> But there is a small disadvantage here. Because the debugging switch is
>> placed in the journal_t object, log messages emitted before the object is
>> initialized will be lost. However, usually this does not have much impact
>> on debugging.
>
> The jbd debugging infrastructure dates back to the very beginnings of
> ext3, when Stephen Tweedie added it while he was first implementing
> the jbd layer. So it dates back to a time before we had other
> schemes like dynamic debug, tracepoints, or eBPF.
>
> I wonder if, instead of trying to enhance our own bespoke debugging
> system, we should set up something like tracepoints where they would
> be useful. I'm not proposing that we try to replace all jbd_debug()
> statements with tracepoints, but I think it would be useful to look at
> what sort of information would actually be *useful* on a production
> server, and add those tracepoints to the jbd2 layer. What I like
> about tracepoints is that you can enable them in a much more
> fine-grained fashion; information is sent to userspace in a much more
> efficient manner than printk; you can filter tracepoint events in the
> kernel, before sending them to userspace; and if you want more
> sophisticated filtering or aggregation, you can use eBPF.
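If I understand the suggestion correctly, a new event in the jbd2 layer
would look much like the ones already in include/trace/events/jbd2.h.
Just to make it concrete for myself, roughly something like this (an
illustrative sketch only, not a proposal; the event name and fields are
made up):

TRACE_EVENT(jbd2_example_event,

	TP_PROTO(journal_t *journal, tid_t tid),

	TP_ARGS(journal, tid),

	TP_STRUCT__entry(
		__field(dev_t, dev)
		__field(tid_t, tid)
	),

	TP_fast_assign(
		__entry->dev = journal->j_fs_dev->bd_dev;
		__entry->tid = tid;
	),

	/* the dev field is what allows per-device filtering */
	TP_printk("dev %d,%d tid %u",
		  MAJOR(__entry->dev), MINOR(__entry->dev),
		  __entry->tid)
);

Because each event carries the device, it can indeed be enabled and
filtered per disk from tracefs without rebuilding the kernel, which I
agree is attractive for production.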
Tracepoints, eBPF and other hook technologies are indeed better suited to
production environments. But for pure debugging work, adding hook points
like this feels a bit heavyweight. Still, your suggestion is very
valuable, thank you very much.
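By contrast, the per-device switch in this series is conceptually very
small: the debug level moves from the global jbd2_journal_enable_debug
into journal_t, and the check happens per journal. Roughly (a simplified
sketch of the idea only; the field and macro names here are illustrative,
and the real patch wires the /proc file up differently):

/* journal_t carries its own level, set via /proc/fs/jbd2/sdX/jbd2_debug */
struct journal_s {
	/* ... existing fields ... */
	unsigned int	j_debug_level;
};

/* the debug macro tests the level of the journal it is called on and
 * prefixes messages with the device name, so the logs of different
 * disks can be told apart */
#define jbd2_dev_debug(journal, n, fmt, a...)				\
do {									\
	if ((journal)->j_debug_level >= (n))				\
		printk(KERN_DEBUG "%s: (%s, %d): " fmt,			\
		       (journal)->j_devname, __func__, __LINE__, ##a);	\
} while (0)

That keeps the cost of a disabled level at a single integer comparison,
which is why it still feels lighter for throwaway debugging.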
> What was the original use case which inspired this? Were you indeed
> trying to debug some kind of problem on a production system? (Why did
> you have multiple disks active at the same time?) Was there a
> specific problem you were trying to debug? What debug level were you
> using? Which jbd_debug statements were most useful to you? Which
> ones just got in the way (but had to be enabled given the log level
> required to see the messages you actually needed)?
We only do this in a test environment, mainly to make debugging easier.
We adjust the log level dynamically; sometimes it is 1, sometimes higher.
There are two main reasons why multiple disks are active at the same
time: first, the system management tool updates the system disk, and
second, collaborative tasks update the other disks. During the actual
debugging we added quite a few extra logs of our own. The original logs
in the system are useful, but some of them don't feel very meaningful.
Thanks.
> - Ted
>