Date:   Thu, 12 Dec 2019 02:27:55 +0000
From:   Likai <li.kai4@....com>
To:     Changwei Ge <chge@...ux.alibaba.com>,
        "mark@...heh.com" <mark@...heh.com>,
        "jlbec@...lplan.org" <jlbec@...lplan.org>,
        "joseph.qi@...ux.alibaba.com" <joseph.qi@...ux.alibaba.com>
CC:     "ocfs2-devel@....oracle.com" <ocfs2-devel@....oracle.com>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] ocfs2: call journal flush to mark journal as empty after
 journal recovery when mount

Hi Changwei,

Thanks for your comments; I will modify the log info inline.

Thanks

On 2019/12/11 20:48, Changwei Ge wrote:
> Hi Kai,
>
> Now the problem is more clear to me.
>
> Just a few comments inline.
>
> On 12/11/19 6:03 PM, Kai Li wrote:
>> If the journal is dirty at mount time, it will be replayed, but the
>> jbd2 superblock log tail cannot be updated to mark a new start,
>> because journal->j_flags has already been set to JBD2_ABORT in
>> journal_init_common. When a new transaction is committed, it will be
>> recorded starting at block 1 (journal->j_tail is set to 1 in
>> journal_reset).
>>
>> If an emergency restart unfortunately happens again before the
>> journal super block is updated, the newly recorded transaction will
>> not be replayed on the next mount.
> So this issue happens before the first log-committing iteration runs
> (in the jbd2 kernel thread), right?
> This leaves jbd2 without a chance to update the *on-disk* journal
> super block, while the dynamic status of the journal remains only in
> memory.
>
> Yes, I agree. We should update the on-disk journal super block after
> journal recovery, since that procedure doesn't involve the in-memory
> journal status, which has just been reset to the very beginning of the
> journal region (tail and head pointing to the first block).
>
> Moreover, when journal recovery is run by peer nodes, ocfs2 already
> does exactly what this patch does.
>
> ocfs2_recover_node() -> ocfs2_replay_journal().
>
> Very good job catching such an exception. :-)
>
>
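Right. For reference, the flush in ocfs2_replay_journal() follows
roughly this pattern (a simplified excerpt from memory, not verbatim
upstream code), which is what this patch mirrors in ocfs2_journal_load():

        /* wipe the journal */
        jbd2_journal_lock_updates(journal);
        status = jbd2_journal_flush(journal);
        jbd2_journal_unlock_updates(journal);
        if (status < 0)
                mlog_errno(status);
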
>> This exception happens when the LUN is used by only one node. If it
>> is used by multiple nodes, another node will replay the journal and
>> the journal super block will be updated after recovery.
>>
>> To fix this problem, use jbd2_journal_flush to mark the journal as
>> empty, as ocfs2_replay_journal already does.
>>
>> The following jbd2 journal can be generated by touching a new file
>> after the journal is replayed; seq 15 is the first valid commit, but
>> the first seq recorded in the journal super block is 13.
>> logdump:
>> Block 0: Journal Superblock
>> Seq: 0   Type: 4 (JBD2_SUPERBLOCK_V2)
>> Blocksize: 4096   Total Blocks: 32768   First Block: 1
>> First Commit ID: 13   Start Log Blknum: 1
>> Error: 0
>> Feature Compat: 0
>> Feature Incompat: 2 block64
>> Feature RO compat: 0
>> Journal UUID: 4ED3822C54294467A4F8E87D2BA4BC36
>> FS Share Cnt: 1   Dynamic Superblk Blknum: 0
>> Per Txn Block Limit    Journal: 0    Data: 0
>>
>> Block 1: Journal Commit Block
>> Seq: 14   Type: 2 (JBD2_COMMIT_BLOCK)
>>
>> Block 2: Journal Descriptor
>> Seq: 15   Type: 1 (JBD2_DESCRIPTOR_BLOCK)
>> No. Blocknum        Flags
>>   0. 587             none
>> UUID: 00000000000000000000000000000000
>>   1. 8257792         JBD2_FLAG_SAME_UUID
>>   2. 619             JBD2_FLAG_SAME_UUID
>>   3. 24772864        JBD2_FLAG_SAME_UUID
>>   4. 8257802         JBD2_FLAG_SAME_UUID
>>   5. 513             JBD2_FLAG_SAME_UUID JBD2_FLAG_LAST_TAG
>> ...
>> Block 7: Inode
>> Inode: 8257802   Mode: 0640   Generation: 57157641 (0x3682809)
>> FS Generation: 2839773110 (0xa9437fb6)
>> CRC32: 00000000   ECC: 0000
>> Type: Regular   Attr: 0x0   Flags: Valid
>> Dynamic Features: (0x1) InlineData
>> User: 0 (root)   Group: 0 (root)   Size: 7
>> Links: 1   Clusters: 0
>> ctime: 0x5de5d870 0x11104c61 -- Tue Dec  3 11:37:20.286280801 2019
>> atime: 0x5de5d870 0x113181a1 -- Tue Dec  3 11:37:20.288457121 2019
>> mtime: 0x5de5d870 0x11104c61 -- Tue Dec  3 11:37:20.286280801 2019
>> dtime: 0x0 -- Thu Jan  1 08:00:00 1970
>> ...
>> Block 9: Journal Commit Block
>> Seq: 15   Type: 2 (JBD2_COMMIT_BLOCK)
>>
>> The following is the journal recovery log produced when recovering
>> the above jbd2 journal on the next mount.
>> syslog:
>> [ 2265.648622] ocfs2: File system on device (252,1) was not unmounted cleanly, recovering it.
>> [ 2265.649695] fs/jbd2/recovery.c:(do_one_pass, 449): Starting recovery pass 0
>> [ 2265.650407] fs/jbd2/recovery.c:(do_one_pass, 449): Starting recovery pass 1
>> [ 2265.650409] fs/jbd2/recovery.c:(do_one_pass, 449): Starting recovery pass 2
>> [ 2265.650410] fs/jbd2/recovery.c:(jbd2_journal_recover, 278): JBD2: recovery, exit status 0, recovered transactions 13 to 13
>>
>> Because the first commit seq 13 recorded in the journal super block
>> is not consistent with the value recorded in block 1 (seq 14),
>> journal recovery terminates before seq 15 even though it is an
>> unbroken commit; inode 8257802 is a new file and it will be lost.
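To spell out the failure mode described above: recovery starts from the
seq recorded in the journal super block (13) and stops at the first log
block whose seq does not match, so the stale commit block with seq 14
hides the valid seq-15 transaction. A tiny userspace illustration of
that check (hypothetical code modelling the behaviour described above,
not the actual jbd2 scan):

        #include <stdio.h>

        int main(void)
        {
                /* seq values at the head of the log, as in the dump above:
                 * block 1 is a stale commit block with seq 14, blocks 2..9
                 * hold the real transaction with seq 15. */
                int log_seq[] = { 14, 15, 15 };
                int expected = 13;      /* First Commit ID from the journal sb */
                unsigned int i;

                for (i = 0; i < sizeof(log_seq) / sizeof(log_seq[0]); i++) {
                        if (log_seq[i] != expected) {
                                printf("scan stops: found seq %d, expected %d\n",
                                       log_seq[i], expected);
                                return 0;       /* seq 15 is never replayed */
                        }
                        expected++;
                }
                return 0;
        }
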
>>
>> Signed-off-by: Kai Li <li.kai4@....com>
>> ---
>>   fs/ocfs2/journal.c | 8 ++++++++
>>   1 file changed, 8 insertions(+)
>>
>> diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
>> index 1afe57f425a0..b8b9d26fa731 100644
>> --- a/fs/ocfs2/journal.c
>> +++ b/fs/ocfs2/journal.c
>> @@ -1066,6 +1066,14 @@ int ocfs2_journal_load(struct ocfs2_journal *journal, int local, int replayed)
>>   
>>   	ocfs2_clear_journal_error(osb->sb, journal->j_journal, osb->slot_num);
>>   
>> +	if (replayed) {
>> +		/* wipe the journal */
I will change the mlog line below to:
        mlog(ML_NOTICE, "journal recovery complete");
The reason for keeping the notice is that when the journal is dirty we
have already printed "File system on device (*) was not unmounted
cleanly, recovering it" earlier, so this line marks that the recovery
is done.
>> +		jbd2_journal_lock_updates(journal->j_journal);
>> +		status = jbd2_journal_flush(journal->j_journal);
>> +		jbd2_journal_unlock_updates(journal->j_journal);
I will also check the return value here:
        if (status < 0)
                mlog_errno(status);
If the flush fails, it may cause files to be lost in the future.
>> +		mlog(ML_NOTICE, "journal recovery complete, status=%d", status);
> Is the above mlog line necessary?
> Can we just log a warning only when jbd2_journal_flush() fails and let
> the mount continue? IMO, the mlog line doesn't help much.
>
> Otherwise, this patch looks good to me.
>
>
> Thanks,
> Changwei
>
>
>> +	}
>> +
>>   	status = ocfs2_journal_toggle_dirty(osb, 1, replayed);
>>   	if (status < 0) {
>>   		mlog_errno(status);
>>
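
Putting the inline notes above together, the updated hunk would look
roughly like this (a sketch of the planned change, keeping only the
error check and the shortened notice):

        if (replayed) {
                /* wipe the journal */
                jbd2_journal_lock_updates(journal->j_journal);
                status = jbd2_journal_flush(journal->j_journal);
                jbd2_journal_unlock_updates(journal->j_journal);
                if (status < 0)
                        mlog_errno(status);
                /*
                 * The "was not unmounted cleanly, recovering it" notice
                 * was printed earlier; report that recovery finished.
                 */
                mlog(ML_NOTICE, "journal recovery complete");
        }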
