Message-ID: <632c1ef5-a0cb-30e2-4d1c-08e6463d6cda@gusev.co>
Date:   Tue, 30 Jul 2019 00:12:33 +0300
From:   Dmitrij Gusev <dmitrij@...ev.co>
To:     "Theodore Y. Ts'o" <tytso@....edu>
Cc:     "'linux-ext4@...r.kernel.org'" <linux-ext4@...r.kernel.org>
Subject: Re: ext4 file system is constantly writing to the block device with
 no activity from the applications, is it a bug?

Hello.

Yes, this is the lazy inode table initialization.

After I remounted the FS with the "noinit_itable" option, the activity 
stopped (I then remounted it without the option so the initialization 
could continue).
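
For reference, the remount was roughly the following (the mount point is 
just an example here, and plain "init_itable" / "init_itable=n" turns the 
background zeroing back on):

    # stop the background inode table zeroing
    mount -o remount,noinit_itable /mnt/data

    # later, turn it back on so the zeroing can finish
    mount -o remount,init_itable /mnt/data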

I appreciate your help.

P.S.

Thank you, Theodore, and everyone on the development team, for the great 
ext4 filesystem.

It is a reliable, high-performance file system that has served me well 
for many years and continues to do so.

I have never lost any data with it, even though some of my systems have 
been through many crashes and power losses.

Sincerely,

Dmitrij

On 2019-07-29 15:55, Theodore Y. Ts'o wrote:
> On Mon, Jul 29, 2019 at 02:18:22PM +0300, Dmitrij Gusev wrote:
>> An ext4 file system is constantly writing to the block device with no
>> activity from the applications. Is this a bug?
>>
>> It writes about 64 KB (almost always exactly 64 KB) every 1-2 seconds
>> (I discovered this after a RAID sync finished). Please check the
>> activity log sample below.
> Is this a freshly created file system?  It could be the lazy inode
> table initialization.  You can suppress it using "mount -o
> noinit_itable", but it will leave portions of the inode table
> unzeroed, which can lead to confusion if the system crashes and e2fsck
> has to try to recover the file system.
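
In case it is useful to others reading the archive, one rough way to see
how far the background zeroing has gotten is to count the block groups
that dumpe2fs reports with the ITABLE_ZEROED flag (a sketch only; the
device path is just an example):

    # number of block groups whose inode table has already been zeroed
    dumpe2fs /dev/mapper/vg-data 2>/dev/null | grep -c ITABLE_ZEROED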
>
> Or you can avoid enabling lazy inode table initialization when the file
> system is created, using "mke2fs -t ext4 -E lazy_itable_init=0
> /dev/XXX".  (See the manual page for mke2fs.conf for another way to
> turn it off by default.)
>
> Turning off lazy inode table initialization causes mke2fs to take **much**
> longer, especially on large RAID arrays.  The idea is to trade off
> mkfs time against background activity that initializes the inode table
> after the file system is mounted.  The noinit_itable mount option was
> added so that a distro installer can temporarily suppress the background
> inode table initialization to speed up the install; the initialization
> can then run in the background after the installed system is booted.
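
If stopping the zeroing entirely is too drastic, there is also an
init_itable=n mount option that just makes it less aggressive (larger n
means the thread waits longer between block groups); a sketch, again with
an example mount point and an arbitrary n:

    mount -o remount,init_itable=60 /mnt/data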
>
>
> If that's not it, try installing the blktrace package and then run
> "btrace /dev/<vg>/home", and see what it reports.  For example, here's
> the output from running "touch /mnt/test" (comments prefixed by '#'):
>
> # here's the touch process reading the inode...
> 259,0    2        1    37.115679608  6646  Q  RM 4232 + 8 [touch]
> 259,0    2        2    37.115682891  6646  C  RM 4232 + 8 [0]
> # here's the journal commit, 5 seconds later
> 259,0    1       11    42.543705759  6570  Q  WS 3932216 + 8 [jbd2/pmem0-8]
> 259,0    1       12    42.543709184  6570  C  WS 3932216 + 8 [0]
> 259,0    1       13    42.543713049  6570  Q  WS 3932224 + 8 [jbd2/pmem0-8]
> 259,0    1       14    42.543714248  6570  C  WS 3932224 + 8 [0]
> 259,0    1       15    42.543717049  6570  Q  WS 3932232 + 8 [jbd2/pmem0-8]
> 259,0    1       16    42.543718193  6570  C  WS 3932232 + 8 [0]
> 259,0    1       17    42.543720895  6570  Q  WS 3932240 + 8 [jbd2/pmem0-8]
> 259,0    1       18    42.543722028  6570  C  WS 3932240 + 8 [0]
> 259,0    1       19    42.543724806  6570  Q  WS 3932248 + 8 [jbd2/pmem0-8]
> 259,0    1       20    42.543725952  6570  C  WS 3932248 + 8 [0]
> 259,0    1       21    42.543728697  6570  Q  WS 3932256 + 8 [jbd2/pmem0-8]
> 259,0    1       22    42.543729799  6570  C  WS 3932256 + 8 [0]
> 259,0    1       23    42.543745380  6570  Q FWFS 3932264 + 8 [jbd2/pmem0-8]
> 259,0    1       24    42.543746836  6570  C FWFS 3932264 + 8 [0]
> # and here's the writeback to the inode table and superblock,
> # 30 seconds later
> 259,0    1       25    72.836967205    91  Q   W 0 + 8 [kworker/u8:3]
> 259,0    1       26    72.836970861    91  C   W 0 + 8 [0]
> 259,0    1       27    72.836984218    91  Q  WM 8 + 8 [kworker/u8:3]
> 259,0    1       28    72.836985929    91  C  WM 8 + 8 [0]
> 259,0    1       29    72.836992108    91  Q  WM 4232 + 8 [kworker/u8:3]
> 259,0    1       30    72.836993953    91  C  WM 4232 + 8 [0]
> 259,0    1       31    72.837001370    91  Q  WM 4360 + 8 [kworker/u8:3]
> 259,0    1       32    72.837003210    91  C  WM 4360 + 8 [0]
> 259,0    1       33    72.837010993    91  Q  WM 69896 + 8 [kworker/u8:3]
> 259,0    1       34    72.837012564    91  C  WM 69896 + 8 [0]
>
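
If I read the ext4 code right, the background zeroing is done by a kernel
thread named ext4lazyinit, so if the periodic 64 KB writes really come
from lazy init they should show up attributed to it in the trace; a quick
sketch, keeping the placeholder device path from above:

    btrace /dev/<vg>/home | grep ext4lazyinit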
> Cheers,
>
> 						- Ted
