Message-ID: <bug-217965-13602@https.bugzilla.kernel.org/>
Date:   Mon, 02 Oct 2023 08:10:21 +0000
From:   bugzilla-daemon@...nel.org
To:     linux-ext4@...r.kernel.org
Subject: [Bug 217965] New: ext4(?) regression since 6.5.0 on sata hdd

https://bugzilla.kernel.org/show_bug.cgi?id=217965

            Bug ID: 217965
           Summary: ext4(?) regression since 6.5.0 on sata hdd
           Product: File System
           Version: 2.5
          Hardware: All
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P3
         Component: ext4
          Assignee: fs_ext4@...nel-bugs.osdl.org
          Reporter: iivanich@...il.com
        Regression: No

Since kernels 6.5.x and 6.6-rc* I have been getting weird kworker flush activity
when building OpenWrt from source:

   91 root      20   0       0      0      0 R  99,7   0,0  18:06.57 kworker/u16:4+flush-8:16

The OpenWrt sources reside on a SATA HDD with an ext4 filesystem; I have been
using this setup for the last 5 years. The problem is that since the 6.5
kernels, after the OpenWrt kernel-patching make step
(https://git.openwrt.org/?p=openwrt/openwrt.git;a=blob;f=scripts/patch-kernel.sh;h=c2b7e7204952f93946a6075d546cbeae32c2627f;hb=HEAD,
which probably involves a lot of copy and write operations),
kworker/u16:4+flush-8:16 uses 100% of one core for a while (5-15 minutes),
even after I cancel the OpenWrt build.
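
For anyone who wants to watch the same thing, this is roughly what can be
looked at while the worker is spinning (a sketch; pids and thread names are
from my machine and will differ -- flush-8:16 should be the flusher for block
device 8:16, i.e. /dev/sdb here):

  # the flusher kthread pinning a core
  top -b -n 1 -o %CPU | head -n 20

  # per-device writeback activity while the worker spins
  iostat -x 5

  # dirty/writeback page counts, to see whether pages are actually draining
  grep -E 'Dirty|Writeback' /proc/meminfo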

I tried moving the OpenWrt sources folder to the SSD where my system resides
and running the OpenWrt build from there, and I see no issues with kworker
flush CPU usage there. I also see no such behavior with 6.4.x and older
kernels, so it looks like a regression to me; I'm not sure whether this is a
fs, vfs, or even block subsystem issue.
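
If it turns out to be needed, a bisect between the last good and first bad
releases would be the obvious next step. A minimal sketch, assuming a mainline
git checkout (I have not done this yet):

  # v6.4 behaves, v6.5 does not
  git bisect start
  git bisect bad v6.5
  git bisect good v6.4
  # at each step: build and boot the kernel, rerun the OpenWrt patch step
  # on the HDD, then mark the result
  git bisect good    # or: git bisect bad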

This is how it looks in perf (capture commands are sketched after the listing):
Samples: 320K of event 'cycles:P', Event count (approx.): 363448649248
  Children      Self  Command          Shared Object     Symbol
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] ret_from_fork_asm
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] ret_from_fork
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] kthread
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] worker_thread
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] process_one_work
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] wb_workfn
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] wb_writeback
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] __writeback_inodes_wb
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] writeback_sb_inodes
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] __writeback_single_inode
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] do_writepages
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_writepages
+   12,40%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_do_writepages
+   12,39%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_map_blocks
+   12,39%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_ext_map_blocks
+   12,38%     0,00%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_mb_new_blocks
+   12,38%     0,93%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_mb_regular_allocator
+    9,42%     0,00%  cc1              [unknown]         [.] 0000000000000000
+    5,42%     0,53%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_mb_scan_aligned
+    4,88%     0,69%  kworker/u16:2+f  [kernel.vmlinux]  [k] mb_find_extent
+    3,99%     3,95%  kworker/u16:2+f  [kernel.vmlinux]  [k] mb_find_order_for_block
+    3,51%     0,61%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_mb_load_buddy_gfp
+    2,95%     0,01%  cc1              [kernel.vmlinux]  [k] asm_exc_page_fault
+    2,67%     0,18%  kworker/u16:2+f  [kernel.vmlinux]  [k] pagecache_get_page
+    2,41%     0,40%  kworker/u16:2+f  [kernel.vmlinux]  [k] __filemap_get_folio
+    2,33%     2,10%  cc1              cc1               [.] cpp_get_token_1
+    2,12%     0,05%  cc1              [kernel.vmlinux]  [k] exc_page_fault
+    2,07%     0,04%  cc1              [kernel.vmlinux]  [k] do_user_addr_fault
+    1,81%     0,52%  kworker/u16:2+f  [kernel.vmlinux]  [k] filemap_get_entry
     1,80%     1,71%  cc1              cc1               [.] ht_lookup_with_hash
+    1,77%     0,08%  cc1              [kernel.vmlinux]  [k] handle_mm_fault
+    1,65%     0,14%  cc1              [kernel.vmlinux]  [k] __handle_mm_fault
     1,60%     1,49%  cc1              cc1               [.] _cpp_lex_direct
+    1,54%     0,73%  kworker/u16:2+f  [kernel.vmlinux]  [k] ext4_mb_good_group
+    1,49%     1,46%  cc1              cc1               [.] ggc_internal_alloc
+    1,28%     0,05%  cc1              [kernel.vmlinux]  [k] do_anonymous_page
+    1,28%     0,04%  cc1              [kernel.vmlinux]  [k] entry_SYSCALL_64
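
For completeness, a profile like the above can be captured with something along
these lines (a sketch; the exact invocation used here may have differed):

  # system-wide, with call graphs, while the flusher is busy
  perf record -a -g -- sleep 30

  # 'Children'/'Self' columns come from the (default) --children mode
  perf report --children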

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.
