Message-ID: <7b4aa1e985007c6d582fffe5e8435f8153e28e0f.camel@redhat.com>
Date:   Sun, 28 Jun 2020 14:13:59 +0300
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     linux-kernel@...r.kernel.org
Cc:     Alexander Viro <viro@...iv.linux.org.uk>,
        linux-fsdevel@...r.kernel.org,
        Mel Gorman <mgorman@...hsingularity.net>,
        Amir Goldstein <amir73il@...il.com>, Jan Kara <jack@...e.cz>
Subject: Commit 'fs: Do not check if there is a fsnotify watcher on pseudo
 inodes' breaks chromium here

Hi,

I just did a routine kernel update and now Chromium crashes on startup.
It happens both in a KVM VM (with virtio-gpu, if that matters) and natively with the amdgpu driver,
so it is most likely not GPU related, although I initially suspected it was.

Chromium starts as a white rectangle, shows a few white rectangles
that resemble its notifications, and then crashes.

The stdout output from Chromium:

[mlevitsk@...rship ~]$chromium-freeworld 
mesa: for the   --simplifycfg-sink-common option: may only occur zero or one times!
mesa: for the   --global-isel-abort option: may only occur zero or one times!
[3379:3379:0628/135151.440930:ERROR:browser_switcher_service.cc(238)] XXX Init()
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004a9048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004a9048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
[3418:3418:0628/135151.459393:ERROR:viz_main_impl.cc(159)] Exiting GPU process due to errors during initialization
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004a9048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall ../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in
syscall 0072
0072
Received signal 11 SEGV_MAPERR 0000004ae048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004a9048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004a9048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
#0 0x55f1e8d6e0d9 ../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004aa048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall Received signal 007211
 SEGV_MAPERR 0000004aa048
#0 0x55f1e8d6e0d9 ../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004aa048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004ad048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004a8048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
Received signal 11 SEGV_MAPERR 0000004a8048
../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 0072
#0 0x55bca82dc0d9 ../../sandbox/linux/seccomp-bpf-helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall ../../sandbox/linux/seccomp-bpf-
helpers/sigsys_handlers.cc:**CRASHING**:seccomp-bpf failure in syscall 00720072

Received signal 11 SEGV_MAPERR 0000004a8048
[3379:3405:0628/135155.856801:FATAL:gpu_data_manager_impl_private.cc(439)] GPU process isn't usable. Goodbye.
#0 0x55f6da0120d9 base::debug::CollectStackTrace()
#1 0x55f6d9f75246 base::debug::StackTrace::StackTrace()
#2 0x55f6d9f85f6c logging::LogMessage::~LogMessage()
#3 0x55f6d7ed5488 content::(anonymous namespace)::IntentionallyCrashBrowserForUnusableGpuProcess()
#4 0x55f6d7ed8479 content::GpuDataManagerImplPrivate::FallBackToNextGpuMode()
#5 0x55f6d7ed4eef content::GpuDataManagerImpl::FallBackToNextGpuMode()
#6 0x55f6d7ee0f41 content::GpuProcessHost::RecordProcessCrash()
#7 0x55f6d7ee105d content::GpuProcessHost::OnProcessCrashed()
#8 0x55f6d7cbe308 content::BrowserChildProcessHostImpl::OnChildDisconnected()
#9 0x55f6da8b511a IPC::ChannelMojo::OnPipeError()
#10 0x55f6da13cd62 mojo::InterfaceEndpointClient::NotifyError()
#11 0x55f6da8c1f9d IPC::(anonymous namespace)::ChannelAssociatedGroupController::OnPipeError()
#12 0x55f6da138968 mojo::Connector::HandleError()
#13 0x55f6da15bce7 mojo::SimpleWatcher::OnHandleReady()
#14 0x55f6da15c0fb mojo::SimpleWatcher::Context::CallNotify()
#15 0x55f6d78eaa73 mojo::core::WatcherDispatcher::InvokeWatchCallback()
#16 0x55f6d78ea38f mojo::core::Watch::InvokeCallback()
#17 0x55f6d78e6efa mojo::core::RequestContext::~RequestContext()
#18 0x55f6d78db76a mojo::core::NodeChannel::OnChannelError()
#19 0x55f6d78f232a mojo::core::(anonymous namespace)::ChannelPosix::OnFileCanReadWithoutBlocking()
#20 0x55f6da03345e base::MessagePumpLibevent::OnLibeventNotification()
#21 0x55f6da0f9b2d event_base_loop
#22 0x55f6da03316d base::MessagePumpLibevent::Run()
#23 0x55f6d9fd79c9 base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run()
#24 0x55f6d9fada7a base::RunLoop::Run()
#25 0x55f6d7ce6324 content::BrowserProcessSubThread::IOThreadRun()
#26 0x55f6d9fe0cb8 base::Thread::ThreadMain()
#27 0x55f6da024705 base::(anonymous namespace)::ThreadFunc()
#28 0x7ff46642f4e2 start_thread
#29 0x7ff462e4c6a3 __GI___clone

Received signal 6
#0 0x55f6da0120d9 base::debug::CollectStackTrace()
#1 0x55f6d9f75246 base::debug::StackTrace::StackTrace()
#2 0x55f6da01170a base::debug::(anonymous namespace)::StackDumpSignalHandler()
#3 0x55f6da011cfe base::debug::(anonymous namespace)::StackDumpSignalHandler()
#4 0x7ff46643ab20 (/usr/lib64/libpthread-2.30.so+0x14b1f)
#5 0x7ff462d87625 __GI_raise
#6 0x7ff462d708d9 __GI_abort
#7 0x55f6da0112d5 base::debug::BreakDebugger()
#8 0x55f6d9f86405 logging::LogMessage::~LogMessage()
#9 0x55f6d7ed5488 content::(anonymous namespace)::IntentionallyCrashBrowserForUnusableGpuProcess()
#10 0x55f6d7ed8479 content::GpuDataManagerImplPrivate::FallBackToNextGpuMode()
#11 0x55f6d7ed4eef content::GpuDataManagerImpl::FallBackToNextGpuMode()
#12 0x55f6d7ee0f41 content::GpuProcessHost::RecordProcessCrash()
#13 0x55f6d7ee105d content::GpuProcessHost::OnProcessCrashed()
#14 0x55f6d7cbe308 content::BrowserChildProcessHostImpl::OnChildDisconnected()
#15 0x55f6da8b511a IPC::ChannelMojo::OnPipeError()
#16 0x55f6da13cd62 mojo::InterfaceEndpointClient::NotifyError()
#17 0x55f6da8c1f9d IPC::(anonymous namespace)::ChannelAssociatedGroupController::OnPipeError()
#18 0x55f6da138968 mojo::Connector::HandleError()
#19 0x55f6da15bce7 mojo::SimpleWatcher::OnHandleReady()
#20 0x55f6da15c0fb mojo::SimpleWatcher::Context::CallNotify()
#21 0x55f6d78eaa73 mojo::core::WatcherDispatcher::InvokeWatchCallback()
#22 0x55f6d78ea38f mojo::core::Watch::InvokeCallback()
#23 0x55f6d78e6efa mojo::core::RequestContext::~RequestContext()
#24 0x55f6d78db76a mojo::core::NodeChannel::OnChannelError()
#25 0x55f6d78f232a mojo::core::(anonymous namespace)::ChannelPosix::OnFileCanReadWithoutBlocking()
#26 0x55f6da03345e base::MessagePumpLibevent::OnLibeventNotification()
#27 0x55f6da0f9b2d event_base_loop
#28 0x55f6da03316d base::MessagePumpLibevent::Run()
#29 0x55f6d9fd79c9 base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run()
#30 0x55f6d9fada7a base::RunLoop::Run()
#31 0x55f6d7ce6324 content::BrowserProcessSubThread::IOThreadRun()
#32 0x55f6d9fe0cb8 base::Thread::ThreadMain()
#33 0x55f6da024705 base::(anonymous namespace)::ThreadFunc()
#34 0x7ff46642f4e2 start_thread
#35 0x7ff462e4c6a3 __GI___clone
  r8: 0000000000000000  r9: 00007ff44e6a58d0 r10: 0000000000000008 r11: 0000000000000246
 r12: 00007ff44e6a6b40 r13: 00007ff44e6a6d00 r14: 000000000000006d r15: 00007ff44e6a6b30
  di: 0000000000000002  si: 00007ff44e6a58d0  bp: 00007ff44e6a5b20  bx: 00007ff44e6a9700
  dx: 0000000000000000  ax: 0000000000000000  cx: 00007ff462d87625  sp: 00007ff44e6a58d0
  ip: 00007ff462d87625 efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
 trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
[end of stack trace]
Calling _exit(1). Core file will not be generated.


I am using Fedora 31 natively, and Fedora 32 in the VM. In the VM I also tried the stock
chromium package (instead of the RPM Fusion build that I usually use), with the same results.

The version of Chromium I use natively is:

[mlevitsk@...rship ~/UPSTREAM/linux-kernel/src]$chromium-freeworld --version
Chromium 81.0.4044.138 

The dmesg output shows segfaults, which I don't think add much:

[   28.571093] Chrome_ChildIOT[3524]: segfault at 4ae048 ip 000055f1ea13cf6f sp 00007f19dd895d30 error 6 in chromium-freeworld[55f1e461e000+bf68000]
[   28.571107] Chrome_ChildIOT[3523]: segfault at 4ae048 ip 000055f1ea13cf6f sp 00007f19dd895d30 error 6 in chromium-freeworld[55f1e461e000+bf68000]
[   28.571158] Code: Bad RIP value.
[   28.571210] Code: Bad RIP value.
[   28.598335] Chrome_ChildIOT[3560]: segfault at 4ae048 ip 000055f1ea13cf6f sp 00007f19dd895d30 error 6 in chromium-freeworld[55f1e461e000+bf68000]
[   28.598399] Code: Bad RIP value.
[   28.613524] ThreadPoolServi[3577]: segfault at 4a9048 ip 000055f1ea13cf6f sp 00007f19de897d30 error 6 in chromium-freeworld[55f1e461e000+bf68000]
[   28.613591] Code: Bad RIP value.
[   28.618717] Chrome_ChildIOT[3592]: segfault at 4ae048 ip 000055f1ea13cf6f sp 00007f19dd895d30 error 6 in chromium-freeworld[55f1e461e000+bf68000]
[   28.618775] Code: Bad RIP value.
[   28.620581] Chrome_ChildIOT[3597]: segfault at 4ae048 ip 000055f1ea13cf6f sp 00007f19dd895d30 error 6 in chromium-freeworld[55f1e461e000+bf68000]
[   28.620640] Code: Bad RIP value.
[   28.772365] Chrome_ChildIOT[3649]: segfault at 4af048 ip 00005637ee889f6f sp 00007f1498ffdd30 error 6 in chromium-freeworld[5637e8d6b000+bf68000]
[   28.772446] Code: Bad RIP value.
[   28.993623] ThreadPoolServi[3654]: segfault at 4ae048 ip 000055f1ea13cf6f sp 00007f19de897d30 error 6 in chromium-freeworld[55f1e461e000+bf68000]
[   28.993700] Code: Bad RIP value.
[   29.664413] Chrome_ChildIOT[3700]: segfault at 4af048 ip 00005555ff5fbf6f sp 00007fd36c967d30 error 6 in chromium-freeworld[5555f9add000+bf68000]
[   29.664478] Code: Bad RIP value.
[   30.392091] Chrome_ChildIOT[3797]: segfault at 4af048 ip 000056226605df6f sp 00007f93cb4bbd30 error 6 in chromium-freeworld[56226053f000+bf68000]
[   30.392158] Code: Bad RIP value.
[   30.972575] Chrome_ChildIOT[3811]: segfault at 4ad048 ip 00005574aa821f6f sp 00007f78b3b29d30 error 6 in chromium-freeworld[5574a4d03000+bf68000]
[   30.972636] Code: Bad RIP value.
[   31.500368] Chrome_ChildIOT[3819]: segfault at 4ad048 ip 000055bca96aaf6f sp 00007f7680d98d30 error 6 in chromium-freeworld[55bca3b8c000+bf68000]
[   31.500435] Code: Bad RIP value.
[   32.503029] Chrome_ChildIOT[3829]: segfault at 4ad048 ip 0000563005cc3f6f sp 00007fca862b3d30 error 6 in chromium-freeworld[5630001a5000+bf68000]
[   32.503093] Code: Bad RIP value.


The kernel is custom-compiled. My .config is attached.

The git bisect log:

git bisect start
# good: [8be3a53e18e0e1a98f288f6c7f5e9da3adbe9c49] Merge tag 'erofs-for-5.8-rc3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs
git bisect good 8be3a53e18e0e1a98f288f6c7f5e9da3adbe9c49
# bad: [719fdd32921fb7e3208db8832d32ae1c2d68900f] afs: Fix storage of cell names
git bisect bad 719fdd32921fb7e3208db8832d32ae1c2d68900f
# bad: [4a21185cda0fbb860580eeeb4f1a70a9cda332a4] Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
git bisect bad 4a21185cda0fbb860580eeeb4f1a70a9cda332a4
# good: [41b14fb8724d5a4b382a63cb4a1a61880347ccb8] net: Do not clear the sock TX queue in sk_set_socket()
git bisect good 41b14fb8724d5a4b382a63cb4a1a61880347ccb8
# good: [87d93e9a91c76bcb45112d863ef72aab41e01879] Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
git bisect good 87d93e9a91c76bcb45112d863ef72aab41e01879
# good: [e39109f59614e5646e6c53100a3e7e8f63dd1d2b] net: dsa: sja1105: move sja1105_compose_gating_subschedule at the top
git bisect good e39109f59614e5646e6c53100a3e7e8f63dd1d2b
# good: [0e00c05fa72554c86d7c7e0f538ec83bfe277c91] Merge branch 'napi_gro_receive-caller-return-value-cleanups'
git bisect good 0e00c05fa72554c86d7c7e0f538ec83bfe277c91
# bad: [42e9c85f5c7296c4ec02644a2b3debc7120e2bf4] Merge tag 'trace-v5.8-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
git bisect bad 42e9c85f5c7296c4ec02644a2b3debc7120e2bf4
# good: [6784beada631800f2c5afd567e5628c843362cee] tracing: Fix event trigger to accept redundant spaces
git bisect good 6784beada631800f2c5afd567e5628c843362cee
# bad: [52366a107bf0600cf366f5ff3ea1f147b285e41f] Merge tag 'fsnotify_for_v5.8-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
git bisect bad 52366a107bf0600cf366f5ff3ea1f147b285e41f
# bad: [e9c15badbb7b20ccdbadf5da14e0a68fbad51015] fs: Do not check if there is a fsnotify watcher on pseudo inodes
git bisect bad e9c15badbb7b20ccdbadf5da14e0a68fbad51015
# first bad commit: [e9c15badbb7b20ccdbadf5da14e0a68fbad51015] fs: Do not check if there is a fsnotify watcher on pseudo inodes
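
For completeness, this is roughly the build-and-test loop I ran at each bisect step (the exact build and install commands here are illustrative of my usual routine rather than copied from a transcript):

    # at each commit picked by git bisect: rebuild with the same .config,
    # install, reboot, then try to start chromium and mark the result
    make olddefconfig
    make -j"$(nproc)"
    sudo make modules_install install
    # after rebooting into the new kernel:
    chromium-freeworld      # crashes -> git bisect bad, starts fine -> git bisect good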


Reverting this commit fixed the issue both natively and in the VM.
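
For anyone who wants to try the same thing, the revert is simply the following (the hash is taken from the bisect result above; the rebuild steps are the same illustrative ones as in the sketch above):

    git revert e9c15badbb7b20ccdbadf5da14e0a68fbad51015
    make olddefconfig && make -j"$(nproc)"
    sudo make modules_install install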

Best regards,
	Maxim Levitsky



