Message-ID: <f8864bb0-3d76-20d5-8a25-aab9726354f2@kernel.org>
Date: Fri, 30 Jun 2023 10:43:02 +0200
From: Jiri Slaby <jirislaby@...nel.org>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org, michel@...pinasse.org,
jglisse@...gle.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, mgorman@...hsingularity.net, dave@...olabs.net,
willy@...radead.org, liam.howlett@...cle.com, peterz@...radead.org,
ldufour@...ux.ibm.com, paulmck@...nel.org, mingo@...hat.com,
will@...nel.org, luto@...nel.org, songliubraving@...com,
peterx@...hat.com, david@...hat.com, dhowells@...hat.com,
hughd@...gle.com, bigeasy@...utronix.de, kent.overstreet@...ux.dev,
punit.agrawal@...edance.com, lstoakes@...il.com,
peterjung1337@...il.com, rientjes@...gle.com, chriscli@...gle.com,
axelrasmussen@...gle.com, joelaf@...gle.com, minchan@...gle.com,
rppt@...nel.org, jannh@...gle.com, shakeelb@...gle.com,
tatashin@...gle.com, edumazet@...gle.com, gthelen@...gle.com,
gurua@...gle.com, arjunroy@...gle.com, soheil@...gle.com,
leewalsh@...gle.com, posk@...gle.com,
michalechner92@...glemail.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v4 29/33] x86/mm: try VMA lock-based page fault handling
first
On 30. 06. 23, 10:28, Jiri Slaby wrote:
> > 2348 clone3({flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, child_tid=0x7fcaa5882990, parent_tid=0x7fcaa5882990, exit_signal=0, stack=0x7fcaa5082000, stack_size=0x7ffe00, tls=0x7fcaa58826c0} => {parent_tid=[2351]}, 88) = 2351
> > 2350 <... clone3 resumed> => {parent_tid=[2372]}, 88) = 2372
> > 2351 <... clone3 resumed> => {parent_tid=[2354]}, 88) = 2354
> > 2351 <... clone3 resumed> => {parent_tid=[2357]}, 88) = 2357
> > 2354 <... clone3 resumed> => {parent_tid=[2355]}, 88) = 2355
> > 2355 <... clone3 resumed> => {parent_tid=[2370]}, 88) = 2370
> > 2370 mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0 <unfinished ...>
> > 2370 <... mmap resumed>) = 0x7fca68249000
> > 2372 <... clone3 resumed> => {parent_tid=[2384]}, 88) = 2384
> > 2384 <... clone3 resumed> => {parent_tid=[2388]}, 88) = 2388
> > 2388 <... clone3 resumed> => {parent_tid=[2392]}, 88) = 2392
> > 2392 <... clone3 resumed> => {parent_tid=[2395]}, 88) = 2395
> > 2395 write(2, "runtime: marked free object in s"..., 36 <unfinished ...>
>
> I.e. IIUC, all of these are threads (CLONE_VM), thread 2370 mapped the ANON
> region 0x7fca68249000 - 0x7fca6827ffff, and Go in thread 2395 thinks for
> some reason that 0x7fca6824bec8 inside that region is "bad".
As was pointed out to me, this might just as well be a failure of Go's
inter-thread communication (or the like). It might simply be more exposed
now with VMA-based locks, since we can handle page faults with more
parallelism. There are older, hard-to-reproduce bugs in Go with similar
symptoms (we still see this error occasionally too):
https://github.com/golang/go/issues/15246
Or this 2016 bug is a red herring. Hard to tell...
>> thanks,
--
js
suse labs