Message-ID: <CAJuCfpFd1-hH=gmzyosZiebp8X=F9h-jTt1dXiMpZovsL8O=rQ@mail.gmail.com>
Date: Wed, 13 Nov 2024 11:05:36 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: "Liam R. Howlett" <Liam.Howlett@...cle.com>, Suren Baghdasaryan <surenb@...gle.com>,
Matthew Wilcox <willy@...radead.org>, Vlastimil Babka <vbabka@...e.cz>, akpm@...ux-foundation.org,
lorenzo.stoakes@...cle.com, mhocko@...e.com, hannes@...xchg.org,
mjguzik@...il.com, oliver.sang@...el.com, mgorman@...hsingularity.net,
david@...hat.com, peterx@...hat.com, oleg@...hat.com, dave@...olabs.net,
paulmck@...nel.org, brauner@...nel.org, dhowells@...hat.com, hdanton@...a.com,
hughd@...gle.com, minchan@...gle.com, jannh@...gle.com,
shakeel.butt@...ux.dev, souravpanda@...gle.com, pasha.tatashin@...een.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v2 4/5] mm: make vma cache SLAB_TYPESAFE_BY_RCU
On Wed, Nov 13, 2024 at 7:47 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Wed, Nov 13, 2024 at 7:29 AM Liam R. Howlett <Liam.Howlett@...cle.com> wrote:
> >
> > * Suren Baghdasaryan <surenb@...gle.com> [241113 10:25]:
> > > On Wed, Nov 13, 2024 at 7:23 AM 'Liam R. Howlett' via kernel-team
> > > <kernel-team@...roid.com> wrote:
> > > >
> > > > * Matthew Wilcox <willy@...radead.org> [241113 08:57]:
> > > > > On Wed, Nov 13, 2024 at 07:38:02AM -0500, Liam R. Howlett wrote:
> > > > > > > Hi, I was wondering if we actually need the detached flag. Couldn't
> > > > > > > "detached" simply mean vma->vm_mm == NULL and we save 4 bytes? Do we ever
> > > > > > > need a vma that's detached but still has a mm pointer? I'd hope the places
> > > > > > > that set detached to false have the mm pointer around so it's not inconvenient.
> > > > > >
> > > > > > I think the gate vmas ruin this plan.
> > > > >
> > > > > But the gate VMAs aren't to be found in the VMA tree. Used to be that
> > > > > was because the VMA tree was the RB tree and so VMAs could
> > > > > only be in one tree at a time. We could change that now!
> > > >
> > > > \o/
> > > >
> > > > >
> > > > > Anyway, we could use (void *)1 instead of NULL to indicate a "detached"
> > > > > VMA if we need to distinguish between a detached VMA and a gate VMA.
> > > >
> > > > I was thinking a pointer to itself, vma->vm_mm = vma, then a check for
> > > > this instead of NULL like we do today.
> > >
> > > The motivation for having a separate detached flag was that vma->vm_mm
> > > is used when read/write locking the vma, so it has to stay valid even
> > > when vma gets detached. Maybe we can be more cautious in
> > > vma_start_read()/vma_start_write() about it but I don't recall if
> > > those were the only places where that was an issue.
> >
> > We have the mm from the callers though, so it could be passed in?
>
> Let me try and see if something else blows up. When I was implementing
> per-vma locks I thought about using vma->vm_mm to indicate detached
> state but there were some issues that caused me to reconsider.
Yeah, a quick change reveals the first mine explosion:
[ 2.838900] BUG: kernel NULL pointer dereference, address: 0000000000000480
[ 2.840671] #PF: supervisor read access in kernel mode
[ 2.841958] #PF: error_code(0x0000) - not-present page
[ 2.843248] PGD 800000010835a067 P4D 800000010835a067 PUD 10835b067 PMD 0
[ 2.844920] Oops: Oops: 0000 [#1] PREEMPT SMP PTI
[    2.846078] CPU: 2 UID: 0 PID: 1 Comm: init Not tainted 6.12.0-rc6-00258-ga587fcd91b06-dirty #111
[    2.848277] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 2.850673] RIP: 0010:unmap_vmas+0x84/0x190
[    2.851717] Code: 00 00 00 00 48 c7 44 24 48 00 00 00 00 48 c7 44 24 18 00 00 00 00 48 89 44 24 28 4c 89 44 24 38 e8 b1 c0 d1 00 48 8b 44 24 28 <48> 83 b8 80 04 00 00 00 0f 85 dd 00 00 00 45 0f b6 ed 49 83 ec 01
[ 2.856287] RSP: 0000:ffffa298c0017a18 EFLAGS: 00010246
[ 2.857599] RAX: 0000000000000000 RBX: 00007f48ccbb4000 RCX: 00007f48ccbb4000
[ 2.859382] RDX: ffff8918c26401e0 RSI: ffffa298c0017b98 RDI: ffffa298c0017ab0
[ 2.861156] RBP: 00007f48ccdb6000 R08: 00007f48ccdb6000 R09: 0000000000000001
[ 2.862941] R10: 0000000000000040 R11: ffff8918c2637108 R12: 0000000000000001
[ 2.864719] R13: 0000000000000001 R14: ffff8918c26401e0 R15: ffffa298c0017b98
[    2.866472] FS:  0000000000000000(0000) GS:ffff8927bf080000(0000) knlGS:0000000000000000
[ 2.868439] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2.869877] CR2: 0000000000000480 CR3: 000000010263e000 CR4: 0000000000750ef0
[ 2.871661] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 2.873419] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 2.875185] PKRU: 55555554
[ 2.875871] Call Trace:
[ 2.876503] <TASK>
[ 2.877047] ? __die+0x1e/0x60
[ 2.877824] ? page_fault_oops+0x17b/0x4a0
[ 2.878857] ? exc_page_fault+0x6b/0x150
[ 2.879841] ? asm_exc_page_fault+0x26/0x30
[ 2.880886] ? unmap_vmas+0x84/0x190
[ 2.881783] ? unmap_vmas+0x7f/0x190
[ 2.882680] vms_clear_ptes+0x106/0x160
[ 2.883621] vms_complete_munmap_vmas+0x53/0x170
[ 2.884762] do_vmi_align_munmap+0x15e/0x1d0
[ 2.885838] do_vmi_munmap+0xcb/0x160
[ 2.886760] __vm_munmap+0xa4/0x150
[ 2.887637] elf_load+0x1c4/0x250
[ 2.888473] load_elf_binary+0xabb/0x1680
[ 2.889476] ? __kernel_read+0x111/0x320
[ 2.890458] ? load_misc_binary+0x1bc/0x2c0
[ 2.891510] bprm_execve+0x23e/0x5e0
[ 2.892408] kernel_execve+0xf3/0x140
[ 2.893331] ? __pfx_kernel_init+0x10/0x10
[ 2.894356] kernel_init+0xe5/0x1c0
[ 2.895241] ret_from_fork+0x2c/0x50
[ 2.896141] ? __pfx_kernel_init+0x10/0x10
[ 2.897164] ret_from_fork_asm+0x1a/0x30
[ 2.898148] </TASK>
Looks like we are detaching VMAs and then unmapping them, where
vms_clear_ptes() uses vms->vma->vm_mm. I'll try to clean up this and
other paths and will see how many changes are required to make this
work.
>
> >
> > >
> > > >
> > > > Either way, we should make it a function so it's easier to reuse for
> > > > whatever we need in the future, wdyt?
> > > >