Message-ID: <87blu18gkn.fsf@dja-thinkpad.axtens.net>
Date: Mon, 28 Oct 2019 18:39:04 +1100
From: Daniel Axtens <dja@...ens.net>
To: Andrey Ryabinin <aryabinin@...tuozzo.com>,
Mark Rutland <mark.rutland@....com>
Cc: kasan-dev@...glegroups.com, linux-mm@...ck.org, x86@...nel.org,
glider@...gle.com, luto@...nel.org, linux-kernel@...r.kernel.org,
dvyukov@...gle.com, christophe.leroy@....fr,
linuxppc-dev@...ts.ozlabs.org, gor@...ux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
> Or let me put it this way. Let's assume that CPU0 accesses shadow and CPU1 did the memset() and installed pte.
> CPU0 may not observe memset() only if it dereferences completely random vmalloc addresses
> or it performs out-of-bounds access which crosses KASAN_SHADOW_SCALE*PAGE_SIZE boundary, i.e. access to shadow crosses page boundary.
> In both cases it will be hard to avoid crashes. OOB crossing the page boundary in vmalloc pretty much guarantees crash because of guard page,
> and dereferencing a random address isn't going to last for long.
>
> If CPU0 obtained pointer via vmalloc() call and it's doing out-of-bounds (within boundaries of the page) or use-after-free,
> then the spin_[un]lock(&init_mm.page_table_lock) should allow CPU0 to see the memset done by CPU1 without any additional barrier.
I have puzzled through the barrier stuff. Here's what I
have. Apologies for the length, and for any mistakes - I'm pretty new
to deep kernel memory model stuff!
One thing that I don't think we've considered so far is _un_poisoning:
| ret = apply_to_page_range(&init_mm, shadow_start,
| shadow_end - shadow_start,
| kasan_populate_vmalloc_pte, NULL);
| if (ret)
| return ret;
|
| kasan_unpoison_shadow(area->addr, requested_size);
That unpoisoning is going to write to the shadow via its virtual
address, loading translations into the TLB. So we cannot assume that
another CPU is doing the page table walk and loading the TLB entry for
the first time. We need to make sure that correctness does not depend
on that.
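For concreteness, the two halves boil down to roughly the following. This
is paraphrased from the patch and from mm/kasan/common.c rather than
quoted, so treat it as a sketch (the exact poison value and error handling
may differ):

	/* Sketch of the populate callback (paraphrased, not verbatim): */
	static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
					      void *unused)
	{
		unsigned long page;
		pte_t pte;

		if (likely(!pte_none(*ptep)))
			return 0;

		page = __get_free_page(GFP_KERNEL);
		if (!page)
			return -ENOMEM;

		/* Poison the new shadow page before anyone can see it... */
		memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
		pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

		/* ...then install the pte under init_mm.page_table_lock. */
		spin_lock(&init_mm.page_table_lock);
		if (likely(pte_none(*ptep))) {
			set_pte_at(&init_mm, addr, ptep, pte);
			page = 0;
		}
		spin_unlock(&init_mm.page_table_lock);
		if (page)
			free_page(page);
		return 0;
	}

	/* Unpoisoning is then just plain stores through the shadow mapping: */
	void kasan_unpoison_shadow(const void *address, size_t size)
	{
		kasan_poison_shadow(address, size, 0);	/* memset()s shadow to 0 */

		if (size & KASAN_SHADOW_MASK) {
			u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size);

			*shadow = size & KASAN_SHADOW_MASK;	/* partial granule */
		}
	}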
We have 2x2 cases to consider:
{Access via fixed address, access via unknown address}
x
{Access within object - unpoisoned, access just beyond object but
within shadow - poisoned}
I think we can first drop all consideration of access via fixed
addresses. Such accesses will have to be synchronised via some
external mechanism, such as a flag, with appropriate
locking/barriers. Those barriers will order the rest of the memory
accesses within vmalloc(), and I considered speculative faults in my
other email.
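(By an "external mechanism" I mean something like the usual
release/acquire handoff - a minimal sketch, with entirely made-up names,
not anything from the patch:)

	static void *shared_buf;	/* at an address the consumer knows about */
	static bool shared_buf_ready;

	static void producer(void)
	{
		shared_buf = vmalloc(100);	/* populates + unpoisons shadow */
		if (!shared_buf)
			return;

		/* Order all of vmalloc()'s stores (including the shadow
		 * writes) before the flag becomes visible. */
		smp_store_release(&shared_buf_ready, true);
	}

	static void consumer(void)
	{
		if (!smp_load_acquire(&shared_buf_ready))
			return;

		/* The acquire pairs with the release above, so the
		 * unpoisoned shadow is visible before this access. */
		((char *)shared_buf)[99] = 1;
	}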
That leaves just memory accesses via an unknown address. I'm imagining
the following two cases:
[Access of Unpoisoned Shadow - valid access]
CPU#0                            CPU#1
-----                            -----
WRITE_ONCE(p, vmalloc(100))      while (!(x = READ_ONCE(p))) ;
                                 x[99] = 1;
[Access of Poisoned Shadow - invalid read past the end]
CPU#0                            CPU#1
-----                            -----
WRITE_ONCE(p, vmalloc(100))      while (!(x = READ_ONCE(p))) ;
                                 x[100] = 1;
---------- Access to the unpoisoned region of shadow ----------
Expanding the CPU#0 side, let `a` be area->addr:
// kasan_populate_vmalloc_pte
...
STORE page+PAGE_SIZE-1, poison
// Mark's proposed smp_wmb() goes here
ACQUIRE page_table_lock
STORE ptep, pte
RELEASE page_table_lock
// return to kasan_populate_vmalloc
// call kasan_unpoison_shadow(a, 100)
STORE shadow(a), unpoison
...
STORE shadow(a+99), unpoison
// rest of vmalloc()
STORE p, a
CPU#1 looks like (removing the loop bit):
x = LOAD p
<data dependency>
shadow_x = LOAD *shadow(x+99)
// if shadow_x poisoned, report
STORE (x+99), 1
Putting the last few operations side-by-side:
CPU#0                            CPU#1
STORE shadow(a+99), unpoison     x = LOAD p
                                 <data dependency>
STORE p, a                       shadow_x = LOAD shadow(x+99)
While there is a data dependency between x and shadow_x, there's no
barrier in kasan_populate_vmalloc() that forces the _un_poisoning to
be correctly ordered.
My worry would be that CPU#0 might commit the store to p before it
commits the store to the shadow. Then, even with the data dependency,
CPU#1 could observe the store to shadow(a+99) after it has executed the load
of shadow(x+99). This would lead CPU#1 to observe a false-positive
poison.
We need a write barrier, and Mark's proposed smp_wmb() is too early to
help here.
Now, there is an smp_wmb() in clear_vm_uninitialized_flag(), which is
called by __vmalloc_node_range() between kasan_populate_vmalloc() and the
end of the function. That makes things look like this:
CPU#0                            CPU#1
STORE shadow(a+99), unpoison     x = LOAD p
smp_wmb()                        <data dependency>
STORE p, a                       shadow_x = LOAD shadow(x+99)
memory-barriers.txt says that a data dependency and a write barrier
are sufficient to order this correctly.
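(For reference, the barrier in question lives here - paraphrasing
mm/vmalloc.c from memory, so double-check the exact wording:)

	static void clear_vm_uninitialized_flag(struct vm_struct *vm)
	{
		/*
		 * Before removing VM_UNINITIALIZED, we should make sure
		 * that vm has proper values.
		 * Pairs with smp_rmb() in show_numa_info().
		 */
		smp_wmb();
		vm->flags &= ~VM_UNINITIALIZED;
	}

So we would be relying on a barrier that wasn't put there with the shadow
in mind, but it does sit between the unpoisoning stores and the eventual
publication of the pointer.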
Outside of __vmalloc_node_range(), the other times we call
kasan_populate_vmalloc() are:
- get_vm_area() and friends. get_vm_area() does not map any pages
into the area returned. So the caller will have to do that, which
will require taking the page table lock. A release should pair with
a data dependency, making the unpoisoning visible.
- The per_cpu allocator: again the caller has to map pages into the
area returned - pcpu_map_pages() calls map_kernel_range_noflush().
So, where the address is not known in advance, the unpoisoning does
need a barrier. However, we do hit one anyway before we return. We
should document that we're relying on the barrier in
clear_vm_uninitialized_flag() or barriers from other callers.
---------- Access to the poisoned region of shadow ----------
Now, what about the case that we do an overread that's still in the
shadow page?
CPU#0                            CPU#1
STORE page+100, poison
...
# Mark's proposed smp_wmb()
ACQUIRE page_table_lock
STORE ptep, pte
RELEASE page_table_lock
...
STORE shadow(a+99), unpoison     x = LOAD p
smp_wmb()                        <data dependency>
STORE p, a                       shadow_x = LOAD shadow(x+100)
Here, because of both the release and the smp_wmb(), the store of the
poison is ordered before the store to p, so CPU#1's dependent load of
shadow(x+100) will observe it. Because we're not expecting anything funky with
fixed addresses or other CPUs doing page-table walks, I still think we
don't need an extra barrier where Mark has proposed.
-------------------- Conclusion --------------------
I will send a v10 that:
- drops the smp_wmb() for poisoning
- adds a comment that explains that we're dependent on later barriers
for _un_poisoning (something along the lines of the sketch below)
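Something like this, to go near the kasan_unpoison_shadow() call in
kasan_populate_vmalloc() (exact wording to be sorted out in v10):

	/*
	 * The shadow for this allocation is unpoisoned above with plain
	 * stores and no barrier afterwards.  That is deliberate: any
	 * caller that hands the address to another CPU has to publish it
	 * with a suitable barrier anyway - e.g. the smp_wmb() in
	 * clear_vm_uninitialized_flag() for vmalloc(), or the page table
	 * updates done when get_vm_area()/percpu users map pages into the
	 * area - and the consumer's data dependency on the pointer orders
	 * its shadow loads after that.
	 */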
I'd really like to get this into the coming merge window, if at all
possible.
Regards,
Daniel