Message-ID: <20170525004104.GA21336@js1304-desktop>
Date:   Thu, 25 May 2017 09:41:07 +0900
From:   Joonsoo Kim <js1304@...il.com>
To:     Dmitry Vyukov <dvyukov@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Alexander Potapenko <glider@...gle.com>,
        kasan-dev <kasan-dev@...glegroups.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H . Peter Anvin" <hpa@...or.com>, kernel-team@....com
Subject: Re: [PATCH v1 00/11] mm/kasan: support per-page shadow memory to
 reduce memory consumption

On Wed, May 24, 2017 at 07:19:50PM +0200, Dmitry Vyukov wrote:
> On Wed, May 24, 2017 at 9:45 AM, Joonsoo Kim <js1304@...il.com> wrote:
> >> > What does make your current patch work then?
> >> > Say we map a new shadow page, update the page shadow to say that there
> >> > is mapped shadow. Then another CPU loads the page shadow and then
> >> > loads from the newly mapped shadow. If we don't flush TLB, what makes
> >> > the second CPU see the newly mapped shadow?
> >>
> >> /\/\/\/\/\/\
> >>
> >> Joonsoo, please answer this question above.
> >
> > Hello, I've answered this in another e-mail, but that answer may not
> > have been sufficient. Let me try again.
> >
> > If a page isn't used for the kernel stack, slab, or global variables
> > (i.e., kernel memory), the black shadow is mapped for that page. We
> > map a new shadow page when the page will be used for kernel memory.
> > We would need to flush the TLB on all CPUs when mapping a new shadow,
> > but that isn't possible in some cases, so this patch flushes only the
> > local CPU's TLB. Another CPU could have a stale TLB entry that still
> > points to the black shadow for this page. If that CPU tries to check
> > the validity of an object on this page, the result would be "invalid",
> > since the stale TLB entry points to the black shadow, whose shadow
> > value is non-zero. We need some magic here: at this moment we cannot
> > tell whether "invalid" is the correct result, because we didn't do a
> > full TLB flush. So fixup processing starts. It is implemented in
> > check_memory_region_slow(): flush the local TLB and re-check the
> > shadow value. After the local flush we use a fresh TLB entry, so the
> > validity check passes as usual.
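
To put the fixup processing concretely, a simplified sketch of the slow
path follows. check_memory_region_slow() and kasan_report() are the real
names; shadow_is_valid() is an illustrative helper, and the actual code
in the patch differs in detail.

/*
 * Fixup path: the fast check saw a bad shadow value, but that may
 * just be a stale TLB entry still pointing at the black shadow page.
 */
static bool check_memory_region_slow(unsigned long addr, size_t size,
				     bool write, unsigned long ret_ip)
{
	/* Drop any stale translation on this CPU only. */
	local_flush_tlb();

	/* Re-check the shadow through the now-fresh mapping. */
	if (shadow_is_valid(addr, size))	/* illustrative helper */
		return true;

	/* Still bad after the flush: report a real error. */
	kasan_report(addr, size, write, ret_ip);
	return false;
}
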
> >
> >> I am trying to understand if there is any chance to make mapping a
> >> single page for all non-interesting shadow ranges work. That would be
> >
> > This is what this patchset does: it maps a single (zero/black) shadow
> > page for all non-interesting (non-kernel memory) shadow ranges.
> > There is only a single instance of the zero/black shadow page. In v1,
> > I used only the black shadow page, so I failed to get enough
> > performance. In v2, mentioned in another thread, I use the zero
> > shadow for some regions; I expect the performance problem is gone.
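
Put differently, at init time something like the following happens
(purely illustrative: both helper names below are made up, and the real
patch populates the shadow page tables directly):

/*
 * Back an entire non-interesting shadow range with one shared page,
 * so a single zero/black page covers the shadow of all non-kernel
 * memory in that range.
 */
static void kasan_populate_shared_shadow(unsigned long shadow_start,
					 unsigned long shadow_end,
					 struct page *shared_page)
{
	unsigned long addr;

	for (addr = shadow_start; addr < shadow_end; addr += PAGE_SIZE)
		/* Every PTE points at the same zero/black page. */
		map_shadow_page_ro(addr, shared_page);	/* made-up helper */
}
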
> 
> 
> I can't say I understand everything here, but after staring at the
> patch I don't understand why we need pshadow at all now. Especially
> with this commit
> https://github.com/JoonsooKim/linux/commit/be36ee65f185e3c4026fe93b633056ea811120fb.
> It seems that the current shadow is enough.

pshadow exists for non-kernel memory such as page cache or anonymous
pages. This patch doesn't map a new (per-byte) shadow for those pages,
to reduce memory consumption. However, we still need to know whether
those pages are allocated in order to check the validity of accesses
to them. We cannot use the zero/black shadow page for that, since a
single mapped zero/black shadow page represents the shadow values of
eight real pages. Instead, we use a per-page shadow and mark/unmark it
when allocation and free happen. With it, we know the state of each
page and can determine the validity of accesses to it.
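
Roughly like the following sketch. kasan_alloc_pages() and
kasan_free_pages() are the existing page allocator hooks; the
mark_pshadow() helper and the pshadow encoding are made up for
illustration.

/* Assumed encoding: 0 == allocated/accessible. */
#define KASAN_PSHADOW_FREE	0xFF	/* made-up value for "freed" */

/* Page allocator hook: mark 2^order pages as allocated. */
void kasan_alloc_pages(struct page *page, unsigned int order)
{
	mark_pshadow(page, 1UL << order, 0);
}

/* Page allocator hook: mark 2^order pages as freed. */
void kasan_free_pages(struct page *page, unsigned int order)
{
	mark_pshadow(page, 1UL << order, KASAN_PSHADOW_FREE);
}
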

> If we see bad shadow when the actual shadow value is good, we fall
> onto slow path, flush tlb, reload shadow, see that it is good and
> return. Pshadow is not needed in this case.

For kernel memory, if we see a bad shadow due to a *stale TLB*, we
fall onto the slow path (check_memory_region_slow()), flush the TLB,
and reload the shadow.

For non-kernel memory, if we see a bad shadow, we fall onto the
pshadow_val() check, which shows the actual state of the page.
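
Putting the two cases together, the check looks roughly like this.
pshadow_val(), check_memory_region_slow() and kasan_report() come from
the patch/KASAN; shadow_is_valid(), is_kernel_memory() and the pshadow
encoding are made up for illustration.

static bool check_memory_region(unsigned long addr, size_t size,
				bool write, unsigned long ret_ip)
{
	/* Fast path: the (possibly stale) shadow says the access is fine. */
	if (likely(shadow_is_valid(addr, size)))
		return true;

	/* Kernel memory: a bad value may just be a stale TLB entry. */
	if (is_kernel_memory(addr))
		return check_memory_region_slow(addr, size, write, ret_ip);

	/* Non-kernel memory: consult the per-page shadow instead. */
	if (pshadow_val(addr, size) == 0)	/* 0 == allocated, assumed */
		return true;

	kasan_report(addr, size, write, ret_ip);
	return false;
}
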

> If we see good shadow when the actual shadow value is bad, we return
> immediately and get false negative. Pshadow is not involved as well.
> What am I missing?

In this patchset, there is no case where we see a good shadow while
the actual (p)shadow value is bad. That case must never happen, since
it would mean missing an actual error.

Please let me know if this explanation is insufficient. I will try to
explain more. :)

Thanks.
