Message-ID: <20170830164724.m6bbogd46ix4qp4o@docker>
Date:   Wed, 30 Aug 2017 09:47:24 -0700
From:   Tycho Andersen <tycho@...ker.com>
To:     Juerg Haefliger <juerg.haefliger@...onical.com>
Cc:     Mark Rutland <mark.rutland@....com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, kernel-hardening@...ts.openwall.com,
        Marco Benatto <marco.antonio.780@...il.com>
Subject: Re: [kernel-hardening] [PATCH v5 04/10] arm64: Add __flush_tlb_one()

On Wed, Aug 30, 2017 at 07:31:25AM +0200, Juerg Haefliger wrote:
> 
> 
> On 08/23/2017 07:04 PM, Mark Rutland wrote:
> > On Wed, Aug 23, 2017 at 10:58:42AM -0600, Tycho Andersen wrote:
> >> Hi Mark,
> >>
> >> On Mon, Aug 14, 2017 at 05:50:47PM +0100, Mark Rutland wrote:
> >>> That said, is there any reason not to use flush_tlb_kernel_range()
> >>> directly?
> >>
> >> So it turns out that there is a difference between __flush_tlb_one() and
> >> flush_tlb_kernel_range() on x86: flush_tlb_kernel_range() flushes all the TLBs
> >> via on_each_cpu(), whereas __flush_tlb_one() only flushes the local TLB (which
> >> I think is enough here).
> > 
> > That sounds suspicious; I don't think that __flush_tlb_one() is
> > sufficient.
> > 
> > If you only do local TLB maintenance, then the page is left accessible
> > to other CPUs via the (stale) kernel mappings. i.e. the page isn't
> > exclusively mapped by userspace.
> 
> We flush all CPUs to get rid of stale entries when a new page is
> allocated to userspace that was previously allocated to the kernel.
> Is that the scenario you were thinking of?

I think there are two cases: the one you describe above, where the
pages are first allocated to userspace, and a second one, where the
pages are temporarily mapped back into the kernel, e.g. for DMA. In
the first case, I think we're doing the right thing (which is why my
test passed: it exercised exactly this case).
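
Both cases hinge on what the two primitives actually do on x86.
Roughly (a simplified sketch from memory; the real code lives in
arch/x86/mm/tlb.c and arch/x86/include/asm/tlbflush.h and varies
between kernel versions):

  /* Local CPU only: invalidate a single kernel address. */
  static inline void __flush_tlb_one(unsigned long addr)
  {
          /* invlpg only affects the TLB of the executing CPU. */
          asm volatile("invlpg (%0)" :: "r" (addr) : "memory");
  }

  /* All CPUs: IPIs every CPU so each invalidates the range. */
  void flush_tlb_kernel_range(unsigned long start, unsigned long end)
  {
          struct flush_tlb_info info = { .start = start, .end = end };

          /* do_kernel_range_flush() does __flush_tlb_one() per page. */
          on_each_cpu(do_kernel_range_flush, &info, 1);
  }

So __flush_tlb_one() is only enough when we don't care what other
CPUs may still have cached for the mapping.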

In the second case, when the pages are unmapped (i.e. the kernel is
done doing DMA), do we need to flush the other CPUs' TLBs? I think
the current code is not quite correct: if multiple tasks (CPUs) map
the pages, only the TLB of the last one is flushed when the mapping
is cleared, since we only flush when ->mapcount drops to zero. That
leaves stale entries in the other CPUs' TLBs. It's not clear to me
what to do about this case.
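
To make the race concrete, here is roughly the shape of the unmap
path (a paraphrased sketch, not the literal patch code; xpfo_kunmap()
and set_kpte() follow the naming in the series):

  void xpfo_kunmap(void *kaddr, struct page *page)
  {
          /* Only the *last* unmapper tears the mapping down... */
          if (atomic_dec_return(&xpfo->mapcount) == 0) {
                  set_kpte(kaddr, page, __pgprot(0));
                  /* ...and it only flushes its own TLB. */
                  __flush_tlb_one((unsigned long)kaddr);
          }
  }

  /*
   * CPU A: kmap    -> mapcount 0 -> 1, A caches the translation
   * CPU B: kmap    -> mapcount 1 -> 2, B caches the translation
   * CPU A: kunmap  -> mapcount 2 -> 1, no flush at all
   * CPU B: kunmap  -> mapcount 1 -> 0, unmaps, flushes B only
   *
   * CPU A is left with a stale TLB entry for a page that is now
   * supposed to be exclusively owned by userspace.
   */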

Thoughts?

Tycho

> ...Juerg
> 
> 
> > Thanks,
> > Mark.
> > 
> 


