Message-ID: <20150415212855.GI14842@suse.de>
Date:	Wed, 15 Apr 2015 22:28:55 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Hugh Dickins <hughd@...gle.com>
Cc:	Rik van Riel <riel@...hat.com>, Linux-MM <linux-mm@...ck.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Dave Hansen <dave.hansen@...el.com>,
	Andi Kleen <andi@...stfloor.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/4] mm: Send a single IPI to TLB flush multiple pages
 when unmapping

On Wed, Apr 15, 2015 at 02:16:49PM -0700, Hugh Dickins wrote:
> On Wed, 15 Apr 2015, Rik van Riel wrote:
> > On 04/15/2015 06:42 AM, Mel Gorman wrote:
> > > An IPI is sent to flush remote TLBs when a page is unmapped that was
> > > recently accessed by other CPUs. There are many circumstances where this
> > > happens but the obvious one is kswapd reclaiming pages belonging to a
> > > running process as kswapd and the task are likely running on separate CPUs.
> > > 
> > > On small machines this is not a significant problem, but as machines
> > > get larger, with more cores and more memory, the cost of these IPIs
> > > can be high. This patch uses a structure similar in principle to a
> > > pagevec to collect a list of PFNs and CPUs that require flushing. It
> > > then sends one IPI to flush the list of PFNs. A new TLB flush helper
> > > is required for this and one is added for x86. Other architectures
> > > will need to decide whether batching like this is both safe and worth
> > > the memory overhead. Specifically, the requirement is:
> > > 
> > > 	If a clean page is unmapped and not immediately flushed, the
> > > 	architecture must guarantee that a write to that page from a CPU
> > > 	with a cached TLB entry will trap a page fault.
> > > 
> > > This is essentially what the kernel already depends on but the window is
> > > much larger with this patch applied and is worth highlighting.
> > 
> > This means we already have a (hard to hit?) data corruption
> > issue in the kernel.  We can lose data if we unmap a writable
> > but not dirty pte from a file page, and the task writes before
> > we flush the TLB.
> 
> I don't think so.  IIRC, when the CPU needs to set the dirty bit,
> it doesn't just do that in its TLB entry, but has to fetch and update
> the actual pte entry - and at that point discovers it's no longer
> valid so traps, as Mel says.
> 

This is what I'm expecting, i.e. the clean->dirty transition is written
through to the PTE, which is now unmapped, so the write traps. I'm assuming
there is an architectural guarantee that this happens but could not find an
explicit statement in the docs. I'm hoping Dave or Andi can check with the
relevant people on my behalf.
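For illustration, the pagevec-like batching described above could be
sketched roughly like this. All names, sizes, and the printf stand-in for
the IPI are hypothetical, not the actual structures from the patch:

```c
/* Hypothetical sketch of batched TLB flushing: collect unmapped PFNs and
 * the CPUs that may hold stale TLB entries, then flush them all with one
 * IPI instead of one per page. Illustrative only. */
#include <stdio.h>
#include <stdbool.h>

#define BATCH_SIZE 31   /* pagevec-like capacity, chosen arbitrarily */
#define NR_CPUS    8

struct tlb_flush_batch {
	unsigned long pfns[BATCH_SIZE];
	unsigned int nr;
	bool cpu_needs_flush[NR_CPUS];  /* stand-in for a cpumask */
};

/* Stand-in for sending one IPI to every CPU in the mask; a real
 * implementation would invalidate each PFN on the target CPUs. */
static void send_flush_ipi(struct tlb_flush_batch *b)
{
	for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (b->cpu_needs_flush[cpu])
			printf("IPI -> cpu%u: flush %u pfns\n", cpu, b->nr);
		b->cpu_needs_flush[cpu] = false;
	}
	b->nr = 0;
}

/* Record a PFN that was unmapped while @cpu may still cache its TLB
 * entry; flush eagerly only when the batch fills up. */
static void batch_add(struct tlb_flush_batch *b, unsigned long pfn,
		      unsigned int cpu)
{
	b->pfns[b->nr++] = pfn;
	b->cpu_needs_flush[cpu] = true;
	if (b->nr == BATCH_SIZE)
		send_flush_ipi(b);
}
```

The correctness requirement quoted above is what makes the deferral in
batch_add() tolerable: a write through a stale clean TLB entry must fault
rather than silently dirty the page.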

-- 
Mel Gorman
SUSE Labs
