Message-Id: <1414496662-25202-1-git-send-email-will.deacon@arm.com>
Date:	Tue, 28 Oct 2014 11:44:20 +0000
From:	Will Deacon <will.deacon@....com>
To:	torvalds@...ux-foundation.org, peterz@...radead.org
Cc:	linux-kernel@...r.kernel.org, linux@....linux.org.uk,
	benh@...nel.crashing.org, Will Deacon <will.deacon@....com>
Subject: [RFC PATCH 0/2] Fix a couple of issues with zap_pte_range and MMU gather

Hi all,

This patch series attempts to fix a couple of issues I've noticed with
zap_pte_range and the MMU gather code on arm64.

The first fix resolves a TLB range truncation, which I found by code
inspection (it's on the batch failure path, which doesn't appear to be
exercised regularly on my system).
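
To spell out the truncation: when __tlb_remove_page() fills the batch,
we break out of the pte loop before the "addr += PAGE_SIZE" in the loop
condition has run, so the flush range (whose end address is exclusive)
misses the page we've just batched. Simplified from mm/memory.c, the
first patch does roughly:

	do {
		...
		if (unlikely(!__tlb_remove_page(tlb, page))) {
			force_flush = 1;
			/* account for this page before breaking out,
			 * so the flush range isn't truncated */
			addr += PAGE_SIZE;
			break;
		}
		...
	} while (pte++, addr += PAGE_SIZE, addr != end);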

For the second fix, I'd really appreciate some comments. The problem is
that the architecture's TLB batching implementation may update the start
and end fields of the gather structure so that they cover only a subset
of the initial range set up by tlb_gather_mmu (based on the calls to
tlb_remove_tlb_entry). In the force_flush case, zap_pte_range sets these
fields directly, which can result in a negative range (end before start)
if the architecture has also updated the end address. The patch here
uses min(end, addr) as the end of the first range, then creates a second
range from that address to the end of the region. This results in
potential over-invalidation on arm64, but I can't think of anything
better without updating (at least) the x86 tlb.h implementation.
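
Roughly, the force_flush path in zap_pte_range then looks something like
this (a sketch only; see patch 2 for the real thing):

	if (force_flush) {
		unsigned long old_end = tlb->end;

		/*
		 * First range: flush what we've batched so far, but
		 * don't extend past an end address the architecture
		 * may already have pulled back.
		 */
		tlb->end = min(old_end, addr);
		tlb_flush_mmu_tlbonly(tlb);

		/*
		 * Second range: from here to the end of the region,
		 * hence the potential over-invalidation on arm64.
		 */
		tlb->start = addr;
		tlb->end = end;
	}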

Ideally, we'd let the architecture set start/end during the call to
tlb_flush_mmu_tlbonly (arm64 does this already in tlb_flush).
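
That is, something like the below (completely untested), with the
generic code resetting the range once the architecture has had a chance
to narrow it in tlb_flush:

	static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		/* the architecture can shrink tlb->start/tlb->end here */
		tlb_flush(tlb);

		/* reset the range ready for the next batch */
		tlb->start = TASK_SIZE;
		tlb->end = 0;
	}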

Thoughts?

Will


Will Deacon (2):
  zap_pte_range: update addr when forcing flush after TLB batching
    failure
  zap_pte_range: fix partial TLB flushing in response to a dirty pte

 mm/memory.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

-- 
2.1.1
