Date:	Sat, 10 Jan 2015 20:47:03 +0100
From:	Laszlo Ersek <lersek@...hat.com>
To:	Will Deacon <will.deacon@....com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
CC:	Mark Langsdorf <mlangsdo@...hat.com>,
	Marc Zyngier <Marc.Zyngier@....com>,
	Mark Rutland <Mark.Rutland@....com>,
	Steve Capper <steve.capper@...aro.org>,
	"vishnu.ps@...sung.com" <vishnu.ps@...sung.com>,
	main kernel list <linux-kernel@...r.kernel.org>,
	arm kernel list <linux-arm-kernel@...ts.infradead.org>,
	Kyle McMartin <kmcmarti@...hat.com>
Subject: Re: Linux 3.19-rc3

On 01/10/15 14:37, Will Deacon wrote:

> My hunch is that when a task exits and sets fullmm, end is zero and so the
> old need_flush cases no longer run.

(Disclaimer: I'm completely unfamiliar with this code.)

If you have the following call chain in mind:

  exit_mmap()
    tlb_gather_mmu()

then I think that (fullmm != 0) precludes (end == 0).

I grepped the tree for "fullmm", and only tlb_gather_mmu() seems to set
it. There are several instances of that function, but each sets fullmm to:

	/* Is it from 0 to ~0? */
	tlb->fullmm     = !(start | (end+1));

So, a nonzero fullmm seems to imply (end == ~0UL).

(And sure enough, exit_mmap() passes it (unsigned long)-1 as "end".)

> With my original patch, we skipped the
> TLB invalidation (since the task is exiting and we will invalidate the TLB
> for that ASID before the ASID is reallocated) but still did the freeing.
> With the current code, we skip the freeing too, which causes us to leak
> pages on exit.

Yes, the new check prevents

  tlb_flush_mmu()
    tlb_flush_mmu_free()  <--- this
      free_pages_and_swap_cache()

> I guess we can either check need_flush as well as end, or we could set both
> start == end == some_nonzero_value in __tlb_adjust_range when need_flush is
> set. Unfortunately, I'm away from my h/w right now, so it's not easy to test
> this.

If you have a patch that applies and builds, I'll be glad to test it. I've
got a few hours now, and I'll have some tomorrow as well. (On Monday I
guess you'll have access to your hardware again.)

Thanks!
Laszlo
