lists.openwall.net - Open Source and information security mailing list archives
Date:   Wed, 21 Feb 2018 11:57:58 +0000
From:   Will Deacon <will.deacon@....com>
To:     "Wangxuefeng (E)" <wxf.wang@...ilicon.com>
Cc:     "toshi.kani" <toshi.kani@....com>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        cpandya <cpandya@...eaurora.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        "Guohanjun (Hanjun Guo)" <guohanjun@...wei.com>,
        Linuxarm <linuxarm@...wei.com>, linux-mm <linux-mm@...ck.org>,
        akpm <akpm@...ux-foundation.org>,
        "mark.rutland" <mark.rutland@....com>,
        "catalin.marinas" <catalin.marinas@....com>,
        mhocko <mhocko@...e.com>, "hanjun.guo" <hanjun.guo@...aro.org>
Subject: Re: [RFC patch] ioremap: don't set up huge I/O mappings when p4d/pud/pmd is zero

[sorry, trying to deal with top-posting here]

On Wed, Feb 21, 2018 at 07:36:34AM +0000, Wangxuefeng (E) wrote:
>      The old flow of reusing the 4k page table as a 2M page does not follow
> the BBM (break-before-make) sequence for page table reconstruction, so the
> problem is not only the memory leak. If BBM is not followed, speculative
> TLB prefetch can leave stale entries cached in the MMU; a wrong address
> will be returned and a panic will happen.

If I understand Toshi's suggestion correctly, he's saying that the PMD can
be cleared when unmapping the last PTE (like try_to_free_pte_page). In this
case, there's no issue with the TLB because this is exactly BBM -- the PMD
is cleared and TLB invalidation is issued before the PTE table is freed. A
subsequent 2M map request will see an empty PMD and put down a block
mapping.
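To make the ordering concrete, here is a toy user-space model of that sequence (not kernel code; `pmd_t`, `unmap_pte` and the table layout are simplified stand-ins, and the TLB invalidation is only a comment marking where it must happen relative to the free):

```c
#include <stdint.h>
#include <stdlib.h>

#define PTRS_PER_PTE 512

typedef uint64_t pte_t;
typedef struct { pte_t *table; } pmd_t;  /* toy PMD: points at a PTE table */

/*
 * Unmap one PTE. If the table is now empty, follow break-before-make:
 * clear the PMD and invalidate the TLB *before* freeing the PTE table.
 * Returns 1 if the table was freed, 0 otherwise.
 */
static int unmap_pte(pmd_t *pmd, unsigned int idx)
{
	pte_t *table = pmd->table;
	unsigned int i;

	table[idx] = 0;
	for (i = 0; i < PTRS_PER_PTE; i++)
		if (table[i])
			return 0;	/* table still sparsely populated */

	pmd->table = NULL;		/* 1. break: clear the PMD      */
	/* 2. TLB invalidation goes here, before the table is reused  */
	free(table);			/* 3. only now free the table   */
	return 1;
}
```

A later 2M map request that finds `pmd->table == NULL` can then install a block mapping without ever racing a stale walk through the freed table.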

The downside is that freeing becomes more expensive as the last level table
becomes more sparsely populated and you need to ensure you don't have any
concurrent maps going on for the same table when you're unmapping. I also
can't see a neat way to fit this into the current vunmap code. Perhaps we
need an iounmap_page_range.

In the meantime, the code in lib/ioremap.c looks totally broken so I think
we should deselect CONFIG_HAVE_ARCH_HUGE_VMAP on arm64 until it's fixed.
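For reference, arm64 opts into this via a `select` in its Kconfig, so the deselection would amount to dropping that one line, roughly like this (a sketch; the exact context lines depend on the tree):

```diff
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ config ARM64 @@
 config ARM64
 	def_bool y
-	select HAVE_ARCH_HUGE_VMAP
```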

Will
