Message-ID: <a3aebf74-be74-bf15-f016-da9734ba435a@shipmail.org>
Date:   Thu, 3 Oct 2019 20:03:12 +0200
From:   Thomas Hellström (VMware) 
        <thomas_os@...pmail.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Thomas Hellstrom <thellstrom@...are.com>
Cc:     Linux-MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Will Deacon <will.deacon@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Rik van Riel <riel@...riel.com>,
        Minchan Kim <minchan@...nel.org>,
        Michal Hocko <mhocko@...e.com>,
        Huang Ying <ying.huang@...el.com>,
        Jérôme Glisse <jglisse@...hat.com>,
        "Kirill A . Shutemov" <kirill@...temov.name>
Subject: Re: [PATCH v3 3/7] mm: Add write-protect and clean utilities for
 address space ranges

On 10/3/19 6:55 PM, Linus Torvalds wrote:
>
>   d) Fix the pte walker to do the right thing, then just use separate
> pte walkers in your code
>
> The fix would be those two conceptual changes:
>
>   1) don't split if the walker asks for a pmd_entry (the walker itself
> can then decide to split, of course, but right now no walkers want it
> since there are no pmd _and_ pte walkers, because people who want that
> do the pte walk themselves)
>
>   2) get the proper page table lock if you do walk the pte, since
> otherwise it's racy
>
> Then there won't be any code duplication, because all the duplication
> you now have at the pmd level is literally just workarounds for the
> fact that our current walker has this bug.

I had actually already started on d) when Kirill asked me to unify the 
pud_entry() and pmd_entry() callbacks.
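
For my own reference, I read 1) and 2) as something like the below in 
mm/pagewalk.c (untested sketch for illustration only, not the attached 
patch; the exact conditions for skipping the pte level are my guesses):

static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
                          struct mm_walk *walk)
{
        const struct mm_walk_ops *ops = walk->ops;
        spinlock_t *ptl;
        pte_t *pte;
        int err = 0;

        /* 2) take the pte page table lock instead of a bare pte_offset_map() */
        pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
        for (;;) {
                err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
                if (err)
                        break;
                addr += PAGE_SIZE;
                if (addr == end)
                        break;
                pte++;
        }
        pte_unmap_unlock(pte, ptl);

        return err;
}

and in the walk_pmd_range() loop body:

                /* 1) only split on behalf of walkers without a pmd_entry */
                if (ops->pmd_entry) {
                        err = ops->pmd_entry(pmd, addr, next, walk);
                        if (err)
                                break;
                }

                if (!ops->pte_entry)
                        continue;

                if (!ops->pmd_entry) {
                        split_huge_pmd(walk->vma, pmd, addr);
                        if (pmd_trans_unstable(pmd))
                                goto again;
                } else if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
                        /* pmd_entry chose not to split; skip the pte level */
                        continue;
                }

                err = walk_pte_range(pmd, addr, next, walk);
                if (err)
                        break;

With that, a user who wants both levels installs both pmd_entry and 
pte_entry and does any splitting itself from the pmd_entry callback.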

>
> That "fix the pte walker" would be one preliminary patch that would
> look something like the attached TOTALLY UNTESTED garbage.
>
> I call it "garbage" because I really hope people take it just as what
> it is: "something like this". It compiles for me, and I did try to
> think it through, but I might have missed some big piece of the
> picture when writing that patch.
>
> And yes, this is a much bigger conceptual change for the VM layer, but
> I really think our pagewalk code is actively buggy right now, and is
> forcing users to do bad things because they work around the existing
> limitations.
>
> Hmm? Could some of the core mm people look over that patch?
>
> And yes, I was tempted to move the proper pmd locking into the walker
> too, and do
>
>          ptl = pmd_trans_huge_lock(pmd, vma);
>          if (ptl) {
>                  err = ops->pmd_entry(pmd, addr, next, walk);
>                  spin_unlock(ptl);
>                  ...
>
> but while I think that's the correct thing to do in the long run, that
> would have to be done together with changing all the existing
> pmd_entry users. It would make the pmd_entry _solely_ handle the
> hugepage case, and then you'd have to remove the locking in the
> pmd_entry, and have to make the pte walking be a walker entry. But
> that would _really_ clean things up, and would make things like
> smaps_pte_range() much easier to read, and much more obvious (it would
> be split into a smaps_pmd_range and smaps_pte_range, and the callbacks
> wouldn't need to know about the complex locking).
>
> So I think this is the right direction to move into, but I do want
> people to think about this, and think about that next phase of doing
> the pmd_trans_huge_lock too.
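
Under that scheme a combined pmd + pte walker could look something like 
this (hypothetical example, names invented for illustration; pmd_entry 
only ever sees the huge case and is called with the ptl from 
pmd_trans_huge_lock() held, pte_entry with the pte lock held):

struct count_walk_private {
        unsigned long pages;
};

static int count_pmd_entry(pmd_t *pmd, unsigned long addr,
                           unsigned long next, struct mm_walk *walk)
{
        struct count_walk_private *cwp = walk->private;

        /* Huge entries only; the walker already holds the huge-pmd ptl. */
        if (pmd_present(*pmd))
                cwp->pages += HPAGE_PMD_NR;

        return 0;
}

static int count_pte_entry(pte_t *pte, unsigned long addr,
                           unsigned long next, struct mm_walk *walk)
{
        struct count_walk_private *cwp = walk->private;

        /* Called with the pte page table lock held by the walker. */
        if (pte_present(*pte))
                cwp->pages++;

        return 0;
}

static const struct mm_walk_ops count_walk_ops = {
        .pmd_entry = count_pmd_entry,
        .pte_entry = count_pte_entry,
};

and then a plain walk_page_range(mm, start, end, &count_walk_ops, &cwp) 
under mmap_sem, with neither callback knowing anything about the locking 
or about splitting.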

>
> Comments?
>
>                  Linus

I think if we take the ptl lock outside the callback, we'd need to allow 
the callback to unlock, either to do non-atomic things or to avoid 
recursive locking if it decides to split in the callback (the split 
itself takes the same pmd lock). FWIW, the pud_entry() call is already 
made with the lock held, but from what I can tell the only current 
implementation happily ignores that.
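
That would probably mean the walker exposing the lock it took in some 
way, for example (purely hypothetical sketch; there is no walk->ptl 
field today, and foo_needs_pte_level() is made up):

static int foo_pmd_entry(pmd_t *pmd, unsigned long addr,
                         unsigned long next, struct mm_walk *walk)
{
        if (foo_needs_pte_level(pmd, walk)) {
                /*
                 * Drop the ptl the walker took before splitting, since
                 * the split takes the same pmd lock itself, ...
                 */
                spin_unlock(walk->ptl);         /* hypothetical field */
                split_huge_pmd(walk->vma, pmd, addr);
                spin_lock(walk->ptl);
                /* ... and tell the walker to continue at the pte level. */
                return 0;
        }

        /* otherwise handle the huge entry here */
        return 0;
}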

And if we allow unlocking, or call the callback unlocked, the callback 
needs to tell us whether the entry was actually handled or whether we 
need to fall back to the next level. Perhaps using a positive 
PAGE_WALK_FALLBACK return value? That would allow current 
implementations to remain unmodified.
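
In the walk_pmd_range() loop that could look roughly like this (sketch 
only, PAGE_WALK_FALLBACK being the proposed positive value):

#define PAGE_WALK_FALLBACK      1       /* proposed: not handled, descend */

                if (ops->pmd_entry) {
                        err = ops->pmd_entry(pmd, addr, next, walk);
                        if (err < 0)
                                break;
                        if (err == 0)           /* handled at the pmd level */
                                continue;
                        err = 0;                /* PAGE_WALK_FALLBACK */
                }

                if (!ops->pte_entry)
                        continue;

                /* Fall back: split and walk the pte level. */
                split_huge_pmd(walk->vma, pmd, addr);
                if (pmd_trans_unstable(pmd))
                        goto again;
                err = walk_pte_range(pmd, addr, next, walk);
                if (err)
                        break;

Existing pmd_entry users keep returning 0 or a negative error and see no 
behaviour change.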

/Thomas

