Date:   Tue, 9 Jan 2018 02:10:50 -0800
From:   Yu Zhao <yuzhao@...gle.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: don't expose page to fast gup before it's ready

On Tue, Jan 09, 2018 at 09:46:22AM +0100, Michal Hocko wrote:
> On Mon 08-01-18 14:56:32, Yu Zhao wrote:
> > We don't want to expose a page before it's properly set up. During
> > page setup, we may call page_add_new_anon_rmap(), which uses a
> > non-atomic bit op. If the page is exposed before setup is done, we
> > could overwrite page flags that are set by get_user_pages_fast() or
> > its callers. Here is a non-fatal scenario (there might be other
> > fatal problems that I didn't look into):
> > 
> > 	CPU 1				CPU 2
> > set_pte_at()			get_user_pages_fast()
> > page_add_new_anon_rmap()		gup_pte_range()
> > 	__SetPageSwapBacked()			SetPageReferenced()
> > 
> > Fix the problem by delaying set_pte_at() until the page is ready.
> 
> Have you seen this race happening in real workloads, or is this a
> code-review-based fix or a theoretical issue? I am primarily asking
> because the code has been like that at least throughout the git era
> and I do not remember any issue like this. If you can really trigger
> this tiny race window then we should mark the fix for stable.

I didn't observe the race directly. But I did get a few crashes when
trying to access the mem_cgroup of pages returned by
get_user_pages_fast(). Those pages were charged and showed a valid
mem_cgroup in the kdumps, which suggests the pte became visible to fast
gup before the charge was committed. So this led me to think the
problem came from a premature set_pte_at().
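
To illustrate the failure mode: once set_pte_at() has made the pte
visible, a get_user_pages_fast() caller can reach the struct page
before the fault path has committed the charge. Roughly (a simplified
sketch against a 4.15-era struct page; addr is a placeholder, this is
not the actual crashing code):

	struct page *page;
	struct mem_cgroup *memcg;

	/* fast gup can see the page as soon as set_pte_at() is done */
	if (get_user_pages_fast(addr, 1, 0, &page) != 1)
		return;
	/*
	 * If the pte was installed before mem_cgroup_commit_charge(),
	 * page->mem_cgroup may still be NULL here, even though a kdump
	 * taken moments later shows it valid.
	 */
	memcg = READ_ONCE(page->mem_cgroup);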

I think nobody has complained about this problem because the race only
happens when using ksm+swap, and even then it might not cause any fatal
problem. Nevertheless, it's nice to have set_pte_at() done consistently
after the rmap is added and the page is charged.
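
For reference, the ksm branch of do_swap_page() is where this ordering
matters; the patch essentially turns that path into the following
(simplified, see the diff for the exact change):

	} else { /* ksm created a completely new copy */
		page_add_new_anon_rmap(page, vma, vmf->address, false);
		mem_cgroup_commit_charge(page, memcg, false, false);
		lru_cache_add_active_or_unevictable(page, vma);
	}
	/*
	 * This used to run before the rmap/charge block above; now the
	 * page is only exposed to fast gup after it's fully set up.
	 */
	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);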

> Also, what prevents reordering here? There do not seem to be any
> barriers to prevent __SetPageSwapBacked from leaking past set_pte_at
> with your patch.

I assumed mem_cgroup_commit_charge() acted as a full barrier. Since you
explicitly asked the question, I realized my assumption doesn't hold
when memcg is disabled. So we do need something to prevent reordering
in my patch. And it brings up the question of whether we want to add
more barriers to the other places that call page_add_new_anon_rmap()
and set_pte_at().
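
Something along these lines, purely as a sketch (whether smp_wmb() is
the right primitive here, and which callers need it, is exactly the
open question):

	page_add_new_anon_rmap(page, vma, addr, false);
	mem_cgroup_commit_charge(page, memcg, false, false);
	lru_cache_add_active_or_unevictable(page, vma);
	/*
	 * Hypothetical: make the non-atomic flag updates above (e.g.
	 * __SetPageSwapBacked()) visible before the pte is. The gup
	 * side (gup_pte_range()) is ordered by the address dependency
	 * between the pte load and the later page->flags access.
	 */
	smp_wmb();
	set_pte_at(mm, addr, pte, entry);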
