Message-ID: <YY5eYEzsp0UKn4Xr@kernel.org>
Date:   Fri, 12 Nov 2021 14:30:24 +0200
From:   Mike Rapoport <rppt@...nel.org>
To:     "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
Cc:     "x86@...nel.org" <x86@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "hpa@...or.com" <hpa@...or.com>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "rppt@...ux.ibm.com" <rppt@...ux.ibm.com>,
        "Lutomirski, Andy" <luto@...nel.org>, "bp@...en8.de" <bp@...en8.de>
Subject: Re: [PATCH 4/4] x86/mm: replace GFP_ATOMIC with GFP_KERNEL for
 direct map allocations

On Thu, Nov 11, 2021 at 09:35:56PM +0000, Edgecombe, Rick P wrote:
> On Thu, 2021-11-11 at 13:02 +0200, Mike Rapoport wrote:
> > The allocations of the direct map pages mostly happen very early
> > during the system boot, and they use either the page table cache in
> > the brk area of bss or memblock.
> > 
> > The few callers that actually use the page allocator for the direct
> > map updates are gart_iommu_init() and memory hotplug. Neither of them
> > happens in an atomic context, so there is no reason to use GFP_ATOMIC
> > for these allocations.
> 
> There are some other places where these paths could get triggered.
> alloc_low_pages() gets called by a bunch of memremap_pages() callers.
> spp_getpage() gets called from the set_fixmap() family of functions. I
> guess you are saying those should not end up triggering an allocation
> post-after_bootmem?
> 
> I went ahead and did a search, and found this getting called from a
> timer callback:
> ghes_poll_func()
>   spin_lock_irqsave()
>   ghes_proc()
>     ghes_read_estatus()
>       __ghes_read_estatus()
>         ghes_copy_tofrom_phys()
>           ghes_map()
>             __set_fixmap()
>               ...spp_getpage()?
> 
> I’m not sure if it’s possible to hit, but potentially it could splat
> about not being able to sleep? It would depend on something else not
> already mapping the needed fixmap pte, which maybe would never happen.
> It seems a little rickety though.

The fixmap is less than 2M, so all of its page tables will be allocated
from the pgt cache/memblock, and __set_fixmap() will essentially be a
call to set_pte().

I'll see how to ensure that the page tables for the GHES fixmaps are
explicitly preallocated at init time.
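
Roughly along these lines, perhaps (a sketch only, not a tested patch;
the helper name and the use of populate_extra_pte() here are my
assumptions):

	/*
	 * Sketch: touch one address in each PMD-sized piece of the
	 * fixmap range at init time so that the PTE pages get
	 * allocated from the early page table cache / memblock while
	 * sleeping is still allowed.
	 */
	static void __init preallocate_fixmap_ptes(void)
	{
		unsigned long vaddr;

		for (vaddr = FIXADDR_START; vaddr < FIXADDR_TOP;
		     vaddr += PMD_SIZE)
			populate_extra_pte(vaddr);
	}

With something like that in place, a later __set_fixmap() from
ghes_map() should never reach the allocation in spp_getpage().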
 
> For alloc_low_pages(), I noticed the callers don’t check for allocation
> failure. I'm a little surprised that there haven't been reports of the
> allocation failing, because these operations could result in a lot more
> pages getting allocated way past boot, and failure causes a NULL
> pointer dereference.

The allocations at init time are really unlikely to fail.
As for memory hotplug, it will likely fail anyway if there is no memory,
but the failure may be attributed to an error elsewhere.

I'm all for adding checks for allocation errors, but I don't think this is
strictly related to this patch.
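
If someone picks that up separately, the shape of it would be roughly
this at each alloc_low_page() call site (illustrative only; the real
callers in arch/x86/mm/init_64.c would need to propagate the error up):

	pte_t *pte = alloc_low_page();

	/* today the NULL is dereferenced a few lines later and we oops */
	if (!pte)
		return -ENOMEM;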
 
> I checked over the alloc_low_pages() callers and I didn’t see any
> problems with removing GFP_ATOMIC, but I wonder if it should try harder
> to allocate. Or properly check for allocation failure in the callers,
> to remove the pre-existing risk of a crash. GFP_KERNEL doesn’t appear
> to make it any worse though, and I guess it is probably slightly less
> likely to crash.
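
For reference, the change itself boils down to dropping GFP_ATOMIC from
the after_bootmem path of alloc_low_pages() in arch/x86/mm/init.c,
roughly (quoting from memory, not the literal hunk):

 	if (after_bootmem) {
 		unsigned int order;

 		order = get_order((unsigned long)num << PAGE_SHIFT);
-		return (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
+		return (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
 	}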

-- 
Sincerely yours,
Mike.
