lists.openwall.net — Open Source and information security mailing list archives
Date: Thu, 21 Jan 2021 00:12:47 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Andy Lutomirski <luto@...nel.org>, Arnd Bergmann <arnd@...db.de>,
	Borislav Petkov <bp@...en8.de>, Catalin Marinas <catalin.marinas@....com>,
	Christopher Lameter <cl@...ux.com>, Dan Williams <dan.j.williams@...el.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	David Hildenbrand <david@...hat.com>,
	Elena Reshetova <elena.reshetova@...el.com>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
	James Bottomley <jejb@...ux.ibm.com>,
	"Kirill A. Shutemov" <kirill@...temov.name>,
	Mark Rutland <mark.rutland@....com>, Mike Rapoport <rppt@...ux.ibm.com>,
	Michael Kerrisk <mtk.manpages@...il.com>,
	Palmer Dabbelt <palmer@...belt.com>,
	Paul Walmsley <paul.walmsley@...ive.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Rick Edgecombe <rick.p.edgecombe@...el.com>, Roman Gushchin <guro@...com>,
	Shakeel Butt <shakeelb@...gle.com>, Shuah Khan <shuah@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>, Tycho Andersen <tycho@...ho.ws>,
	Will Deacon <will@...nel.org>, linux-api@...r.kernel.org,
	linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
	linux-nvdimm@...ts.01.org, linux-riscv@...ts.infradead.org,
	x86@...nel.org, Hagen Paul Pfeifer <hagen@...u.net>,
	Palmer Dabbelt <palmerdabbelt@...gle.com>
Subject: Re: [PATCH v15 07/11] secretmem: use PMD-size pages to amortize
 direct map fragmentation

On Wed, Jan 20, 2021 at 08:06:08PM +0200, Mike Rapoport wrote:
> +static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> {
> +	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> +	struct gen_pool *pool = ctx->pool;
> +	unsigned long addr;
> +	struct page *page;
> +	int err;
> +
> +	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
> +	if (!page)
> +		return -ENOMEM;

Does cma_alloc() zero the pages it allocates?  If not, where do we avoid
leaking kernel memory to userspace?