Date: Wed, 11 Nov 2020 15:53:21 +0100
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
	David Hildenbrand <david@...hat.com>,
	Michael Ellerman <mpe@...erman.id.au>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Paul Mackerras <paulus@...ba.org>,
	Rashmica Gupta <rashmica.g@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mike Rapoport <rppt@...nel.org>,
	Michal Hocko <mhocko@...e.com>,
	Oscar Salvador <osalvador@...e.de>,
	Wei Yang <richard.weiyang@...ux.alibaba.com>
Subject: [PATCH v2 7/8] powerpc/mm: remove linear mapping if __add_pages() fails in arch_add_memory()

Let's revert what we did in case something goes wrong and we return an
error - as already done on arm64 and s390x.

Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Paul Mackerras <paulus@...ba.org>
Cc: Rashmica Gupta <rashmica.g@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Mike Rapoport <rppt@...nel.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Oscar Salvador <osalvador@...e.de>
Cc: Wei Yang <richard.weiyang@...ux.alibaba.com>
Signed-off-by: David Hildenbrand <david@...hat.com>
---
 arch/powerpc/mm/mem.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index c5755b9efb64..8b946ec68d1b 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -170,7 +170,10 @@ int __ref arch_add_memory(int nid, u64 start, u64 size,
 	rc = arch_create_linear_mapping(nid, start, size, params);
 	if (rc)
 		return rc;
-	return __add_pages(nid, start_pfn, nr_pages, params);
+	rc = __add_pages(nid, start_pfn, nr_pages, params);
+	if (rc)
+		arch_remove_linear_mapping(start, size);
+	return rc;
 }
 
 void __ref arch_remove_memory(int nid, u64 start, u64 size,
-- 
2.26.2