Message-ID: <87bjzq3nw0.fsf@gmail.com>
Date: Fri, 11 Oct 2024 16:30:31 +0530
From: Ritesh Harjani (IBM) <ritesh.list@...il.com>
To: Michael Ellerman <mpe@...erman.id.au>, linuxppc-dev@...ts.ozlabs.org
Cc: linux-mm@...ck.org, Sourabh Jain <sourabhjain@...ux.ibm.com>,
 Hari Bathini <hbathini@...ux.ibm.com>, Zi Yan <ziy@...dia.com>,
 David Hildenbrand <david@...hat.com>,
 "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
 Mahesh J Salgaonkar <mahesh@...ux.ibm.com>,
 Madhavan Srinivasan <maddy@...ux.ibm.com>,
 "Aneesh Kumar K . V" <aneesh.kumar@...nel.org>,
 Donet Tom <donettom@...ux.vnet.ibm.com>,
 LKML <linux-kernel@...r.kernel.org>,
 Sachin P Bappalige <sachinpb@...ux.ibm.com>
Subject: Re: [RFC v2 0/4] cma: powerpc fadump fixes

Michael Ellerman <mpe@...erman.id.au> writes:

> "Ritesh Harjani (IBM)" <ritesh.list@...il.com> writes:
>> Please find v2 of the cma-related powerpc fadump fixes.
>>
>> Patch-1 is a change in mm/cma.c to make sure we return an error if someone uses
>> cma_init_reserved_mem() before the pageblock_order is initialized.
>>
>> I guess it's best if Patch-1 goes via the mm tree, and since the rest of the
>> changes are powerpc fadump fixes, those should go via the powerpc tree. Right?
>
> Yes I think that will work.
>
> Because there's no actual dependency on patch 1, correct?

There is no dependency, yes.

>
> Let's see if the mm folks are happy with the approach, and if so you
> should send patch 1 on its own, and patches 2-4 as a separate series.
>
> Then I can take the series (2-4) as fixes, and patch 1 can go via the mm
> tree (probably in next, not as a fix).
>

Sure. Since David has acked patch-1, let me split this into two series
as you suggested above and re-send both separately, so that they can be
picked up in their respective trees.
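
For context, the guard that patch-1 adds in cma_init_reserved_mem() in
mm/cma.c is roughly of the following shape. This is only an illustrative
sketch; the exact error code, message, and placement are placeholders here,
and the real change is in the patch itself.

int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
				 unsigned int order_per_bit, const char *name,
				 struct cma **res_cma)
{
	/*
	 * CMA areas are aligned to pageblock granularity, so reject callers
	 * that run before pageblock_order has been set up (i.e. too early
	 * in boot). Error code and message are illustrative placeholders.
	 */
	if (!pageblock_order) {
		pr_err("pageblock_order not yet initialized\n");
		return -EINVAL;
	}

	/* ... existing size/alignment checks and reservation setup ... */
	return 0;
}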

Will do that shortly. Thanks!

-ritesh


> cheers
>
>> v1 -> v2:
>> =========
>> 1. Addressed review comments from David: call fadump_cma_init() after
>>    pageblock_order is initialized, and catch cases where someone tries
>>    to call cma_init_reserved_mem() before pageblock_order is initialized.
>>
>> [v1]: https://lore.kernel.org/linuxppc-dev/c1e66d3e69c8d90988c02b84c79db5d9dd93f053.1728386179.git.ritesh.list@gmail.com/
>>
>> Ritesh Harjani (IBM) (4):
>>   cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
>>   fadump: Refactor and prepare fadump_cma_init for late init
>>   fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
>>   fadump: Move fadump_cma_init to setup_arch() after initmem_init()
>>
>>  arch/powerpc/include/asm/fadump.h  |  7 ++++
>>  arch/powerpc/kernel/fadump.c       | 55 +++++++++++++++---------------
>>  arch/powerpc/kernel/setup-common.c |  6 ++--
>>  mm/cma.c                           |  9 +++++
>>  4 files changed, 48 insertions(+), 29 deletions(-)
>>
>> --
>> 2.46.0
