Message-ID: <87y14kn8ev.fsf@kamlesh.i-did-not-set--mail-host-address--so-tickle-me>
Date: Sun, 25 Aug 2024 17:04:00 +0530
From: Kamlesh Gurudasani <kamlesh@...com>
To: Waiman Long <longman@...hat.com>,
    Steffen Klassert <steffen.klassert@...unet.com>,
    Daniel Jordan <daniel.m.jordan@...cle.com>,
    Andrew Morton <akpm@...ux-foundation.org>,
    Herbert Xu <herbert@...dor.apana.org.au>
CC: <linux-crypto@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [EXTERNAL] Re: [PATCH] padata: Honor the caller's alignment in
case of chunk_size 0

Waiman Long <longman@...hat.com> writes:
>
> On 8/21/24 17:02, Kamlesh Gurudasani wrote:
>> In the case where we are forcing the ps.chunk_size to be at least 1,
>> we are ignoring the caller's alignment.
>>
>> Move the forcing of ps.chunk_size to be at least 1 before rounding it
>> up to caller's alignment, so that caller's alignment is honored.
>>
>> While at it, use max() to force the ps.chunk_size to be at least 1 to
>> improve readability.
>>
>> Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")
>> Signed-off-by: Kamlesh Gurudasani <kamlesh@...com>
>> ---
>> kernel/padata.c | 12 ++++--------
>> 1 file changed, 4 insertions(+), 8 deletions(-)
>>
>> diff --git a/kernel/padata.c b/kernel/padata.c
>> index 0fa6c2895460..d8a51eff1581 100644
>> --- a/kernel/padata.c
>> +++ b/kernel/padata.c
>> @@ -509,21 +509,17 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>>
>> /*
>> * Chunk size is the amount of work a helper does per call to the
>> - * thread function. Load balance large jobs between threads by
>> + * thread function. Load balance large jobs between threads by
>> * increasing the number of chunks, guarantee at least the minimum
>> * chunk size from the caller, and honor the caller's alignment.
>> + * Ensure chunk_size is at least 1 to prevent divide-by-0
>> + * panic in padata_mt_helper().
>> */
>> ps.chunk_size = job->size / (ps.nworks * load_balance_factor);
>> ps.chunk_size = max(ps.chunk_size, job->min_chunk);
>> + ps.chunk_size = max(ps.chunk_size, 1ul);
>> ps.chunk_size = roundup(ps.chunk_size, job->align);
>>
>> - /*
>> - * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
>> - * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().
>> - */
>> - if (!ps.chunk_size)
>> - ps.chunk_size = 1U;
>> -
>> list_for_each_entry(pw, &works, pw_list)
>> if (job->numa_aware) {
>> int old_node = atomic_read(&last_used_nid);
>>
>> ---
>> base-commit: b311c1b497e51a628aa89e7cb954481e5f9dced2
>> change-id: 20240822-max-93c17adc6457
>
> LGTM, my only nit is the use of "1ul" which is less common and harder to
> read than "1UL" as the former one may be misread as a "lul" variable.
>
> Acked-by: Waiman Long <longman@...hat.com>

Thanks for the Acked-by, Waiman. I understand your point, but Daniel seems
to be okay with this, so I will keep it as is this time.

Cheers,
Kamlesh
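
P.S. For anyone following along, here is a minimal standalone sketch of why
the ordering matters. It is not part of the patch; the values for size,
nworks, load_balance_factor, min_chunk and align below are made up purely
for illustration, and max()/roundup() are open-coded to mirror the kernel
macros so this compiles as a plain userspace program.

/*
 * Illustrative only: compares the old order (roundup, then force a
 * minimum of 1) against the new order (force a minimum of 1, then
 * roundup) for the chunk_size computation in padata_do_multithreaded().
 */
#include <stdio.h>

#define max(a, b)     ((a) > (b) ? (a) : (b))
#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	/* Made-up job parameters for demonstration. */
	unsigned long size = 0, nworks = 4, load_balance_factor = 4;
	unsigned long min_chunk = 0, align = 8;
	unsigned long old_way, new_way;

	/* Old order: round up to align first, then force at least 1. */
	old_way = size / (nworks * load_balance_factor);
	old_way = max(old_way, min_chunk);
	old_way = roundup(old_way, align);	/* roundup(0, 8) == 0 */
	if (!old_way)
		old_way = 1;			/* 1 is not a multiple of align */

	/* New order: force at least 1 first, then round up to align. */
	new_way = size / (nworks * load_balance_factor);
	new_way = max(new_way, min_chunk);
	new_way = max(new_way, 1UL);
	new_way = roundup(new_way, align);	/* 8, honors the alignment */

	printf("old: %lu, new: %lu\n", old_way, new_way);
	return 0;
}

With these made-up values the old order ends up with chunk_size == 1, which
is not a multiple of the caller's align, while the new order yields 8, so
the caller's alignment is honored.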