Message-ID: <bc42aa9c-2dc3-454e-800b-43928ac60a6d@amd.com>
Date: Fri, 13 Jun 2025 13:07:27 -0500
From: Tanmay Shah <tanmay.shah@....com>
To: Mathieu Poirier <mathieu.poirier@...aro.org>
Cc: andersson@...nel.org, linux-remoteproc@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] remoteproc: xlnx: allow single core use in split mode



On 6/13/25 12:18 PM, Mathieu Poirier wrote:
> Good day,
> 
> On Tue, Jun 10, 2025 at 12:27:38PM -0700, Tanmay Shah wrote:
>> It's a valid use case to have only one core enabled in cluster in split
>> mode. Remove exact core count expectation from the driver.
> 
> I suggest:
> 
> "When operating in split mode, it is a valid usecase to have only one core
> enabled in the cluster. Remove..."
> 

Ack, will update commit message in next rev.

>>
>> Signed-off-by: Tanmay Shah <tanmay.shah@....com>
>> ---
>>
>> Change in v2:
>>    - limit core_count to max 2
>>
>>   drivers/remoteproc/xlnx_r5_remoteproc.c | 5 +----
>>   1 file changed, 1 insertion(+), 4 deletions(-)
>>
>> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
>> index 1af89782e116..a1beaa2acc96 100644
>> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
>> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
>> @@ -1336,12 +1336,9 @@ static int zynqmp_r5_cluster_init(struct zynqmp_r5_cluster *cluster)
>>   	 * and ignore core1 dt node.
>>   	 */
>>   	core_count = of_get_available_child_count(dev_node);
>> -	if (core_count == 0) {
>> +	if (core_count == 0 || core_count > 2) {
>>   		dev_err(dev, "Invalid number of r5 cores %d", core_count);
>>   		return -EINVAL;
>> -	} else if (cluster_mode == SPLIT_MODE && core_count != 2) {
>> -		dev_err(dev, "Invalid number of r5 cores for split mode\n");
>> -		return -EINVAL;
>>   	} else if (cluster_mode == LOCKSTEP_MODE && core_count == 2) {
>>   		dev_warn(dev, "Only r5 core0 will be used\n");
>>   		core_count = 1;
> 
> Regarding the specific usecase where, in split mode, a single core is
> enabled: can it be either core0 or core1, or does it have to be core0?
> 

It can be either core0 or core1; it doesn't have to be core0.

> Is the code in the driver ready to handle this configuration?
> 

Yes, the driver will handle all of the following cases correctly.

Case 1: core0 enabled, core1 disabled
Case 2: core0 disabled, core1 enabled
Case 3: core0 enabled, core1 enabled

I will document all cases in the comment in the driver.
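
Something along these lines (a rough sketch based on the v2 diff above;
the exact comment wording may change in the next revision):

	/*
	 * In split mode any combination of enabled cores is valid:
	 * only core0, only core1, or both. In lockstep mode only
	 * core0 is used; if both core nodes are enabled, the core1
	 * dt node is ignored.
	 */
	core_count = of_get_available_child_count(dev_node);
	if (core_count == 0 || core_count > 2) {
		dev_err(dev, "Invalid number of r5 cores %d", core_count);
		return -EINVAL;
	} else if (cluster_mode == LOCKSTEP_MODE && core_count == 2) {
		dev_warn(dev, "Only r5 core0 will be used\n");
		core_count = 1;
	}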

> The inline comments you already have explaining the possible configurations
> need to be updated to address this new usecase.
> 
> Thanks,
> Mathieu
> 
>>
>> base-commit: dc8417021bcd01914a416bf8bab811a6c5e7d99a
>> -- 
>> 2.34.1
>>

