Message-ID: <675fd7bf-7551-4f95-9b9c-8a2151e59ee1@rock-chips.com>
Date:   Fri, 21 Jul 2017 15:54:40 +0800
From:   xxm <xxm@...k-chips.com>
To:     Heiko Stuebner <heiko@...ech.de>
Cc:     Joerg Roedel <joro@...tes.org>, linux-rockchip@...ts.infradead.org,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2 2/3] iommu/rockchip: add multi irqs support

Hi Heiko,


On 07/21/2017 03:07 PM, Heiko Stuebner wrote:
> Am Freitag, 21. Juli 2017, 14:27:09 CEST schrieb Simon Xue:
>> From: Simon <xxm@...k-chips.com>
>>
>> The RK3368 VPU MMU has two IRQs; this patch adds support for multiple IRQs.
>>
>> Signed-off-by: Simon <xxm@...k-chips.com>
>> ---
>> changes since V1:
>>   - use devm_kcalloc instead of devm_kzalloc when alloc irq array
>>
>>   drivers/iommu/rockchip-iommu.c | 34 ++++++++++++++++++++++++----------
>>   1 file changed, 24 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
>> index 4ba48a2..3c462c0 100644
>> --- a/drivers/iommu/rockchip-iommu.c
>> +++ b/drivers/iommu/rockchip-iommu.c
>> @@ -90,7 +90,8 @@ struct rk_iommu {
>>   	struct device *dev;
>>   	void __iomem **bases;
>>   	int num_mmu;
>> -	int irq;
>> +	int *irq;
>> +	int num_irq;
>>   	struct iommu_device iommu;
>>   	struct list_head node; /* entry in rk_iommu_domain.iommus */
>>   	struct iommu_domain *domain; /* domain to which iommu is attached */
>> @@ -825,10 +826,12 @@ static int rk_iommu_attach_device(struct iommu_domain *domain,
>>   
>>   	iommu->domain = domain;
>>   
>> -	ret = devm_request_irq(iommu->dev, iommu->irq, rk_iommu_irq,
>> -			       IRQF_SHARED, dev_name(dev), iommu);
>> -	if (ret)
>> -		return ret;
>> +	for (i = 0; i < iommu->num_irq; i++) {
>> +		ret = devm_request_irq(iommu->dev, iommu->irq[i], rk_iommu_irq,
>> +				       IRQF_SHARED, dev_name(dev), iommu);
>> +		if (ret)
>> +			return ret;
>> +	}
>>   
>>   	for (i = 0; i < iommu->num_mmu; i++) {
>>   		rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR,
>> @@ -878,7 +881,8 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
>>   	}
>>   	rk_iommu_disable_stall(iommu);
>>   
>> -	devm_free_irq(iommu->dev, iommu->irq, iommu);
>> +	for (i = 0; i < iommu->num_irq; i++)
>> +		devm_free_irq(iommu->dev, iommu->irq[i], iommu);
>>   
>>   	iommu->domain = NULL;
>>   
>> @@ -1157,10 +1161,20 @@ static int rk_iommu_probe(struct platform_device *pdev)
>>   	if (iommu->num_mmu == 0)
>>   		return PTR_ERR(iommu->bases[0]);
>>   
>> -	iommu->irq = platform_get_irq(pdev, 0);
>> -	if (iommu->irq < 0) {
>> -		dev_err(dev, "Failed to get IRQ, %d\n", iommu->irq);
>> -		return -ENXIO;
>> +	while (platform_get_irq(pdev, iommu->num_irq) >= 0)
>> +		iommu->num_irq++;
> Hmm, this could also result in an iommu having 0 irqs if wrongly
> configured, and probe would still succeed. This sounds somehow
> wrong to me.
>
> But I'm not sure if there is precedent on how to handle a variable
> number of interrupts correctly somewhere.

How about adding a check for iommu->num_irq? Like this:
if (!iommu->num_irq)
	return -ENXIO;
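
For reference, the probe path with that check folded in would read roughly
like this (untested sketch against the V2 diff above; the error message
wording is just an example):

	while (platform_get_irq(pdev, iommu->num_irq) >= 0)
		iommu->num_irq++;

	/* Reject a node that describes no interrupt at all. */
	if (!iommu->num_irq) {
		dev_err(dev, "no IRQ found for IOMMU\n");
		return -ENXIO;
	}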

>
> Heiko
>
>> +
>> +	iommu->irq = devm_kcalloc(dev, iommu->num_irq, sizeof(*iommu->irq),
>> +				  GFP_KERNEL);
>> +	if (!iommu->irq)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < iommu->num_irq; i++) {
>> +		iommu->irq[i] = platform_get_irq(pdev, i);
>> +		if (iommu->irq[i] < 0) {
>> +			dev_err(dev, "Failed to get IRQ, %d\n", iommu->irq[i]);
>> +			return -ENXIO;
>> +		}
>>   	}
>>   
>>   	err = iommu_device_sysfs_add(&iommu->iommu, dev, NULL, dev_name(dev));
>>

