Message-ID: <556230A9.8030307@mentor.com>
Date: Sun, 24 May 2015 23:12:25 +0300
From: Vladimir Zapolskiy <vladimir_zapolskiy@...tor.com>
To: Philipp Zabel <p.zabel@...gutronix.de>
CC: Heiko Stübner <heiko@...ech.de>,
Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 1/9] misc: sram: fix enabled clock leak on error path
Hi Philipp,
On 20.05.2015 14:30, Philipp Zabel wrote:
> Hi Vladimir,
>
> Am Dienstag, den 19.05.2015, 16:11 +0300 schrieb Vladimir Zapolskiy:
>> Hi Philipp,
>>
>> On 19.05.2015 13:41, Philipp Zabel wrote:
>>> Am Montag, den 18.05.2015, 22:08 +0300 schrieb Vladimir Zapolskiy:
>>>> If devm_gen_pool_create() fails, the previously enabled sram->clk is
>>>> not disabled on probe() exit.
>>>>
>>>> Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@...tor.com>
>>>> ---
>>>> drivers/misc/sram.c | 9 +++++----
>>>> 1 file changed, 5 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
>>>> index eeaaf5f..b44a423 100644
>>>> --- a/drivers/misc/sram.c
>>>> +++ b/drivers/misc/sram.c
>>>> @@ -90,16 +90,17 @@ static int sram_probe(struct platform_device *pdev)
>>>> if (!sram)
>>>> return -ENOMEM;
>>>>
>>>> + sram->pool = devm_gen_pool_create(&pdev->dev,
>>>> + ilog2(SRAM_GRANULARITY), -1);
>>>> + if (!sram->pool)
>>>> + return -ENOMEM;
>>>> +
>>>> sram->clk = devm_clk_get(&pdev->dev, NULL);
>>>> if (IS_ERR(sram->clk))
>>>> sram->clk = NULL;
>>>> else
>>>> clk_prepare_enable(sram->clk);
>>>
>>> Here you move sram->clk around, and later in patch 7 it gets moved
>>> again. To me it looks like the two should be squashed together.
>>
>> I agree with you, instead of moving sram->pool up it is better to place
>> sram->clk right at the end of probe(), in other words this patch can be
>> safely merged with patch 7 and the series becomes a bit shorter.
>>
>> Thank you for the finding, I'm going to resend the change, please let me
>> know your opinion about "%pa" vs "0x%llx", if it is needed to be changed
>> or not.
>
> I'd prefer to use %pa for the phys_addr_t types. You could argue that
> %pa is inappropriate as those are addresses relative to the SRAM region,
> not physical addresses as seen by the CPU. But following that argument,
> using phys_addr_t in the first place would not be correct either.
The driver uses the genalloc gen_pool_add_virt() function, whose
corresponding argument is of type phys_addr_t (and that is correct in my
opinion); the interface of this function dictates the argument types on
the client's side.
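
For reference, the prototype from include/linux/genalloc.h and a rough
sketch of the call in the driver (just an illustration, variable names
taken from the code under discussion):

	int gen_pool_add_virt(struct gen_pool *pool, unsigned long virt,
			      phys_addr_t phys, size_t size, int nid);

	/* sketch: block->start is an offset into the SRAM resource,
	 * so res->start + block->start is the chunk's physical address */
	ret = gen_pool_add_virt(sram->pool,
				(unsigned long)virt_base + block->start,
				res->start + block->start,
				block->size, -1);
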
> Which leads me to question whether we will see larger than 4 GiB SRAM
> regions in the foreseeable future?
The question is not only about the SRAM region size, it is mainly about
the base address of the SRAM, and I don't want to exclude a situation
where some kind of SRAM device is found outside of the u32-addressable
memory space. Actually, I believe an arbitrary physical memory region may
be claimed as if it were an "SRAM device" and the driver would still work
fine.
If phys_addr_t arguments are accepted, then back to the "%pa" vs "0x%llx"
question:
From: Philipp Zabel <p.zabel@...gutronix.de>
Date: Tue, 19 May 2015 12:38:37 +0200
In-Reply-To:
<1431976122-4228-4-git-send-email-vladimir_zapolskiy@...tor.com>
> Now that block->start is of type phys_addr_t, is there a reason
> not to use %pa ?
[snip]
>> - dev_dbg(&pdev->dev, "adding chunk 0x%lx-0x%lx\n",
>> - cur_start, cur_start + cur_size);
>> + dev_dbg(&pdev->dev, "adding chunk 0x%llx-0x%llx\n",
>> + (unsigned long long)cur_start,
>> + (unsigned long long)cur_start + cur_size);
>
> dev_dbg(&pdev->dev, "adding chunk 0x%pa-0x%pa\n",
> &cur_start, &block->start);
>
I see that the change preserves the functionality, so I'll change the
printk format to "%pa" and resend the series tomorrow.
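
For the record, the hunk above would then roughly become (a sketch only;
since %pa takes the phys_addr_t by reference and already prints the 0x
prefix, the end address needs its own variable and the literal "0x" goes
away):

	phys_addr_t cur_end = cur_start + cur_size;

	dev_dbg(&pdev->dev, "adding chunk %pa-%pa\n",
		&cur_start, &cur_end);
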
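And as agreed above this patch will be folded into patch 7, so in v4 the
probe() flow should roughly look like the sketch below (only an
illustration reusing the names from the quoted hunk, not the actual
patch; the chunk setup in the middle is omitted):

static int sram_probe(struct platform_device *pdev)
{
	struct sram_dev *sram;

	sram = devm_kzalloc(&pdev->dev, sizeof(*sram), GFP_KERNEL);
	if (!sram)
		return -ENOMEM;

	sram->pool = devm_gen_pool_create(&pdev->dev,
					  ilog2(SRAM_GRANULARITY), -1);
	if (!sram->pool)
		return -ENOMEM;

	/* ... map the region and add the chunks to the pool here;
	 * any failure may simply return, there is nothing to unwind ... */

	/* enable the optional clock last, once nothing can fail anymore */
	sram->clk = devm_clk_get(&pdev->dev, NULL);
	if (IS_ERR(sram->clk))
		sram->clk = NULL;
	else
		clk_prepare_enable(sram->clk);

	platform_set_drvdata(pdev, sram);

	return 0;
}
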
--
With best wishes,
Vladimir