Message-ID: <a748e4b2-70f6-4bc4-b48a-21816848fee2@linaro.org>
Date: Thu, 15 Aug 2024 17:16:54 +0100
From: Bryan O'Donoghue <bryan.odonoghue@...aro.org>
To: Depeng Shao <quic_depengs@...cinc.com>, rfoss@...nel.org,
 todor.too@...il.com, mchehab@...nel.org, robh@...nel.org,
 krzk+dt@...nel.org, conor+dt@...nel.org
Cc: linux-arm-msm@...r.kernel.org, linux-media@...r.kernel.org,
 devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
 kernel@...cinc.com, Yongsheng Li <quic_yon@...cinc.com>
Subject: Re: [PATCH 13/13] media: qcom: camss: Add support for VFE hardware
 version Titan 780

On 15/08/2024 14:33, Depeng Shao wrote:
> Hi Bryan,
> 
> On 8/15/2024 12:23 AM, Bryan O'Donoghue wrote:
> 
>>> @@ -674,15 +675,17 @@ int vfe_reset(struct vfe_device *vfe)
>>>   {
>>>       unsigned long time;
>>> -    reinit_completion(&vfe->reset_complete);
>>> +    if (vfe->res->hw_ops->global_reset) {
>>> +        reinit_completion(&vfe->reset_complete);
>>> -    vfe->res->hw_ops->global_reset(vfe);
>>> +        vfe->res->hw_ops->global_reset(vfe);
>>> -    time = wait_for_completion_timeout(&vfe->reset_complete,
>>> -        msecs_to_jiffies(VFE_RESET_TIMEOUT_MS));
>>> -    if (!time) {
>>> -        dev_err(vfe->camss->dev, "VFE reset timeout\n");
>>> -        return -EIO;
>>> +        time = wait_for_completion_timeout(&vfe->reset_complete,
>>> +            msecs_to_jiffies(VFE_RESET_TIMEOUT_MS));
>>> +        if (!time) {
>>> +            dev_err(vfe->camss->dev, "VFE reset timeout\n");
>>> +            return -EIO;
>>> +        }
>>
>> Per my comment on the CSID - this feels like a fix you are introducing 
>> here in the guise of a silicon add.
>>
>> Please break it up.
>>
>> If you have a number of fixes to core functionality, they need to be:
>>
>> 1. Granular and individual
>> 2. Individually scrutable, with their own patch and description
>> 3. git cherry-pickable
>> 4. Have a Fixes tag
>> 5. And be cc'd to stable@...r.kernel.org
>>
>> Can't accept either the fixes or the silicon add if the two live mixed 
>> up in one patch.
>>
> 
> This isn't a bug fix. The null pointer checks are only added because vfe780 
> doesn't have the enable_irq/global_reset/isr/vfe_halt hw_ops, so the checks 
> and the new hw_ops are added together in this one patch.
> The original code doesn't have any bug.

Right but you could just have

static void vfe_global_reset(struct vfe_device *vfe)
{
         /* VFE780 has no global reset, simply report a completion */
         complete(&vfe->reset_complete);
}

const struct vfe_hw_ops vfe_ops_780 = {
	.global_reset = vfe_global_reset,
};

Instead of having a bunch of special cases in the top level code.
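
To be explicit - with a stub op like that, the top-level vfe_reset()
can stay exactly as it is upstream. Roughly like the below; this is
just the pre-patch code from your hunk above plus the trailing
return, so treat it as a sketch rather than a drop-in:

int vfe_reset(struct vfe_device *vfe)
{
	unsigned long time;

	/* Unchanged common path: the 780 stub op completes immediately */
	reinit_completion(&vfe->reset_complete);

	vfe->res->hw_ops->global_reset(vfe);

	time = wait_for_completion_timeout(&vfe->reset_complete,
		msecs_to_jiffies(VFE_RESET_TIMEOUT_MS));
	if (!time) {
		dev_err(vfe->camss->dev, "VFE reset timeout\n");
		return -EIO;
	}

	return 0;
}

The special-casing then lives entirely in the per-SoC ops table rather
than leaking into the common code path.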

> 
>>> diff --git a/drivers/media/platform/qcom/camss/camss-vfe.h b/drivers/media/platform/qcom/camss/camss-vfe.h
>>> index fcbf4f609129..9dec5bc0d1b1 100644
>>> --- a/drivers/media/platform/qcom/camss/camss-vfe.h
>>> +++ b/drivers/media/platform/qcom/camss/camss-vfe.h
>>> @@ -243,6 +243,7 @@ extern const struct vfe_hw_ops vfe_ops_4_7;
>>>   extern const struct vfe_hw_ops vfe_ops_4_8;
>>>   extern const struct vfe_hw_ops vfe_ops_170;
>>>   extern const struct vfe_hw_ops vfe_ops_480;
>>> +extern const struct vfe_hw_ops vfe_ops_780;
>>>   int vfe_get(struct vfe_device *vfe);
>>>   void vfe_put(struct vfe_device *vfe);
>>> diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
>>> index 7ee102948dc4..92a0fa02e415 100644
>>> --- a/drivers/media/platform/qcom/camss/camss.c
>>> +++ b/drivers/media/platform/qcom/camss/camss.c
>>> @@ -1666,6 +1666,125 @@ static const struct camss_subdev_resources csid_res_8550[] = {
>>>       }
>>>   };
>>> +static const struct camss_subdev_resources vfe_res_8550[] = {
>>> +    /* VFE0 */
>>> +    {
>>> +        .regulators = {},
>>> +        .clock = { "gcc_axi_hf", "cpas_ahb", "cpas_fast_ahb_clk", "vfe0_fast_ahb",
>>> +               "vfe0", "cpas_vfe0", "camnoc_axi" },
>>
>> Should the camnoc AXI clock go here or in the CSID ?
>>
> 
> camnoc is responsible for DDR writes, so it is needed for the WM in the VFE.

Right, I don't recall whether you specified the clock for both CSID and 
VFE, but just for the record it should appear in only the one block. VFE 
sounds good to me.

> 
> 
>>> +    /* VFE4 lite */
>>> +    {
>>> +        .regulators = {},
>>> +        .clock = { "gcc_axi_hf", "cpas_ahb", "cpas_fast_ahb_clk", "vfe_lite_ahb",
>>> +               "vfe_lite", "cpas_ife_lite", "camnoc_axi" },
>>> +        .clock_rate = {    { 0, 0, 0, 0, 0 },
>>> +                { 0, 0, 0, 0, 80000000 },
>>> +                { 300000000, 300000000, 400000000, 400000000, 400000000 },
>>> +                { 300000000, 300000000, 400000000, 400000000, 400000000 },
>>
>> I realise you're specifying all of the operating points here, but each 
>> clock rate only needs to appear once, i.e.
>>
>> 1 x 300 MHz
>> 1 x 400 MHz
>> 1 x 480 MHz
>>
>> etc.
>>
> 
> Sure, will update in next series.
> 
>>> +                { 400000000, 480000000, 480000000, 480000000, 480000000 },
>>> +                { 300000000, 300000000, 400000000, 400000000, 400000000 },
>>> +                { 300000000, 300000000, 400000000, 400000000, 400000000 } },
>>> +        .reg = { "vfe_lite1" },
>>> +        .interrupt = { "vfe_lite1" },
>>> +        .vfe = {
>>> +            .line_num = 4,
>>> +            .is_lite = true,
>>> +            .hw_ops = &vfe_ops_780,
>>> +            .formats_rdi = &vfe_formats_rdi_845,
>>> +            .formats_pix = &vfe_formats_pix_845
>>> +        }
>>> +    },
>>> +};
> 
>>> +void camss_reg_update(struct camss *camss, int hw_id, int port_id, bool is_clear)
>>> +{
>>> +    struct csid_device *csid;
>>> +
>>> +    if (hw_id < camss->res->csid_num) {
>>
>> Does this check actually do anything? Is it just defensive programming? Can 
>> the hw_id index exceed the number of CSIDs defined, and if so, why?
>>
>> Smells wrong.
>>
> 
> It is just defensive programming, like the null pointer checks.

Right, so please drop it then. We require the indexes to be in range in 
order to merge; our job is to make sure that is so.

Let's just reason about the code and make sure the indexes are right.
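
i.e. something like the below - note the csid field and op names here 
are just my guess at the shape, not lifted from your series, so adjust 
to whatever the real helpers are called:

void camss_reg_update(struct camss *camss, int hw_id, int port_id, bool is_clear)
{
	/* hw_id is trusted to be a valid CSID index; callers guarantee it */
	struct csid_device *csid = &camss->csid[hw_id];

	/* hypothetical op name - call whatever the series actually adds */
	csid->res->hw_ops->reg_update(csid, port_id, is_clear);
}

No range check, no silent early return - if a caller can ever pass an 
out-of-range hw_id, that is the bug we should fix instead.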

---
bod

