Message-ID: <1fec0936-a7dd-43e4-8ad6-a18df35c6d9d@huawei.com>
Date: Wed, 8 Jan 2025 17:59:10 +0800
From: wangyushan <wangyushan12@...wei.com>
To: Christophe JAILLET <christophe.jaillet@...adoo.fr>,
<xuwei5@...ilicon.com>, <yangyicong@...ilicon.com>,
<Jonathan.Cameron@...wei.com>, <wangjie125@...wei.com>,
<linux-kernel@...r.kernel.org>
CC: <prime.zeng@...ilicon.com>, <fanghao11@...wei.com>, <linuxarm@...wei.com>
Subject: Re: [PATCH 1/2] soc cache: Add framework driver for HiSilicon SoC
cache
On 2025/1/8 3:05, Christophe JAILLET wrote:
> Le 07/01/2025 à 14:29, Yushan Wang a écrit :
>> From: Jie Wang <wangjie125@...wei.com>
>>
>> The HiSilicon SoC cache comprises multiple hardware devices. The
>> driver in this patch provides common utilities for the other drivers
>> to avoid redundancy.
>
> ...
>
>> +static int hisi_soc_cache_lock(int cpu, phys_addr_t addr, size_t size)
>> +{
>> +	struct hisi_soc_comp_inst *inst;
>> +	struct list_head *head;
>> +	int ret = -ENOMEM;
>> +
>> +	guard(spinlock)(&soc_cache_devs[HISI_SOC_L3C].lock);
>> +
>> +	/* Iterate L3C instances to perform operation, break loop once found. */
>> +	head = &soc_cache_devs[HISI_SOC_L3C].node;
>> +	list_for_each_entry(inst, head, node) {
>> +		if (!cpumask_test_cpu(cpu, &inst->comp->affinity_mask))
>> +			continue;
>> +		ret = inst->comp->ops->do_lock(inst->comp, addr, size);
>> +		if (ret)
>> +			return ret;
>> +		break;
>> +	}
>> +
>> +	list_for_each_entry(inst, head, node) {
>
> Do we need to iterate another time?
> Isn't "inst" already correct?
>
> If so, I guess that:
>     ret = inst->comp->ops->poll_lock_done(inst->comp, addr, size);
>     if (ret)
>         return ret;
>
> could be moved to the end of the previous loop to both simplify the
> code and save a few cycles.
Yes, will fix that in the next version.
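Something like this, I suppose (untested sketch, keeping the helpers
from this patch unchanged; the instance matched in the first loop is
the same one the second loop would match, as you pointed out):

	head = &soc_cache_devs[HISI_SOC_L3C].node;
	list_for_each_entry(inst, head, node) {
		if (!cpumask_test_cpu(cpu, &inst->comp->affinity_mask))
			continue;
		/* Lock, then poll for completion on the same instance. */
		ret = inst->comp->ops->do_lock(inst->comp, addr, size);
		if (ret)
			return ret;
		ret = inst->comp->ops->poll_lock_done(inst->comp, addr, size);
		break;
	}

	return ret;

The same pattern applies to hisi_soc_cache_unlock() below.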
>
>> +		if (!cpumask_test_cpu(cpu, &inst->comp->affinity_mask))
>> +			continue;
>> +		ret = inst->comp->ops->poll_lock_done(inst->comp, addr, size);
>> +		if (ret)
>> +			return ret;
>> +		break;
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int hisi_soc_cache_unlock(int cpu, phys_addr_t addr)
>> +{
>> +	struct hisi_soc_comp_inst *inst;
>> +	struct list_head *head;
>> +	int ret = 0;
>> +
>> +	guard(spinlock)(&soc_cache_devs[HISI_SOC_L3C].lock);
>> +
>> +	/* Iterate L3C instances to perform operation, break loop once found. */
>> +	head = &soc_cache_devs[HISI_SOC_L3C].node;
>> +	list_for_each_entry(inst, head, node) {
>> +		if (!cpumask_test_cpu(cpu, &inst->comp->affinity_mask))
>> +			continue;
>> +		ret = inst->comp->ops->do_unlock(inst->comp, addr);
>> +		if (ret)
>> +			return ret;
>> +		break;
>> +	}
>> +
>> +	list_for_each_entry(inst, head, node) {
>
> Same as above.
Will fix here as well.
>
>> +		if (!cpumask_test_cpu(cpu, &inst->comp->affinity_mask))
>> +			continue;
>> +		ret = inst->comp->ops->poll_unlock_done(inst->comp, addr);
>> +		if (ret)
>> +			return ret;
>> +		break;
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int hisi_soc_cache_inst_check(const struct hisi_soc_comp *comp,
>> +				     enum hisi_soc_comp_type comp_type)
>> +{
>> +	struct hisi_soc_comp_ops *ops = comp->ops;
>> +
>> +	/* Different types of component could have different ops. */
>> +	switch (comp_type) {
>> +	case HISI_SOC_L3C:
>> +		if (!ops->do_lock || !ops->poll_lock_done
>> +		    || !ops->do_unlock || !ops->poll_unlock_done)
>
> I think that || should be at the end of the previous line.
> If I remember correctly, checkpatch (maybe with --strict) complains
> about it.
Yes, running checkpatch with --strict does generate some CHECK messages.
Will fix this in the next version.
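For reference, the condition would then read (just the same check
reflowed, nothing else changed):

	if (!ops->do_lock || !ops->poll_lock_done ||
	    !ops->do_unlock || !ops->poll_unlock_done)
		return -EINVAL;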
Thanks!
Regards,
Yushan
>
>> +			return -EINVAL;
>> +		break;
>> +	default:
>> +		return -EINVAL;
>> +	}
>> +
>> +	return 0;
>> +}
>
> ...
>
> CJ
>