Message-ID: <3083c357-9684-45d3-a9c7-2cd2912275a1@stanley.mountain>
Date: Thu, 3 Oct 2024 15:03:08 +0300
From: Dan Carpenter <dan.carpenter@...aro.org>
To: Waiman Long <longman@...hat.com>
Cc: Yu Kuai <yukuai3@...wei.com>, Tejun Heo <tj@...nel.org>,
Josef Bacik <josef@...icpanda.com>, Jens Axboe <axboe@...nel.dk>,
cgroups@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel-janitors@...r.kernel.org,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v2] blk_iocost: remove some duplicate irq disable/enables
On Wed, Oct 02, 2024 at 02:40:52PM -0400, Waiman Long wrote:
>
> On 10/2/24 14:10, Dan Carpenter wrote:
> > On Wed, Oct 02, 2024 at 01:49:48PM -0400, Waiman Long wrote:
> > > > - spin_unlock_irq(&ioc->lock);
> > > > + spin_unlock(&ioc->lock);
> > > > return 0;
> > > > }
> > > I would suggest adding a "lockdep_assert_irqs_disabled()" call before
> > > spin_lock() to confirm that irq is indeed disabled just in case the callers
> > > are changed in the future.
> > It's really hard to predict future bugs. I doubt we'll add new callers.
> > Outputting this information to a struct seq_file *sf is pretty specific.
> >
> > If there were a bug related to this, then wouldn't it be caught by lockdep?
> >
> > The other idea is that we could catch bugs like this using static analysis.
> > Like every time we take the &ioc->lock, either IRQs should already be disabled
> > or we disable it ourselves. I could write a Smatch check like this.
> >
> > KTODO: add Smatch check to ensure IRQs are disabled for &ioc->lock
>
> This is just a suggestion and it is fine if you don't think it is necessary.
> The call can also serve as a comment that irq should have been disabled at
> this point.
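For the record, the suggestion would look something like this (just a
sketch against the hunk quoted above, with "..." standing in for the
existing body of the function):

	lockdep_assert_irqs_disabled();	/* callers must have IRQs off */
	spin_lock(&ioc->lock);
	...
	spin_unlock(&ioc->lock);
	return 0;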
I mean, it's good to think about preventing future bugs.  I just feel like
adding asserts is more useful when there are a lot of call paths.  Meanwhile,
if we add a static checker rule then we're probably going to find bugs.
Boom, maybe I've found one already?:
block/blk-iocost.c:3144 ioc_weight_write() warn: expected irq_disable for '&iocg->ioc->lock'
block/blk-iocost.c
3090 static ssize_t ioc_weight_write(struct kernfs_open_file *of, char *buf,
3091 size_t nbytes, loff_t off)
3092 {
3093 struct blkcg *blkcg = css_to_blkcg(of_css(of));
3094 struct ioc_cgrp *iocc = blkcg_to_iocc(blkcg);
3095 struct blkg_conf_ctx ctx;
3096 struct ioc_now now;
3097 struct ioc_gq *iocg;
3098 u32 v;
3099 int ret;
3100
3101 if (!strchr(buf, ':')) {
3102 struct blkcg_gq *blkg;
3103
3104 if (!sscanf(buf, "default %u", &v) && !sscanf(buf, "%u", &v))
3105 return -EINVAL;
3106
3107 if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
3108 return -EINVAL;
3109
3110 spin_lock_irq(&blkcg->lock);
Here we disable IRQs.
3111 iocc->dfl_weight = v * WEIGHT_ONE;
3112 hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
3113 struct ioc_gq *iocg = blkg_to_iocg(blkg);
3114
3115 if (iocg) {
3116 spin_lock(&iocg->ioc->lock);
So this is fine.
3117 ioc_now(iocg->ioc, &now);
3118 weight_updated(iocg, &now);
3119 spin_unlock(&iocg->ioc->lock);
3120 }
3121 }
3122 spin_unlock_irq(&blkcg->lock);
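And IRQs are re-enabled here.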
3123
3124 return nbytes;
3125 }
3126
3127 blkg_conf_init(&ctx, buf);
3128
3129 ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, &ctx);
3130 if (ret)
3131 goto err;
3132
3133 iocg = blkg_to_iocg(ctx.blkg);
3134
3135 if (!strncmp(ctx.body, "default", 7)) {
3136 v = 0;
3137 } else {
3138 if (!sscanf(ctx.body, "%u", &v))
3139 goto einval;
3140 if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
3141 goto einval;
3142 }
3143
3144 spin_lock(&iocg->ioc->lock);
But why is this not spin_lock_irq()? I haven't analyzed this so maybe it's
fine.
3145 iocg->cfg_weight = v * WEIGHT_ONE;
3146 ioc_now(iocg->ioc, &now);
3147 weight_updated(iocg, &now);
3148 spin_unlock(&iocg->ioc->lock);
3149
3150 blkg_conf_exit(&ctx);
3151 return nbytes;
3152
3153 einval:
3154 ret = -EINVAL;
3155 err:
3156 blkg_conf_exit(&ctx);
3157 return ret;
3158 }
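If the warning is real (i.e. we really can reach line 3144 with IRQs
enabled), then the minimal fix would presumably be something like this
(untested):

	spin_lock_irq(&iocg->ioc->lock);
	iocg->cfg_weight = v * WEIGHT_ONE;
	ioc_now(iocg->ioc, &now);
	weight_updated(iocg, &now);
	spin_unlock_irq(&iocg->ioc->lock);

On the other hand, if blkg_conf_prep() is guaranteed to return with IRQs
disabled, then the plain spin_lock() is correct and a
lockdep_assert_irqs_disabled() here would document that.  I haven't checked
which it is.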
regards,
dan carpenter