Message-ID: <CACGkMEuU18fn8oC=DPNP3Dk=uE0Rutwib7jkoXEZXV+H4H6VcA@mail.gmail.com>
Date: Thu, 7 Dec 2023 12:19:00 +0800
From: Jason Wang <jasowang@...hat.com>
To: Heng Qi <hengqi@...ux.alibaba.com>
Cc: Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
virtualization@...ts.linux-foundation.org, mst@...hat.com, kuba@...nel.org,
yinjun.zhang@...igine.com, edumazet@...gle.com, davem@...emloft.net,
hawk@...nel.org, john.fastabend@...il.com, ast@...nel.org, horms@...nel.org,
xuanzhuo@...ux.alibaba.com
Subject: Re: [PATCH net-next v6 4/5] virtio-net: add spin lock for ctrl cmd access
On Wed, Dec 6, 2023 at 9:03 PM Heng Qi <hengqi@...ux.alibaba.com> wrote:
>
> On 2023/12/6 8:27 PM, Paolo Abeni wrote:
> > On Tue, 2023-12-05 at 19:05 +0800, Heng Qi wrote:
> >> On 2023/12/5 4:35 PM, Jason Wang wrote:
> >>> On Tue, Dec 5, 2023 at 4:02 PM Heng Qi <hengqi@...ux.alibaba.com> wrote:
> >>>> Currently, access to the ctrl cmd is globally protected via rtnl_lock and
> >>>> works fine. But if the dim work's access to the ctrl cmd also takes
> >>>> rtnl_lock, a deadlock may occur due to cancel_work_sync for the dim work.
> >>> Can you explain why?
> >> For example, during the bus unbind operation, the following call stack
> >> occurs:
> >> virtnet_remove -> unregister_netdev -> rtnl_lock[1] -> virtnet_close ->
> >> cancel_work_sync -> virtnet_rx_dim_work -> rtnl_lock[2] (deadlock occurs).
> >>
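> >> (A minimal sketch of the two sides of that deadlock, for illustration;
> >> declarations and bodies are elided:)
> >>
> >> static int virtnet_close(struct net_device *dev)
> >> {
> >>         /* The caller (unregister_netdev) already holds rtnl_lock [1],
> >>          * and cancel_work_sync() waits for the work to finish... */
> >>         cancel_work_sync(&vi->rq[i].dim.work);
> >>         ...
> >> }
> >>
> >> static void virtnet_rx_dim_work(struct work_struct *work)
> >> {
> >>         rtnl_lock();    /* [2] ...while the work blocks here forever */
> >>         ...
> >>         rtnl_unlock();
> >> }
> >>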
> >>>> Therefore, giving the ctrl cmd its own lock, as a separate protection
> >>>> object, is the solution and the basis for the next patch.
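> >>>>
> >>>> (A sketch of the idea, for illustration -- assuming a dedicated lock
> >>>> field added to struct virtnet_info; the names are hypothetical:)
> >>>>
> >>>>         spinlock_t ctrl_lock;   /* serializes cvq/ctrl cmd access */
> >>>>
> >>>>         spin_lock(&vi->ctrl_lock);
> >>>>         ok = virtnet_send_command(vi, class, cmd, out);
> >>>>         spin_unlock(&vi->ctrl_lock);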
> >>> Let's not do that. Reasons are:
> >>>
> >>> 1) virtnet_send_command() may wait for cvq commands for an indefinite time
> >> Yes, I took that into consideration. But ndo_set_rx_mode's need for
> >> an atomic context rules out using a mutex.
> >>
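> >> (For context: ndo_set_rx_mode is invoked from dev_set_rx_mode() with
> >> netif_addr_lock_bh() held, i.e. in atomic context, so the callback
> >> must not sleep -- a mutex_lock() there would be a bug:)
> >>
> >>         /* net/core/dev.c */
> >>         void dev_set_rx_mode(struct net_device *dev)
> >>         {
> >>                 netif_addr_lock_bh(dev);        /* atomic from here on */
> >>                 __dev_set_rx_mode(dev);         /* -> ndo_set_rx_mode() */
> >>                 netif_addr_unlock_bh(dev);
> >>         }
> >>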
> >>> 2) holding locks may complicate future hardening work around cvq
> >> Agreed, but I haven't been able to think of a better way besides
> >> passing the lock. Do you have any better ideas or suggestions?
> > What about:
> >
> > - using the rtnl lock only
> > - virtnet_close() invokes cancel_work(), without flushing the work
> > - virtnet_remove() calls flush_work() after unregister_netdev(),
> > outside the rtnl lock
> >
> > That should prevent both the deadlock and the UaF (use-after-free).
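> >
> > (A minimal sketch of that ordering, for illustration; declarations and
> > error handling are elided:)
> >
> > static int virtnet_close(struct net_device *dev)
> > {
> >         /* Under rtnl_lock: only mark the work canceled, do not wait,
> >          * so we cannot deadlock with a work that wants rtnl_lock. */
> >         for (i = 0; i < vi->max_queue_pairs; i++)
> >                 cancel_work(&vi->rq[i].dim.work);
> >         ...
> > }
> >
> > static void virtnet_remove(struct virtio_device *vdev)
> > {
> >         struct virtnet_info *vi = vdev->priv;
> >
> >         unregister_netdev(vi->dev);     /* takes and releases rtnl_lock */
> >
> >         /* Outside the rtnl lock: now it is safe to wait for any
> >          * still-running dim work, preventing the use-after-free. */
> >         for (i = 0; i < vi->max_queue_pairs; i++)
> >                 flush_work(&vi->rq[i].dim.work);
> >         ...
> > }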
>
>
> Hi, Paolo and Jason!
>
> Thank you very much for your helpful suggestions! I found another
> solution[1], based on the ideas of rtnl_trylock and refill_work,
> which works very well:
>
> [1]
> +static void virtnet_rx_dim_work(struct work_struct *work)
> +{
> +        struct dim *dim = container_of(work, struct dim, work);
> +        struct receive_queue *rq = container_of(dim,
> +                                                struct receive_queue, dim);
> +        struct virtnet_info *vi = rq->vq->vdev->priv;
> +        struct net_device *dev = vi->dev;
> +        struct dim_cq_moder update_moder;
> +        int i, qnum, err;
> +
> +        if (!rtnl_trylock())
> +                return;
Don't we need to reschedule here? Something like:

        if (rq->dim_enabled)
                schedule_work(&dim->work);

?

Thanks
> +
> +        for (i = 0; i < vi->curr_queue_pairs; i++) {
> +                rq = &vi->rq[i];
> +                dim = &rq->dim;
> +                qnum = rq - vi->rq;
> +
> +                if (!rq->dim_enabled)
> +                        continue;
> +
> +                update_moder = net_dim_get_rx_moderation(dim->mode,
> +                                                         dim->profile_ix);
> +                if (update_moder.usec != rq->intr_coal.max_usecs ||
> +                    update_moder.pkts != rq->intr_coal.max_packets) {
> +                        err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
> +                                                               update_moder.usec,
> +                                                               update_moder.pkts);
> +                        if (err)
> +                                pr_debug("%s: Failed to send dim parameters on rxq%d\n",
> +                                         dev->name, qnum);
> +                        dim->state = DIM_START_MEASURE;
> +                }
> +        }
> +
> +        rtnl_unlock();
> +}
>
>
> In addition, I tried another optimization[2], but, perhaps because the
> work is scheduled only sparsely, its retry condition is almost always
> satisfied, which hurts performance; so [1] is the final solution:
>
> [2]
>
> +static void virtnet_rx_dim_work(struct work_struct *work)
> +{
> +        struct dim *dim = container_of(work, struct dim, work);
> +        struct receive_queue *rq = container_of(dim,
> +                                                struct receive_queue, dim);
> +        struct virtnet_info *vi = rq->vq->vdev->priv;
> +        struct net_device *dev = vi->dev;
> +        struct dim_cq_moder update_moder;
> +        int i, qnum, err, count;
> +
> +        if (!rtnl_trylock())
> +                return;
> +retry:
> +        count = vi->curr_queue_pairs;
> +        for (i = 0; i < vi->curr_queue_pairs; i++) {
> +                rq = &vi->rq[i];
> +                dim = &rq->dim;
> +                qnum = rq - vi->rq;
> +                update_moder = net_dim_get_rx_moderation(dim->mode,
> +                                                         dim->profile_ix);
> +                if (update_moder.usec != rq->intr_coal.max_usecs ||
> +                    update_moder.pkts != rq->intr_coal.max_packets) {
> +                        --count;
> +                        err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
> +                                                               update_moder.usec,
> +                                                               update_moder.pkts);
> +                        if (err)
> +                                pr_debug("%s: Failed to send dim parameters on rxq%d\n",
> +                                         dev->name, qnum);
> +                        dim->state = DIM_START_MEASURE;
> +                }
> +        }
> +
> +        if (need_resched()) {
> +                rtnl_unlock();
> +                schedule();
> +        }
> +
> +        if (count)
> +                goto retry;
> +
> +        rtnl_unlock();
> +}
>
> Thanks a lot!
>
> >
> > Side note: for this specific case, any functional test with a
> > CONFIG_LOCKDEP-enabled build should suffice to catch the deadlock
> > scenario above.
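> >
> > (For reference, a minimal config fragment; CONFIG_LOCKDEP itself is
> > selected indirectly:)
> >
> >         CONFIG_PROVE_LOCKING=y          # selects CONFIG_LOCKDEP
> >         CONFIG_DEBUG_LOCKDEP=y          # optional extra self-checks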
> >
> > Cheers,
> >
> > Paolo
>