Date: Wed, 27 Mar 2024 20:00:00 +0800
From: Heng Qi <hengqi@...ux.alibaba.com>
To: Dan Jurgens <danielj@...dia.com>
Cc: "mst@...hat.com" <mst@...hat.com>,
 "jasowang@...hat.com" <jasowang@...hat.com>,
 "xuanzhuo@...ux.alibaba.com" <xuanzhuo@...ux.alibaba.com>,
 "virtualization@...ts.linux.dev" <virtualization@...ts.linux.dev>,
 "davem@...emloft.net" <davem@...emloft.net>,
 "edumazet@...gle.com" <edumazet@...gle.com>,
 "kuba@...nel.org" <kuba@...nel.org>, "pabeni@...hat.com"
 <pabeni@...hat.com>, Jiri Pirko <jiri@...dia.com>,
 "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 4/4] virtio_net: Remove rtnl lock protection of
 command buffers



On 2024/3/27 10:10 AM, Heng Qi wrote:
>
>
> On 2024/3/26 11:18 PM, Dan Jurgens wrote:
>>> From: Heng Qi <hengqi@...ux.alibaba.com>
>>> Sent: Tuesday, March 26, 2024 3:55 AM
>>> To: Dan Jurgens <danielj@...dia.com>; netdev@...r.kernel.org
>>> Cc: mst@...hat.com; jasowang@...hat.com; xuanzhuo@...ux.alibaba.com;
>>> virtualization@...ts.linux.dev; davem@...emloft.net;
>>> edumazet@...gle.com; kuba@...nel.org; pabeni@...hat.com; Jiri Pirko
>>> <jiri@...dia.com>
>>> Subject: Re: [PATCH net-next 4/4] virtio_net: Remove rtnl lock 
>>> protection of
>>> command buffers
>>>
>>>
>>>
>>> On 2024/3/26 5:49 AM, Daniel Jurgens wrote:
>>>> The rtnl lock is no longer needed to protect the control buffer and
>>>> command VQ.
>>>>
>>>> Signed-off-by: Daniel Jurgens <danielj@...dia.com>
>>>> Reviewed-by: Jiri Pirko <jiri@...dia.com>
>>>> ---
>>>>    drivers/net/virtio_net.c | 27 +++++----------------------
>>>>    1 file changed, 5 insertions(+), 22 deletions(-)
>>>>
>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>> index 41f8dc16ff38..d09ea20b16be 100644
>>>> --- a/drivers/net/virtio_net.c
>>>> +++ b/drivers/net/virtio_net.c
>>>> @@ -2639,14 +2639,12 @@ static void virtnet_stats(struct net_device *dev,
>>>>
>>>>    static void virtnet_ack_link_announce(struct virtnet_info *vi)
>>>>    {
>>>> -    rtnl_lock();
>>>>        if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_ANNOUNCE,
>>>>                      VIRTIO_NET_CTRL_ANNOUNCE_ACK, NULL))
>>>>            dev_warn(&vi->dev->dev, "Failed to ack link announce.\n");
>>>> -    rtnl_unlock();
>>>>    }
>>>>
>>>> -static int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>>>> +static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>>>>    {
>>>>        struct virtio_net_ctrl_mq *mq __free(kfree) = NULL;
>>>>        struct scatterlist sg;
>>>> @@ -2677,16 +2675,6 @@ static int _virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>>>>        return 0;
>>>>    }
>>>>
>>>> -static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>>>> -{
>>>> -    int err;
>>>> -
>>>> -    rtnl_lock();
>>>> -    err = _virtnet_set_queues(vi, queue_pairs);
>>>> -    rtnl_unlock();
>>>> -    return err;
>>>> -}
>>>> -
>>>>    static int virtnet_close(struct net_device *dev)
>>>>    {
>>>>        struct virtnet_info *vi = netdev_priv(dev);
>>>> @@ -3268,7 +3256,7 @@ static int virtnet_set_channels(struct net_device *dev,
>>>>            return -EINVAL;
>>>>
>>>>        cpus_read_lock();
>>>> -    err = _virtnet_set_queues(vi, queue_pairs);
>>>> +    err = virtnet_set_queues(vi, queue_pairs);
>>>>        if (err) {
>>>>            cpus_read_unlock();
>>>>            goto err;
>>>> @@ -3558,14 +3546,11 @@ static void virtnet_rx_dim_work(struct work_struct *work)
>>>>        struct dim_cq_moder update_moder;
>>>>        int i, qnum, err;
>>>>
>>>> -    if (!rtnl_trylock())
>>>> -        return;
>>>> -
>>> Does this guarantee that the synchronization is completely correct?
>>>
>>> The purpose of this patch set is to add a separate lock for ctrlq
>>> rather than reusing the RTNL lock.
>>> But the dim worker does not only use ctrlq; it also reads variables
>>> that are shared with interfaces such as .set_coalesce, .get_coalesce,
>>> etc.
>> It looks like there is a risk of a dirty read in the get (usecs 
>> updated, but not max_packets).
>
> Also dim_enabled.
>
> And later I need to make the dim commands asynchronous, which means
> that different dim workers will operate on a shared linked list.
>
> So we need a lock.

After removing the loop, maybe READ_ONCE/WRITE_ONCE will be enough?
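
To make this concrete, here is a rough sketch of what I have in mind
(field names follow the current driver, but please treat this as an
untested illustration only):

	/* dim worker (writer): publish the new values after the ctrlq
	 * command succeeds, without holding RTNL.
	 */
	WRITE_ONCE(rq->intr_coal.max_usecs, update_moder.usec);
	WRITE_ONCE(rq->intr_coal.max_packets, update_moder.pkts);

	/* .get_coalesce (reader) */
	ec->rx_coalesce_usecs = READ_ONCE(rq->intr_coal.max_usecs);
	ec->rx_max_coalesced_frames = READ_ONCE(rq->intr_coal.max_packets);

This avoids load/store tearing on each individual field, though the
usecs/packets pair can still be observed half-updated, so it only
really helps once each worker touches a single rxq.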

>
>>   In the set it will return -EINVAL when trying to adjust settings
>> other than DIM enable.  I can add a lock for this if you think it's
>> needed, but it doesn't seem like a major problem for debug info.
>
> Not just for debug info, but future extensions as well.
>
> These inconsistencies can introduce more trouble in the future.
>
> Regards,
> Heng
>
>>
>>
>>> In addition, assume there are 10 queues and each queue schedules its
>>> own dim worker at the same time; these 10 workers may then issue
>>> parameters to rxq0 10 times in parallel, precisely because the RTNL
>>> lock has been removed here.
>>>
>>> Therefore, once the RTNL lock is removed, the 'for loop' is no longer
>>> needed in virtnet_rx_dim_work, and the dim worker of each queue should
>>> only configure its own parameters.
>>>
>> Good point. I'll add a new patch to remove the for loop.
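
For reference, a loop-free worker might look roughly like the below.
Helper and field names follow the current driver, but this is only a
sketch to show the per-queue idea, not a tested patch:

static void virtnet_rx_dim_work(struct work_struct *work)
{
	struct dim *dim = container_of(work, struct dim, work);
	struct receive_queue *rq = container_of(dim,
			struct receive_queue, dim);
	struct virtnet_info *vi = rq->vq->vdev->priv;
	struct net_device *dev = vi->dev;
	struct dim_cq_moder update_moder;
	int qnum = rq - vi->rq;
	int err;

	if (!rq->dim_enabled)
		goto out;

	/* Touch only this rxq's parameters, so concurrent dim workers
	 * no longer race on other queues' state.
	 */
	update_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
	if (update_moder.usec != rq->intr_coal.max_usecs ||
	    update_moder.pkts != rq->intr_coal.max_packets) {
		err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
						       update_moder.usec,
						       update_moder.pkts);
		if (err)
			pr_debug("%s: Failed to send dim parameters on rxq%d\n",
				 dev->name, qnum);
	}
out:
	dim->state = DIM_START_MEASURE;
}
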
>>
>>> Alternatively, please keep the RTNL lock here.
>>>
>>> Regards,
>>> Heng
>>>
>>>>        /* Each rxq's work is queued by "net_dim()->schedule_work()"
>>>>         * in response to NAPI traffic changes. Note that dim->profile_ix
>>>>         * for each rxq is updated prior to the queuing action.
>>>>         * So we only need to traverse and update profiles for all rxqs
>>>> -     * in the work which is holding rtnl_lock.
>>>> +     * in the work.
>>>>         */
>>>>        for (i = 0; i < vi->curr_queue_pairs; i++) {
>>>>            rq = &vi->rq[i];
>>>> @@ -3587,8 +3572,6 @@ static void virtnet_rx_dim_work(struct work_struct *work)
>>>>                dim->state = DIM_START_MEASURE;
>>>>            }
>>>>        }
>>>> -
>>>> -    rtnl_unlock();
>>>>    }
>>>>
>>>>    static int virtnet_coal_params_supported(struct ethtool_coalesce *ec)
>>>> @@ -4036,7 +4019,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
>>>>            synchronize_net();
>>>>        }
>>>>
>>>> -    err = _virtnet_set_queues(vi, curr_qp + xdp_qp);
>>>> +    err = virtnet_set_queues(vi, curr_qp + xdp_qp);
>>>>        if (err)
>>>>            goto err;
>>>>        netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);
>>>> @@ -4852,7 +4835,7 @@ static int virtnet_probe(struct virtio_device *vdev)
>>>>
>>>>        virtio_device_ready(vdev);
>>>>
>>>> -    _virtnet_set_queues(vi, vi->curr_queue_pairs);
>>>> +    virtnet_set_queues(vi, vi->curr_queue_pairs);
>>>>
>>>>        /* a random MAC address has been assigned, notify the device.
>>>>         * We don't fail probe if VIRTIO_NET_F_CTRL_MAC_ADDR is not there
>

