Message-ID: <CA+FuTSddjtB0sbUjE59CLqC1vXXnvsV9od5xD7sQkyE5cG7WFA@mail.gmail.com>
Date:   Tue, 27 Oct 2020 17:21:52 -0400
From:   Willem de Bruijn <willemdebruijn.kernel@...il.com>
To:     Loic Poulain <loic.poulain@...aro.org>
Cc:     Jakub Kicinski <kuba@...nel.org>,
        David Miller <davem@...emloft.net>,
        manivannan.sadhasivam@...aro.org, hemantk@...eaurora.org,
        Network Development <netdev@...r.kernel.org>,
        linux-arm-msm@...r.kernel.org, jhugo@...eaurora.org,
        bbhatt@...eaurora.org
Subject: Re: [PATCH v7 2/2] net: Add mhi-net driver

On Tue, Oct 27, 2020 at 4:33 AM Loic Poulain <loic.poulain@...aro.org> wrote:
>
> This patch adds a new network driver implementing MHI transport for
> network packets. Packets can be in any format, though QMAP (rmnet)
> is the usual protocol (flow control + PDN mux).
>
> It supports two MHI devices: IP_HW0, which is the path to the IPA
> (IP accelerator) on the qcom modem, and IP_SW0, which is the
> software-driven IP path (to the modem CPU).
>
> Signed-off-by: Loic Poulain <loic.poulain@...aro.org>

> +static int mhi_ndo_xmit(struct sk_buff *skb, struct net_device *ndev)
> +{
> +       struct mhi_net_dev *mhi_netdev = netdev_priv(ndev);
> +       struct mhi_device *mdev = mhi_netdev->mdev;
> +       int err;
> +
> +       /* mhi_queue_skb is not thread-safe, but xmit is serialized by the
> +        * network core. Once MHI core will be thread save, migrate to

nit: thread-safe.

I also wonder whether you'd gain much from converting to
driver-internal locking; perhaps this comment is premature?
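
For reference, driver-internal locking would be roughly the sketch
below (assuming a spinlock_t tx_lock were added to struct mhi_net_dev;
illustrative only, untested):

	/* hypothetical: tx_lock would need to be added to struct mhi_net_dev */
	spin_lock_bh(&mhi_netdev->tx_lock);
	err = mhi_queue_skb(mdev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT);
	spin_unlock_bh(&mhi_netdev->tx_lock);

Though as long as xmit is serialized by the network core, such a lock
would only matter if other contexts start calling into the TX path.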

> +static void mhi_net_rx_refill_work(struct work_struct *work)
> +{
> +       struct mhi_net_dev *mhi_netdev = container_of(work, struct mhi_net_dev,
> +                                                     rx_refill.work);
> +       struct net_device *ndev = mhi_netdev->ndev;
> +       struct mhi_device *mdev = mhi_netdev->mdev;
> +       int size = READ_ONCE(ndev->mtu);
> +       struct sk_buff *skb;
> +       int err;
> +
> +       do {
> +               skb = netdev_alloc_skb(ndev, size);
> +               if (unlikely(!skb))
> +                       break;
> +
> +               err = mhi_queue_skb(mdev, DMA_FROM_DEVICE, skb, size, MHI_EOT);
> +               if (err) {
> +                       if (unlikely(err != -ENOMEM))
> +                               net_err_ratelimited("%s: Failed to queue RX buf (%d)\n",
> +                                                   ndev->name, err);
> +                       kfree_skb(skb);
> +                       break;
> +               }
> +
> +               atomic_inc(&mhi_netdev->stats.rx_queued);
> +
> +               /* Do not hog the CPU if rx buffers are completed faster than
> +                * queued (unlikely).
> +                */
> +               cond_resched();
> +       } while (1);

This function has to allocate and then kfree_skb one skb too many just
to break out of the loop. Can it not detect that the queue is full
based on rx_queued instead?
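
E.g. bounding the loop on rx_queued vs. the ring size would avoid the
throwaway allocation. A rough sketch (assuming rx_queue_sz reflects the
ring depth set in probe):

	while (atomic_read(&mhi_netdev->stats.rx_queued) < mhi_netdev->rx_queue_sz) {
		struct sk_buff *skb = netdev_alloc_skb(ndev, size);

		if (unlikely(!skb))
			break;

		if (mhi_queue_skb(mdev, DMA_FROM_DEVICE, skb, size, MHI_EOT)) {
			kfree_skb(skb);
			break;
		}

		atomic_inc(&mhi_netdev->stats.rx_queued);

		/* Do not hog the CPU */
		cond_resched();
	}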

> +
> +       /* If we're still starved of rx buffers, reschedule latter */

nit: later

> +       if (unlikely(!atomic_read(&mhi_netdev->stats.rx_queued)))
> +               schedule_delayed_work(&mhi_netdev->rx_refill, HZ / 2);

what is the rationale for HZ / 2?

> +}
> +
> +static int mhi_net_probe(struct mhi_device *mhi_dev,
> +                        const struct mhi_device_id *id)
> +{
> +       const char *netname = (char *)id->driver_data;
> +       struct mhi_net_dev *mhi_netdev;
> +       struct net_device *ndev;
> +       struct device *dev = &mhi_dev->dev;
> +       int err;
> +
> +       ndev = alloc_netdev(sizeof(*mhi_netdev), netname, NET_NAME_PREDICTABLE,
> +                           mhi_net_setup);
> +       if (!ndev)
> +               return -ENOMEM;
> +
> +       mhi_netdev = netdev_priv(ndev);
> +       dev_set_drvdata(dev, mhi_netdev);
> +       mhi_netdev->ndev = ndev;
> +       mhi_netdev->mdev = mhi_dev;
> +       SET_NETDEV_DEV(ndev, &mhi_dev->dev);
> +
> +       /* All MHI net channels have 128 ring elements (at least for now) */
> +       mhi_netdev->rx_queue_sz = 128;
> +
> +       INIT_DELAYED_WORK(&mhi_netdev->rx_refill, mhi_net_rx_refill_work);
> +       u64_stats_init(&mhi_netdev->stats.rx_syncp);
> +       u64_stats_init(&mhi_netdev->stats.tx_syncp);
> +
> +       /* Start MHI channels */
> +       err = mhi_prepare_for_transfer(mhi_dev);
> +       if (err)
> +               goto out_err;
> +
> +       err = register_netdev(ndev);
> +       if (err) {
> +               mhi_unprepare_from_transfer(mhi_dev);
> +               goto out_err;

It's a bit odd to combine an inline cleanup call with the jump-label
error path. I'd add an out_register_err: label below, e.g. as in the
sketch after the quoted hunk.
> +       }
> +
> +       return 0;
> +
> +out_err:
> +       free_netdev(ndev);
> +       return err;
> +}
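
I.e., something along these lines (sketch only):

	err = register_netdev(ndev);
	if (err)
		goto out_register_err;

	return 0;

out_register_err:
	mhi_unprepare_from_transfer(mhi_dev);
out_err:
	free_netdev(ndev);
	return err;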
