Date:   Tue, 22 Mar 2022 15:25:36 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Horatiu Vultur <horatiu.vultur@...rochip.com>
Cc:     <linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>,
        <davem@...emloft.net>, <michael@...le.cc>,
        <UNGLinuxDriver@...rochip.com>
Subject: Re: [PATCH net-next v2 3/4] net: lan966x: Add FDMA functionality

On Tue, 22 Mar 2022 22:04:02 +0100 Horatiu Vultur wrote:
> > > +static struct sk_buff *lan966x_fdma_rx_get_frame(struct lan966x_rx *rx)
> > > +{
> > > +     struct lan966x *lan966x = rx->lan966x;
> > > +     u64 src_port, timestamp;
> > > +     struct sk_buff *new_skb;
> > > +     struct lan966x_db *db;
> > > +     struct sk_buff *skb;
> > > +
> > > +     /* Check if there is any data */
> > > +     db = &rx->dcbs[rx->dcb_index].db[rx->db_index];
> > > +     if (unlikely(!(db->status & FDMA_DCB_STATUS_DONE)))
> > > +             return NULL;
> > > +
> > > +     /* Get the received frame and unmap it */
> > > +     skb = rx->skb[rx->dcb_index][rx->db_index];
> > > +     dma_unmap_single(lan966x->dev, (dma_addr_t)db->dataptr,
> > > +                      FDMA_DCB_STATUS_BLOCKL(db->status),
> > > +                      DMA_FROM_DEVICE);
> > > +
> > > +     /* Allocate a new skb and map it */
> > > +     new_skb = lan966x_fdma_rx_alloc_skb(rx, db);
> > > +     if (unlikely(!new_skb))
> > > +             return NULL;  
> > 
> > So how is memory pressure handled, exactly? Looks like it's handled
> > the same as if the ring was empty, so the IRQ is going to get re-raised
> > immediately, or never raised again?  
> 
> That is correct, the IRQ is going to get re-raised.
> But I am not sure that this is the correct approach. Do you have any
> suggestions for how it should be handled?

In my experience it's better to let the ring drain and have a service
task kick in some form of refill. Usually when the machine is out of
memory, the last thing it needs is to get stormed by network IRQs. Some
form of back-off would be good, at least?
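
Completely untested sketch, just to illustrate (the refill helper and
the delayed work hookup are made up, not taken from your patch):

/* needs <linux/workqueue.h> */
struct lan966x_rx_refill {
        struct lan966x_rx *rx;          /* ring that ran out of skbs */
        struct delayed_work work;       /* deferred refill */
};

static void lan966x_fdma_rx_refill_work(struct work_struct *work)
{
        struct delayed_work *dwork = to_delayed_work(work);
        struct lan966x_rx_refill *refill =
                container_of(dwork, struct lan966x_rx_refill, work);

        /* Try to repopulate the ring; if allocations still fail, back
         * off and retry later instead of re-raising the IRQ right away.
         * lan966x_fdma_rx_try_refill() is a placeholder name here.
         */
        if (!lan966x_fdma_rx_try_refill(refill->rx))
                schedule_delayed_work(&refill->work, msecs_to_jiffies(10));
}

INIT_DELAYED_WORK() at setup time, and schedule_delayed_work() from the
poll path when lan966x_fdma_rx_alloc_skb() fails, so the allocation
retry is rate limited instead of spinning on the interrupt.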

> > > +     return counter;
> > > +}
> > > +
> > > +irqreturn_t lan966x_fdma_irq_handler(int irq, void *args)
> > > +{
> > > +     struct lan966x *lan966x = args;
> > > +     u32 db, err, err_type;
> > > +
> > > +     db = lan_rd(lan966x, FDMA_INTR_DB);
> > > +     err = lan_rd(lan966x, FDMA_INTR_ERR);  
> > 
> > Hm, IIUC you request a threaded IRQ for this. Why?
> > The register accesses can't sleep because you poke
> > them from napi_poll as well...  
> 
> Good point. What about the WARN?

Which one? Did something generate a warning without the threaded IRQ?
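
FWIW the non-threaded variant I had in mind is just something like the
below (the napi field and the ack/mask details are assumptions on my
side, untested):

static irqreturn_t lan966x_fdma_irq_handler(int irq, void *args)
{
        struct lan966x *lan966x = args;
        u32 db;

        db = lan_rd(lan966x, FDMA_INTR_DB);
        if (db) {
                /* ack/mask the doorbell sources here, then let NAPI do
                 * the actual work; plain register accesses are fine in
                 * hard IRQ context, so no threaded handler is needed.
                 */
                napi_schedule(&lan966x->napi);
        }

        return IRQ_HANDLED;
}

registered with devm_request_irq(dev, irq, lan966x_fdma_irq_handler, 0,
"fdma irq", lan966x) rather than devm_request_threaded_irq().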

> > > +int lan966x_fdma_xmit(struct sk_buff *skb, __be32 *ifh, struct net_device *dev)
> > > +{
> > > +     struct lan966x_port *port = netdev_priv(dev);
> > > +     struct lan966x *lan966x = port->lan966x;
> > > +     struct lan966x_tx_dcb_buf *next_dcb_buf;
> > > +     struct lan966x_tx_dcb *next_dcb, *dcb;
> > > +     struct lan966x_tx *tx = &lan966x->tx;
> > > +     struct lan966x_db *next_db;
> > > +     int needed_headroom;
> > > +     int needed_tailroom;
> > > +     dma_addr_t dma_addr;
> > > +     int next_to_use;
> > > +     int err;
> > > +
> > > +     /* Get next index */
> > > +     next_to_use = lan966x_fdma_get_next_dcb(tx);
> > > +     if (next_to_use < 0) {
> > > +             netif_stop_queue(dev);
> > > +             err = NETDEV_TX_BUSY;
> > > +             goto out;
> > > +     }
> > > +
> > > +     if (skb_put_padto(skb, ETH_ZLEN)) {
> > > +             dev->stats.tx_dropped++;  
> > 
> > It's preferred not to use the old dev->stats, but I guess you already
> > do so :( This is under some locks, right? No chance for another queue
> > or port to try to touch those stats at the same time?  
> 
> What is the preferred way of doing it?
> Yes, it is under a lock.

Drivers can put the counters they need in their own structures and then
implement ndo_get_stats64 to copy them into the expected format.
If you have locks and there's no risk of races - I guess it's fine.
Unlikely we'll ever convert all the drivers, anyway.
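
Roughly something like this, as a sketch (struct and field names are
made up, and a real version would cover all the counters and call
u64_stats_init() on the syncp at probe time):

/* needs <linux/u64_stats_sync.h> */
struct lan966x_port_stats {
        u64 tx_dropped;
        struct u64_stats_sync syncp;
};

static void lan966x_port_count_tx_drop(struct lan966x_port_stats *stats)
{
        /* replaces dev->stats.tx_dropped++ on the drop path */
        u64_stats_update_begin(&stats->syncp);
        stats->tx_dropped++;
        u64_stats_update_end(&stats->syncp);
}

static void lan966x_port_get_stats64(struct net_device *dev,
                                     struct rtnl_link_stats64 *s)
{
        struct lan966x_port *port = netdev_priv(dev);
        struct lan966x_port_stats *stats = &port->stats; /* assumed field */
        unsigned int start;

        do {
                start = u64_stats_fetch_begin(&stats->syncp);
                s->tx_dropped = stats->tx_dropped;
        } while (u64_stats_fetch_retry(&stats->syncp, start));
}

and .ndo_get_stats64 = lan966x_port_get_stats64 in the net_device_ops.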
