Message-ID: <d54e8489-6ba0-2ad9-4eb5-c8d3f847f34a@gmail.com>
Date:   Fri, 10 Aug 2018 10:36:00 -0700
From:   Florian Fainelli <f.fainelli@...il.com>
To:     "Lad, Prabhakar" <prabhakar.csengg@...il.com>,
        Andrew Lunn <andrew@...n.ch>
Cc:     netdev <netdev@...r.kernel.org>
Subject: Re: [Query]: DSA Understanding

On 08/10/2018 04:26 AM, Lad, Prabhakar wrote:
> Hi Andrew,
> 
> On Thu, Aug 9, 2018 at 6:23 PM Andrew Lunn <andrew@...n.ch> wrote:
>>
>>> It's coming from the switch port lan4. I have attached the png, where
>>> C4:F3:12:08:FE:7F is the MAC of lan4, which broadcasts to
>>> ff:ff:ff:ff:ff:ff and causes the RX counter on the
>>> PC to go up.
>>
>> So, big packets are making it from the switch to the PC. But the small
>> ARP packets are not.
>>
>> This is what Florian was suggesting.
>>
>> ARP packets are smaller than 64 bytes, which is the minimum packet
>> size for Ethernet. Any packets smaller than 64 bytes are called runt
>> packets. They have to be padded up to 64 bytes in order to make them
>> valid. Otherwise the destination, or any switch along the path, might
>> throw them away.
>>
>> What could be happening is that the CPSW driver or hardware is padding
>> the packet to 64 bytes. But that packet has a DSA header in it. The
>> switch removes the header, recalculates the checksum and sends the
>> packet. It is now either 4 or 8 bytes smaller, depending on which DSA
>> header was used. It then becomes a runt packet.
>>
> Thank you for the clarification, this really helped me out.
> 
>> Florian had to fix this problem recently.
>>
>> http://patchwork.ozlabs.org/patch/836534/
>>
> But it seems this patch was never accepted; instead, if I am
> understanding it correctly, brcm_tag_xmit_ll() does the padding.
> Similarly, ksz_xmit() takes care of the padding.

net/dsa/tag_brcm.c ended up doing the padding because that was a more
generic and central location:

https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/net/dsa/tag_brcm.c#n73

> 
>> You probably need something similar for the cpsw.
>>
> Looking at the xmit function in tag_ksz.c, this is taken care of.

I agree, this should be padding packets correctly. Can you still
instrument cpsw to make sure that what arrives at its ndo_start_xmit() is
ETH_ZLEN + tag_len bytes or more?

> 
> /* For Ingress (Host -> KSZ), 2 bytes are added before FCS.
>  * ---------------------------------------------------------------------------
>  * DA(6bytes)|SA(6bytes)|....|Data(nbytes)|tag0(1byte)|tag1(1byte)|FCS(4bytes)
>  * ---------------------------------------------------------------------------
>  * tag0 : Prioritization (not used now)
>  * tag1 : each bit represents port (eg, 0x01=port1, 0x02=port2, 0x10=port5)
>  *
>  * For Egress (KSZ -> Host), 1 byte is added before FCS.
>  * ---------------------------------------------------------------------------
>  * DA(6bytes)|SA(6bytes)|....|Data(nbytes)|tag0(1byte)|FCS(4bytes)
>  * ---------------------------------------------------------------------------
>  * tag0 : zero-based value represents port
>  *      (eg, 0x00=port1, 0x02=port3, 0x06=port7)
>  */
> 
> #define    KSZ_INGRESS_TAG_LEN    2
> #define    KSZ_EGRESS_TAG_LEN    1
> 
> static struct sk_buff *ksz_xmit(struct sk_buff *skb, struct net_device *dev)
> {
>     struct dsa_slave_priv *p = netdev_priv(dev);
>     struct sk_buff *nskb;
>     int padlen;
>     u8 *tag;
> 
>     padlen = (skb->len >= ETH_ZLEN) ? 0 : ETH_ZLEN - skb->len;
> 
>     if (skb_tailroom(skb) >= padlen + KSZ_INGRESS_TAG_LEN) {
>         /* Let dsa_slave_xmit() free skb */
>         if (__skb_put_padto(skb, skb->len + padlen, false))
>             return NULL;
> 
>         nskb = skb;
>     } else {
>         nskb = alloc_skb(NET_IP_ALIGN + skb->len +
>                  padlen + KSZ_INGRESS_TAG_LEN, GFP_ATOMIC);
>         if (!nskb)
>             return NULL;
>         skb_reserve(nskb, NET_IP_ALIGN);
> 
>         skb_reset_mac_header(nskb);
>         skb_set_network_header(nskb,
>                        skb_network_header(skb) - skb->head);
>         skb_set_transport_header(nskb,
>                      skb_transport_header(skb) - skb->head);
>         skb_copy_and_csum_dev(skb, skb_put(nskb, skb->len));
> 
>         /* Let skb_put_padto() free nskb, and let dsa_slave_xmit() free
>          * skb
>          */
>         if (skb_put_padto(nskb, nskb->len + padlen))
>             return NULL;
> 
>         consume_skb(skb);
>     }
> 
>     tag = skb_put(nskb, KSZ_INGRESS_TAG_LEN);
>     tag[0] = 0;
>     tag[1] = 1 << p->dp->index; /* destination port */
> 
>     return nskb;
> }
> 
> Cheers,
> --Prabhakar Lad
> 


-- 
Florian
