Date:   Fri, 30 Nov 2018 14:29:55 +0900
From:   Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
To:     Jason Wang <jasowang@...hat.com>,
        Prashant Bhole <bhole_prashant_q7@....ntt.co.jp>,
        "David S . Miller" <davem@...emloft.net>
Cc:     "Michael S . Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
        dsa@...ulusnetworks.com
Subject: Re: [PATCH net] tun: remove skb access after netif_receive_skb

On 2018/11/30 11:40, Jason Wang wrote:
> On 2018/11/30 10:30 AM, Prashant Bhole wrote:
>> In tun.c, skb->len was accessed for stats accounting after a call to
>> netif_receive_skb(). The skb cannot be accessed after this call
>> because the buffer may have been dropped.
>>
>> One fix for this bug would be to store skb->len in a local variable
>> and use that after netif_receive_skb(). IMO using the xdp data size
>> to account bytes is better, because the input to tun_xdp_one() is an
>> xdp_buff.
>>
>> Hence this patch:
>> - fixes a bug by removing skb access after netif_receive_skb()
>> - uses xdp data size for accounting bytes
...
>> Fixes: 043d222f93ab ("tuntap: accept an array of XDP buffs through sendmsg()")
>> Reviewed-by: Toshiaki Makita <makita.toshiaki@....ntt.co.jp>
>> Signed-off-by: Prashant Bhole <bhole_prashant_q7@....ntt.co.jp>
>> ---
>>   drivers/net/tun.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
>> index e244f5d7512a..6e388792c0a8 100644
>> --- a/drivers/net/tun.c
>> +++ b/drivers/net/tun.c
>> @@ -2385,6 +2385,7 @@ static int tun_xdp_one(struct tun_struct *tun,
>>                  struct tun_file *tfile,
>>                  struct xdp_buff *xdp, int *flush)
>>   {
>> +    unsigned int datasize = xdp->data_end - xdp->data;
>>       struct tun_xdp_hdr *hdr = xdp->data_hard_start;
>>       struct virtio_net_hdr *gso = &hdr->gso;
>>       struct tun_pcpu_stats *stats;
>> @@ -2461,7 +2462,7 @@ static int tun_xdp_one(struct tun_struct *tun,
>>       stats = get_cpu_ptr(tun->pcpu_stats);
>>       u64_stats_update_begin(&stats->syncp);
>>       stats->rx_packets++;
>> -    stats->rx_bytes += skb->len;
>> +    stats->rx_bytes += datasize;
>>       u64_stats_update_end(&stats->syncp);
>>       put_cpu_ptr(stats);
>>   
> 
> 
> Good catch, but you probably need to calculate the datasize after XDP
> processing, since the XDP program may modify the packet length.

(+CC David Ahern who may be interested in this area.)

I'd rather think we should calculate it before XDP.
I checked several drivers' behavior. mlx5, bnxt and qede use hardware
counters for rx bytes, which means the size is calculated before XDP.
nfp calculates it in software, but also before XDP. On the other hand,
Intel drivers use skb->len. So currently the bytes counters do not look
consistent across drivers, but I think calculating before XDP is the
more common approach.

-- 
Toshiaki Makita
