Date:	Wed, 09 Jul 2008 14:15:50 -0700
From:	Max Krasnyansky <maxk@...lcomm.com>
To:	Christian Borntraeger <borntraeger@...ibm.com>
CC:	davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: [PATCH] tun: Persistent devices can get stuck in xoff state

Christian Borntraeger wrote:
> On Wednesday, 9 July 2008, Max Krasnyansky wrote:
>> The scenario goes like this. App stops reading from tun/tap.
>> TX queue gets full and driver does netif_stop_queue().
>> App closes fd and TX queue gets flushed as part of the cleanup.
>> Next time the app opens tun/tap and starts reading from it, but
>> the xoff state is not cleared. We're stuck.
>> Normally xoff state is cleared when netdev is brought up. But
>> in the case of persistent devices this happens only during
>> initial setup.
> 
> That's interesting. I believe we have seen exactly this behaviour
> with KVM and lots of preallocated tap devices.
Yeah, it's interesting given that it (the bug, that is) has been there for
at least 2-3 years now :).
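To make the failure mode easier to see outside of a KVM setup, here is a
minimal userspace sketch of the sequence described above. It is an
illustration, not something from the original thread: the device name
"tap0", the externally generated traffic, and the sleep interval are all
assumptions.

/*
 * Minimal sketch of the stuck-xoff scenario against a persistent tap
 * device.  Illustrative only.  Build with: gcc -o tap-xoff tap-xoff.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

static int tap_open(const char *name)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0) {
		perror("open /dev/net/tun");
		return -1;
	}

	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

	/* Attach to the device: this goes through the tun_set_iff() path the patch touches */
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		perror("TUNSETIFF");
		close(fd);
		return -1;
	}
	return fd;
}

int main(void)
{
	char buf[2048];
	int fd = tap_open("tap0");

	if (fd < 0)
		return 1;

	/* Keep the netdev around after close(), like the preallocated KVM taps */
	if (ioctl(fd, TUNSETPERSIST, 1) < 0)
		perror("TUNSETPERSIST");

	/*
	 * Bring tap0 up and push traffic at it from outside (e.g. flood
	 * ping an address routed via tap0) while this process does NOT
	 * read.  Once the TX queue fills, the driver calls
	 * netif_stop_queue() and the device enters the xoff state.
	 */
	sleep(30);

	close(fd);		/* queue is flushed, but xoff is left set */

	fd = tap_open("tap0");	/* re-attach to the persistent device */
	if (fd < 0)
		return 1;

	/*
	 * Without the fix this read() never sees traffic: the stack still
	 * considers the queue stopped.  With netif_wake_queue() added to
	 * tun_set_iff(), packets flow again after re-attach.
	 */
	if (read(fd, buf, sizeof(buf)) < 0)
		perror("read");

	close(fd);
	return 0;
}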

> [...]
>> +++ b/drivers/net/tun.c
>> @@ -576,6 +576,11 @@ static int tun_set_iff(struct file *file, struct ifreq *ifr)
>>  	file->private_data = tun;
>>  	tun->attached = 1;
>>
>> +	/* Make sure persistent devices do not get stuck in
>> +	 * xoff state */
>> +	if (netif_running(tun->dev))
>> +		netif_wake_queue(tun->dev);
>> +
>>  	strcpy(ifr->ifr_name, tun->dev->name);
>>  	return 0;
> 
> I think that patch looks OK, but I am curious why you don't clear the xoff
> state on application close, at the same time as the TX queue gets flushed?
Why bother? I mean, the packets will be dropped anyway.

Max
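For reference, the alternative Christian raises would amount to waking the
queue in the close path, next to the point where the pending queue is
purged. The fragment below is only an approximate sketch against the
2.6.26-era driver (the readq purge and the surrounding teardown are
paraphrased, not quoted), and it is not a patch that was posted in this
thread.

static int tun_chr_close(struct inode *inode, struct file *file)
{
	struct tun_struct *tun = file->private_data;

	/* ... detach bookkeeping elided ... */

	/* Drop packets queued for the (now gone) reader */
	skb_queue_purge(&tun->readq);

	/*
	 * Hypothetical alternative: clear xoff here, where the queue is
	 * flushed, instead of in tun_set_iff().  Max's point is that this
	 * buys little: with no reader attached, anything the stack
	 * transmits afterwards is dropped anyway.
	 */
	if (netif_running(tun->dev))
		netif_wake_queue(tun->dev);

	/* ... persistent vs. non-persistent teardown elided ... */
	return 0;
}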

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
