Date:	Wed, 16 Jan 2008 01:04:59 +0300
From:	slavon@...telecom.ru
To:	slavon@...telecom.ru
Cc:	Jarek Poplawski <jarkao2@...il.com>,
	Patrick McHardy <kaber@...sh.net>, netdev@...r.kernel.org
Subject: Re: Packetlost when "tc qdisc del dev eth0 root"

Good night! =)

Sorry, I was wrong earlier. The problem is more serious than I thought.

Let's look at this scheme:

Class 1
---qdisc
------- 10k classes
Class 2
---qdisc
------- 10k classes

All traffic goes to class 2; the qdisc under class 1 holds no packets, so
deleting it should not lose anything, in theory. But when I delete the
class 1 qdisc (all of its children are deleted too), the box freezes for
2-5 seconds and does not forward any traffic during that time (roughly the
test sketched below). Is that the big lock over the whole tree?

Is this normal, or does the code need finer-grained locking?
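
The test looked roughly like this (only a sketch; the handles, rates and
the loop are example values, not my exact commands):

  # example setup: two HTB classes, each with its own inner qdisc and ~10k children
  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit   # "class 1"
  tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit   # "class 2"
  tc qdisc add dev eth0 parent 1:10 handle 10: htb
  tc qdisc add dev eth0 parent 1:20 handle 20: htb
  for i in $(seq 1 10000); do   # (same loop again under 20:)
      tc class add dev eth0 parent 10: classid 10:$(printf '%x' $i) htb rate 1mbit
  done
  # all traffic goes to 1:20, the 10: subtree is idle; now delete it
  tc qdisc del dev eth0 parent 1:10
  # -> the box stops forwarding for 2-5 seconds while the subtree is torn down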

Thanks!

> Quoting Jarek Poplawski <jarkao2@...il.com>:
>
>> Patrick McHardy wrote, On 01/15/2008 05:05 PM:
>>
>>> Badalian Vyacheslav wrote:
>>
>> ...
>>
>>> Yes, packets in the old qdisc are lost.
>>>
>>>> Maybe when tc makes changes it should create a second queue (a hash of
>>>> rules, or whatever you call it), apply the changes there, and then
>>>> replace the old queue rules with the newly created ones.
>>>> The logic would be:
>>>> 1. Take a snapshot.
>>>> 2. Apply the changes to the snapshot.
>>>> 3. Direct all new packets to the snapshot.
>>>> 4. Once the old queue has no packets left, delete it.
>>>> 5. The snapshot becomes the default.
>>>
>>>
>>> That doesn't really work since qdiscs keep internal state that
>>> in large part depends on the packets queued. Take the qlen as
>>> a simple example: the new qdisc doesn't know about the packets
>>> in the old one and will exceed the limit.
>>
>> But some similar alternative to killing packets 'to death' could be
>> imagined, I suppose (in the future, of course!). For example, doing
>> the switch automatically after the last packet has been dequeued
>> (maybe even with some 'special' function/mode for this). After all,
>> even with some accuracy lost, it could be less visible to clients
>> than the current way?
>>
>> Regards,
>> Jarek P.
>
> Hmmm... I found a way to fix this for myself, but it doesn't look good.
>
> The scheme looks like this:
> Root - prio, bands 3, priomap 0 0 0 0 ...
> --- Class 1
> --- Class 2
> -------- Copy of the whole table (previously this qdisc was the root)
> --- Class 3
> -------- Copy of the whole table (previously this qdisc was the root)
>
> 2. Add a filter to the root - flowid all packets to class 2.
> 3. Delete the qdisc on class 3.
> 4. Build the whole table on class 3 again (~20k qdiscs and 20k classes).
> 5. Replace the filter on the root - flowid all packets to class 3.
> 6. When an update is needed again, go back to step 3, but use class 2.
> (Roughly as in the sketch below.)
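>
> In tc terms the swap looks roughly like this (only a sketch; the handles,
> the catch-all u32 match and the build_table helper are examples, not the
> real script):
>
>   # root prio with 3 bands, everything mapped to band 0 by default
>   tc qdisc add dev eth0 root handle 1: prio bands 3 priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>   # build the full table under band 2 (1:2) and band 3 (1:3)
>   build_table 1:2   # hypothetical helper: adds the ~20k qdiscs/classes under the given class
>   build_table 1:3
>   # send all traffic to 1:2
>   tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match u32 0 0 flowid 1:2
>   # to update: rebuild the idle copy, then repoint the catch-all filter
>   tc qdisc del dev eth0 parent 1:3
>   build_table 1:3
>   tc filter del dev eth0 parent 1: protocol ip prio 1
>   tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match u32 0 0 flowid 1:3
>   # (between the del and the add, packets briefly fall back to band 0 / class 1)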
>
> Everything works well, and no packets are dropped =)
> But I have more than 45k classes and qdiscs... At some point I will need
> a patch to raise the maximum number of qdiscs and classes above 65k (> 0xfffe) =)))
> Also, tc command performance becomes very bad once I have more than 10k rules.
>
> Thanks =)
>