Date:	Tue, 23 Sep 2008 11:57:08 +0000
From:	Jarek Poplawski <>
To:	Badalian Vyacheslav <>
Subject: Re: Machine Check Exception Re: NetDev! Please help!

On Tue, Sep 23, 2008 at 02:36:05PM +0400, Badalian Vyacheslav wrote:
> > 2) Non-default qdiscs (any qdiscs added with tc): there is only one
> > root qdisc (with its tree) as before, dequeued to all tx queues (if
> > available). Since there is only one qdisc lock, plus an additional
> > flag preventing other processes from running the qdisc at the same
> > time, there is not much SMP advantage, except in tx locking. All
> > previous tc configs should work without changes (except sch_prio and
> > sch_rr used for multiqueuing, now replaced by sch_multiq and
> > act_skbedit). Probably in some cases adding sch_multiq to a tree to
> > separate qdisc queues per tx queue could be useful.
> >   
> Thanks very much for the detailed information!
> That sounds great to me. I can also stress-test this feature in our
> network if you need it.

Actually, I don't use these things much myself, so I guess you'll need
this more than I do. The main issues were tested and fixed, but there can
always be details that go unused, unnoticed, or unreported until someone
decides to use this.

> I only have two questions...
> 1. If the kernel uses the default setup (no tc rules created by the
> user, just what the network card driver/module creates automatically),
> will it work correctly with traffic that must be delivered "as is" (not
> shaped), like IPTV or other multicast/unicast video streams?
> If I understand the logic: we have 8 CPUs/cores and 4 TX queues and 4
> RX queues, with one CPU linked to one TX/RX pair. But if one CPU is
> loaded by some process, that CPU will send its packets later than the
> other CPUs, and the packets get delayed; yet streams must arrive at the
> receiving device packet by packet. I understand that this simply needs
> a hash function for the TX queue, but if I understand correctly, RX
> cannot separate packets into different queues by a hash function (is
> that done by hardware?), and packets may be delayed at the RX stage
> because one CPU gets them later than needed?
> I don't want to waste your time; just tell me whether it will work
> correctly by default, and I will sleep easy :)

Yes, I'm not sure I understand the question, but I think you shouldn't
expect too much, at least in 2.6.27. There is a lot of work going on in
drivers around this multiqueueing (and RX hashing), which should be
available in later kernels, but I'm not tracking it too closely...
Anyway, with the basic support (which really isn't common among drivers
in 2.6.27 yet), this separation is done only for TX, just before
enqueuing.

> 2. I can't locate a module like sch_multiq in the latest 2.6.27-rc
> tree, and I can't find any information about it on Google... I need to
> know only one thing: what parameters for hashing were planned for it?

sch_multiq doesn't use any hashing parameters now - it uses the queue
mapping stored in packets to separate them into different bands/queues.
So, by default it will respect the common hashing. You can change this
using any filter with act_skbedit (Documentation/networking/multiqueue.txt).
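For illustration, a minimal sketch of this setup, along the lines of the
example in Documentation/networking/multiqueue.txt (the device name eth0
and the destination address here are placeholders, and it requires root,
a multiqueue-capable NIC, and a kernel with sch_multiq and act_skbedit):

```shell
# Replace the root qdisc with sch_multiq (one band per hardware TX queue):
tc qdisc add dev eth0 root handle 1: multiq

# Override the default queue mapping for one flow: steer traffic to
# 192.168.0.3 into band/queue 3 via a u32 filter with act_skbedit.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 192.168.0.3/32 \
    action skbedit queue_mapping 3

# Inspect the result:
tc -s qdisc show dev eth0
```

Traffic not matched by a filter keeps the queue mapping chosen by the
common TX hashing, as described above.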

> Thanks and thanks again! And again sorry for my English.
Don't worry, your English is understandable...

Jarek P.