Message-Id: <1241003006.6554.322.camel@blade.ines.ro>
Date: Wed, 29 Apr 2009 14:03:26 +0300
From: Radu Rendec <radu.rendec@...s.ro>
To: Jesper Dangaard Brouer <hawk@...u.dk>
Cc: Jarek Poplawski <jarkao2@...il.com>,
Denys Fedoryschenko <denys@...p.net.lb>,
netdev <netdev@...r.kernel.org>
Subject: Re: htb parallelism on multi-core platforms
On Wed, 2009-04-29 at 12:31 +0200, Jesper Dangaard Brouer wrote:
> Just noticed that Jeremy Kerr has made some python scripts to make it even
> easier to use oprofile.
> See http://ozlabs.org/~jk/diary/tech/linux/hiprofile-v1.0.diary/
Thanks for the hint; I'll have a look at the scripts too.
> I would rather want to see the output from cls_u32.ko
>
> opreport --symbols -cl cls_u32.ko --image-path=/lib/modules/`uname -r`/kernel/
samples  %        image name   symbol name
-------------------------------------------------------------------------------
38424    100.000  cls_u32.ko   u32_classify
  38424  100.000  cls_u32.ko   u32_classify [self]
-------------------------------------------------------------------------------
Well, this doesn't tell us much more, but I think it's pretty obvious
what cls_u32 is doing :)
> > Am I misinterpreting the results, or does it look like the real problem
> > is actually packet classification?
>
> Yes, it looks like the problem is your u32 classification setup... Perhaps
> its not doing what you think its doing... didn't Jarek provide some hints
> for you to follow?
I've just realized that I might be hitting the worst-case hash bucket with
the (IP) destinations I chose for the test traffic. I'll try again with
different destinations and see whether that changes the numbers.
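For reference, u32 hash tables are the usual way to keep filter chains short
and avoid worst-case buckets; a sketch along these lines, keyed on the last
octet of the destination address (the device name, handles, classids and
addresses below are placeholders, not my actual setup):

```shell
# Create a 256-bucket u32 hash table (handle 2:) on the egress device.
tc filter add dev eth1 parent 1:0 prio 1 handle 2: protocol ip u32 divisor 256

# Link into it from the root table, hashing on the last octet of the
# destination IP (offset 16 into the IP header).
tc filter add dev eth1 parent 1:0 prio 1 protocol ip u32 \
    ht 800:: match ip dst 10.0.0.0/8 \
    hashkey mask 0x000000ff at 16 link 2:

# Per-destination rules then land in their own buckets; e.g. 10.0.0.5
# hashes to bucket 2:5:, so lookup cost stays flat as rules are added.
tc filter add dev eth1 parent 1:0 prio 1 protocol ip u32 \
    ht 2:5: match ip dst 10.0.0.5/32 flowid 1:105
```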
I haven't tried tweaking htb_hysteresis yet (that was one of Jarek's
hints). It's doubtful it would help, since the real problem seems to be
in u32 (not htb), but I'll give it a try anyway.
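For what it's worth, htb_hysteresis is exposed as a module parameter of
sch_htb, so toggling it shouldn't require a rebuild (assuming sch_htb is
loaded as a module):

```shell
# Show the current setting, then turn hysteresis off.
cat /sys/module/sch_htb/parameters/htb_hysteresis
echo 0 > /sys/module/sch_htb/parameters/htb_hysteresis
```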
Another hint was to make sure that "tc class add" goes before
corresponding "tc filter add" - checked: it's ok.
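That is, the ordering rule is simply this (device, handles and rate are
placeholders):

```shell
# The class must exist before any filter points at it via flowid:
tc class add dev eth1 parent 1:1 classid 1:105 htb rate 1mbit
tc filter add dev eth1 parent 1:0 protocol ip u32 \
    match ip dst 10.0.0.5/32 flowid 1:105
```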
Another interesting hint came from Calin Velea, whose tests suggest that
overall performance is better with NAPI turned off, since (rx) interrupt
work is then distributed across all CPUs/cores. I'll try to replicate
this as soon as I make some small changes to my test setup so that I can
measure overall htb throughput on the egress NIC (bps and pps).
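A minimal sketch of that measurement, sampling the egress NIC's tx counters
from sysfs twice and computing the rates (eth1 and the helper name
compute_rates are my own placeholders):

```shell
# compute_rates BYTES0 PKTS0 BYTES1 PKTS1 SECONDS -> prints "BPS PPS"
compute_rates() {
    echo "$(( ($3 - $1) * 8 / $5 )) $(( ($4 - $2) / $5 ))"
}

# On a live box the sampling would look like:
#   b0=$(cat /sys/class/net/eth1/statistics/tx_bytes)
#   p0=$(cat /sys/class/net/eth1/statistics/tx_packets)
#   sleep 10   # then read b1/p1 the same way, and:
#   compute_rates "$b0" "$p0" "$b1" "$p1" 10
compute_rates 0 0 1250000 1000 10    # prints "1000000 100"
```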
Thanks,
Radu Rendec