Message-ID: <133859123.129202.1432830817627.JavaMail.open-xchange@oxuslxltgw11.lxa.perfora.net>
Date:	Thu, 28 May 2015 12:33:37 -0400 (EDT)
From:	"jsullivan@...nsourcedevel.com" <jsullivan@...nsourcedevel.com>
To:	Eric Dumazet <eric.dumazet@...il.com>,
	John Fastabend <john.fastabend@...il.com>
Cc:	netdev@...r.kernel.org
Subject: Re: Drops in qdisc on ifb interface


> On May 28, 2015 at 12:26 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>
> On Thu, 2015-05-28 at 08:45 -0700, John Fastabend wrote:
> > If you're experimenting, one thing you could do is create many
> > ifb devices and load balance across them from tc. I'm not
> > sure if this would be practical in your setup or not, but it might
> > be worth trying.
> >
> > One thing I've been debating adding is the ability to match
> > on current cpu_id in tc which would allow you to load balance by
> > cpu. I could send you a patch if you wanted to test it. I would
> > expect this to help somewhat with 'single queue' issue but sorry
> > haven't had time yet to test it out myself.
>
> It seems John uses a single 1Gbps flow, so only one cpu would receive
> NIC interrupts.
>
> The only way he could get better results would be to schedule IFB work
> on another core.
>
> (Assuming one cpu is 100% busy servicing NIC + IFB, but I really doubt
> it...)
>
>
>
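[A rough sketch of the many-ifb idea above, for anyone who wants to try it. The device names, the interface (eth0), and the even/odd source-address split are illustrative assumptions, not a tested configuration:

```shell
# Two ifb devices instead of one (names are illustrative).
ip link add ifb0 type ifb
ip link add ifb1 type ifb
ip link set ifb0 up
ip link set ifb1 up

# Redirect eth0 ingress into the ifbs, splitting flows on the low bit
# of the IPv4 source address (offset 12 into the IP header).
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0x00000000 0x00000001 at 12 \
    action mirred egress redirect dev ifb0
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0x00000001 0x00000001 at 12 \
    action mirred egress redirect dev ifb1

# Each ifb then carries its own shaping qdisc, e.g.:
tc qdisc add dev ifb0 root handle 1: htb default 10
tc qdisc add dev ifb1 root handle 1: htb default 10
```

A proper hash-based spread (u32 hash tables, or the cpu_id match John mentions) would balance better than a static bit match, but this shows the shape of it.]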
Our initial testing has been single-flow, but the ultimate purpose is processing
real-time video in a complex application which ingests associated metadata,
posts to a consumer-facing cloud, and does reporting back - so lots of different
traffic types with very different demands - a perfect tc environment.

CPU utilization is remarkably light.  Every once in a while, we see a single CPU
about 50% utilized with si (softirq time).  Thanks, all - John
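
[For what it's worth, a quick way to see where that si time lands, and to push the NIC + IFB work onto another core as Eric suggests. The IRQ number (42) and the CPU masks below are hypothetical; check /proc/interrupts on the actual box:

```shell
# Per-CPU softirq load: watch the %soft column.
mpstat -P ALL 1

# Find the NIC's IRQ and pin it to CPU 2 (mask 0x4).
grep eth0 /proc/interrupts
echo 4 > /proc/irq/42/smp_affinity

# Possibly also spread ifb receive processing via RPS (mask 0xe =
# CPUs 1-3); whether RPS helps on ifb is worth testing, not a given.
echo e > /sys/class/net/ifb0/queues/rx-0/rps_cpus
```
]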
