Message-Id: <201012011328.32937.thomas@fjellstrom.ca>
Date: Wed, 1 Dec 2010 13:28:32 -0700
From: Thomas Fjellstrom <thomas@...llstrom.ca>
To: Pekka Pietikainen <pp@...oulu.fi>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: low overhead packet capturing on linux
On December 1, 2010, Pekka Pietikainen wrote:
> On Tue, Nov 30, 2010 at 05:28:05PM -0700, Thomas Fjellstrom wrote:
> > I'm working on a little tool to monitor and measure bandwidth use on a
> > VM host, down to keeping track of all guest and host bandwidth,
> > including, eventually, per-protocol (layer 7) use.
> >
> > Right now I have a pretty simple setup: I set up an AF_PACKET socket,
> > select on it, and read packets as they come in. Obviously, this has a
> > fatal flaw: it takes a rather large amount of CPU time just to capture
> > the packets. On a GbE interface it easily uses 60-80% of a core (on a
> > 2.6 GHz AMD Phenom II) just to capture the packets; trying to do
> > anything fancy with them will likely cause the kernel to drop some.
> >
> > So what I'm looking for is a very low-overhead way to capture packets.
> > I've come up with a few ideas, though I have no idea whether some of
> > them would even work.
>
> Have you checked out
>
> http://public.lanl.gov/cpw/ (IIRC it's actually a part of recent libpcap,
> but could be wrong) and http://www.ntop.org/PF_RING.html ?
Hi,
Thanks, yes: I've at least seen the cpw page, and probably briefly looked at
the PF_RING stuff before, but I'll take a closer look this time, thanks :)
When I was looking before, I was unduly rejecting anything that required
patching the kernel or adding special drivers, but if it really can help, I
might as well take a look.
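For reference, the capture path I have now boils down to something like this
(a simplified, untested sketch, not my actual code; error handling trimmed):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>

int main(void)
{
    /* ETH_P_ALL: see every protocol on every interface (needs CAP_NET_RAW) */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char buf[65536];
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }
        /* One syscall (plus a copy) per packet -- this is where the
         * CPU time goes at GbE packet rates. */
        ssize_t len = recv(fd, buf, sizeof(buf), 0);
        if (len < 0) { perror("recv"); break; }
        /* account bytes/packets per guest here */
    }
    close(fd);
    return 0;
}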
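The in-tree thing I should probably benchmark first is the PACKET_MMAP ring
(PACKET_RX_RING with TPACKET_V2): the kernel writes frames straight into a
ring shared with userspace, so the fast path needs no copy and no per-packet
syscall. Another rough, untested sketch (the block/frame sizes below are
arbitrary, just something to start tuning from):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <poll.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    int ver = TPACKET_V2;
    if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof(ver)) < 0) {
        perror("PACKET_VERSION");
        return 1;
    }

    struct tpacket_req req = {
        .tp_block_size = 1 << 22,                    /* 4 MiB per block  */
        .tp_block_nr   = 64,
        .tp_frame_size = 1 << 11,                    /* 2 KiB per frame  */
        .tp_frame_nr   = ((1 << 22) / (1 << 11)) * 64,
    };
    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0) {
        perror("PACKET_RX_RING");
        return 1;
    }

    size_t maplen = (size_t)req.tp_block_size * req.tp_block_nr;
    unsigned char *ring = mmap(NULL, maplen, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned frame = 0;
    for (;;) {
        struct tpacket2_hdr *hdr =
            (void *)(ring + (size_t)frame * req.tp_frame_size);
        if (!(hdr->tp_status & TP_STATUS_USER)) {
            /* Ring empty: block in poll() until the kernel fills it. */
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            if (poll(&pfd, 1, -1) < 0) { perror("poll"); break; }
            continue;
        }
        /* Packet data starts at (char *)hdr + hdr->tp_mac; just count
         * hdr->tp_len bytes here, then hand the frame back. */
        hdr->tp_status = TP_STATUS_KERNEL;
        frame = (frame + 1) % req.tp_frame_nr;
    }
    munmap(ring, maplen);
    close(fd);
    return 0;
}

That should turn one recv() per packet into roughly one poll() per burst,
which as far as I can tell is the same basic trick PF_RING is built around.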
--
Thomas Fjellstrom
thomas@...llstrom.ca