Message-ID: <20080905113112.GA29926@2ka.mipt.ru>
Date: Fri, 5 Sep 2008 15:31:12 +0400
From: Evgeniy Polyakov <johnpol@....mipt.ru>
To: Johann Baudy <johaahn@...il.com>
Cc: netdev@...r.kernel.org
Subject: Re: Fwd: Packet mmap: TX RING and zero copy
Hi Johann.
On Fri, Sep 05, 2008 at 11:17:07AM +0200, Johann Baudy (johaahn@...il.com) wrote:
> > vmsplice() can be slow, try to inject header via usual send() call, or
> > better do not use it at all for testing.
> >
> vmsplice() is short in comparison to splice() (~200us)!
> This was just to show you that even this vmsplice() duration of 80us,
> which is needed for each packet, is too long when it sends only 1 packet.
> I really need a mechanism that allows sending ~40 packets of 7200K
> in one system call, to keep some CPU resources for doing other things.
> (Not spending time in kernel layers :))
Hmmm... splice()/sendfile() should be able to send the whole file in a
single syscall. This looks like a problem in the userspace code.
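For reference, here is a minimal sketch (not taken from your code; the
descriptor names are made up) of the kind of single-call loop I have in
mind, with an already-connected TCP socket sock_fd and an open file_fd:

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/stat.h>
	#include <sys/sendfile.h>

	/* Push the whole file to the socket; sendfile() may send less than
	 * requested, so loop on the offset until everything has gone out. */
	static int send_whole_file(int sock_fd, int file_fd)
	{
		struct stat st;
		off_t off = 0;

		if (fstat(file_fd, &st) < 0) {
			perror("fstat");
			return -1;
		}

		while (off < st.st_size) {
			ssize_t n = sendfile(sock_fd, file_fd, &off,
					     st.st_size - off);
			if (n <= 0) {
				perror("sendfile");
				return -1;
			}
		}
		return 0;
	}

The kernel builds the packets itself on this path, so the per-packet cost
in userspace is just a loop iteration, not a syscall per packet.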
> > kill_fasync() also took too much time (the top CPU user
> > is at the bottom, I suppose?), do you use SIGIO? Also, VMA traversal and page
> > checking are not what will be done in the network code and your project, so
> > they also add overhead.
>
> Between kill_fasync() and sys_gettimeofday(), I thought that we had
> returned to user space.
> No SIGIO. But FYI, I use the PREEMPT_RT patch.
Does it also push softirq processing into threads?
> > Please try without vmsplice() at all; the usual
> > splice()/sendfile() _has_ to saturate the link, otherwise we have a
> > serious problem.
>
> I've already tried sendfile() alone with a standard TCP/UDP socket. It
> did not saturate the link.
> Around the same bitrate.
This worries me a lot: sendfile() should be a single syscall which very
optimally creates network packets, taking into account the MTU and hardware
capabilities. I do believe it is a problem with the userspace code.
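If you do want splice() rather than sendfile(), here is a hedged sketch of
the pipe-based path (file -> pipe -> socket, no vmsplice() at all; file_fd,
sock_fd and the length are placeholders, not your real code):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Move "len" bytes from file_fd to sock_fd through a pipe, so the
	 * data never has to be copied into userspace. */
	static int splice_file_to_socket(int file_fd, int sock_fd, size_t len)
	{
		int pipefd[2];

		if (pipe(pipefd) < 0) {
			perror("pipe");
			return -1;
		}

		while (len > 0) {
			ssize_t in, out, left;

			/* File pages into the pipe. */
			in = splice(file_fd, NULL, pipefd[1], NULL, len,
				    SPLICE_F_MOVE | SPLICE_F_MORE);
			if (in <= 0) {
				perror("splice from file");
				break;
			}

			/* Drain the pipe into the socket. */
			for (left = in; left > 0; left -= out) {
				out = splice(pipefd[0], NULL, sock_fd, NULL,
					     left, SPLICE_F_MOVE | SPLICE_F_MORE);
				if (out <= 0) {
					perror("splice to socket");
					goto out;
				}
			}
			len -= in;
		}
	out:
		close(pipefd[0]);
		close(pipefd[1]);
		return len == 0 ? 0 : -1;
	}

SPLICE_F_MOVE/SPLICE_F_MORE are only hints; the important part is that the
data stays in the kernel the whole way.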
--
Evgeniy Polyakov