Message-Id: <20060824235859.f8840fb2.akpm@osdl.org>
Date: Thu, 24 Aug 2006 23:58:59 -0700
From: Andrew Morton <akpm@...l.org>
To: Evgeniy Polyakov <johnpol@....mipt.ru>
Cc: Christoph Hellwig <hch@...radead.org>,
lkml <linux-kernel@...r.kernel.org>,
David Miller <davem@...emloft.net>,
Ulrich Drepper <drepper@...hat.com>,
netdev <netdev@...r.kernel.org>,
Zach Brown <zach.brown@...cle.com>
Subject: Re: [take13 1/3] kevent: Core files.
On Fri, 25 Aug 2006 10:32:38 +0400
Evgeniy Polyakov <johnpol@....mipt.ru> wrote:
> On Thu, Aug 24, 2006 at 11:20:24PM -0700, Andrew Morton (akpm@...l.org) wrote:
> > On Fri, 25 Aug 2006 09:48:15 +0400
> > Evgeniy Polyakov <johnpol@....mipt.ru> wrote:
> >
> > > kmalloc is really slow actually - it always shows up somewhere near
> > > the top in profiles and brings noticeable overhead
> >
> > It shouldn't. Please describe the workload and send the profiles.
>
> An epoll-based trivial server (accept + sendfile of the same file, about
> 4k), driven by httperf with a large number of simultaneous connections.
> 3c59x NIC (with e1000 there were no ioreads and no netif_rx).
> __alloc_skb calls kmem_cache_alloc() and __kmalloc().
>
> 16158 1.3681 ioread16
> 8073 0.6835 ioread32
> 3485 0.2951 irq_entries_start
> 3018 0.2555 _spin_lock
> 2103 0.1781 tcp_v4_rcv
> 1503 0.1273 sysenter_past_esp
> 1492 0.1263 netif_rx
> 1459 0.1235 skb_copy_bits
> 1422 0.1204 _spin_lock_irqsave
> 1145 0.0969 ip_route_input
> 983 0.0832 kmem_cache_free
> 964 0.0816 __alloc_skb
> 926 0.0784 common_interrupt
> 891 0.0754 __do_IRQ
> 846 0.0716 _read_lock
> 826 0.0699 __netif_rx_schedule
> 806 0.0682 __kmalloc
> 767 0.0649 do_tcp_sendpages
> 747 0.0632 __copy_to_user_ll
> 744 0.0630 pskb_expand_head
>
That doesn't look too bad.
What's that as a percentage of total user+system time?
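
[Editor's note: for reference, a minimal sketch of the benchmark workload
described above - an epoll-based trivial server that accepts connections and
sendfile()s the same ~4k file to each client. The port number, file name and
lack of error handling are illustrative assumptions, not details taken from
the original test program.]

	#include <sys/epoll.h>
	#include <sys/sendfile.h>
	#include <sys/socket.h>
	#include <sys/stat.h>
	#include <netinet/in.h>
	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		int lsock, epfd, file_fd, one = 1;
		struct sockaddr_in addr;
		struct epoll_event ev, events[64];
		struct stat st;

		lsock = socket(AF_INET, SOCK_STREAM, 0);
		setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

		memset(&addr, 0, sizeof(addr));
		addr.sin_family = AF_INET;
		addr.sin_addr.s_addr = htonl(INADDR_ANY);
		addr.sin_port = htons(8080);		/* assumed port */
		bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
		listen(lsock, 1024);

		file_fd = open("index.html", O_RDONLY);	/* the same ~4k file */
		fstat(file_fd, &st);

		epfd = epoll_create(1024);
		ev.events = EPOLLIN;
		ev.data.fd = lsock;
		epoll_ctl(epfd, EPOLL_CTL_ADD, lsock, &ev);

		for (;;) {
			int i, n = epoll_wait(epfd, events, 64, -1);

			for (i = 0; i < n; i++) {
				if (events[i].data.fd == lsock) {
					/* new connection: send the file, then close */
					int csock = accept(lsock, NULL, NULL);
					off_t off = 0;

					if (csock < 0)
						continue;
					sendfile(csock, file_fd, &off, st.st_size);
					close(csock);
				}
			}
		}
		return 0;
	}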