Message-ID: <20090703185414.GP23611@kernel.dk>
Date:	Fri, 3 Jul 2009 20:54:14 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	Matthew Wilcox <matthew@....cx>, linux-kernel@...r.kernel.org,
	"Styner, Douglas W" <douglas.w.styner@...el.com>,
	Chinang Ma <chinang.ma@...el.com>,
	"Prickett, Terry O" <terry.o.prickett@...el.com>,
	Matthew Wilcox <matthew.r.wilcox@...el.com>,
	Eric.Moore@....com, DL-MPTFusionLinux@....com
Subject: Re: >10% performance degradation since 2.6.18

On Fri, Jul 03 2009, Andi Kleen wrote:
> 
> Matthew Wilcox <matthew@....cx> writes:
> >
> > ======oprofile CPU_CLK_UNHALTED for top 30 functions
> > Cycles% 2.6.18-92.el5-op           Cycles% 2.6.30
> > 70.1409 <database>                 67.0207 <database>
> > 1.3556 mpt_interrupt               1.7029 mpt_interrupt
> 
> It's strange that mpt_interrupt is that much more costly in 2.6.30
> than in 2.6.18. I diffed 2.6.30's drivers/message/fusion/mptbase.c
> against RHEL 5.3's and they seem to be about the same.
> 
> So why does it cost 0.5% more in 2.6.30?
> 
> [adding MPT maintainers]

Look at the irqs/sec rate: it's higher by about the same percentage. So
it's likely not that the irq handler got more costly, it's likely just
called that much more often. It could be the IO pattern causing more
commands to be issued (which leads to more interrupts, etc).
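
A quick way to check that is to sample /proc/interrupts over an
interval. A minimal sketch in Python (the "mpt" label is an assumption,
match it against whatever handler name /proc/interrupts actually shows
for the controller):

import time

def irq_counts(name):
    """Sum per-CPU counts of /proc/interrupts lines mentioning `name`."""
    total = 0
    with open("/proc/interrupts") as f:
        for line in f:
            fields = line.split()
            # Data lines look like "16:  1234  5678  IO-APIC-fasteoi  mptbase"
            if not fields or not fields[0].endswith(":"):
                continue
            if name in line:
                for field in fields[1:]:
                    if field.isdigit():
                        total += int(field)
                    else:
                        break  # reached the chip/handler name columns
    return total

interval = 10  # seconds
before = irq_counts("mpt")
time.sleep(interval)
after = irq_counts("mpt")
print("~%d irqs/sec" % ((after - before) / interval))

Run that on both kernels under the same load; if the rate tracks the
extra mpt_interrupt cycles, the handler itself hasn't gotten slower.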

> > 1.1622 __blockdev_direct_IO        1.1443 kmem_cache_alloc
> 
> It would be interesting to find out why kmem_cache_alloc
> is that much more expensive. Either it is called more often or the
> allocator is slower. Any chance of a callgraph profile run so we
> can see the callers?

Could be more IO as well; that path hits the allocator often.

I agree that some callgraph data would at least eliminate the guessing
here. Some detailed IO statistics would help too: the amount of data
transferred, as well as iostat info, to see if the pattern is
significantly worse.
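
For the IO side, the same per-device numbers iostat reports can be
pulled straight from /proc/diskstats. A minimal sketch (the device name
"sda" is an assumption; field layout per Documentation/iostats.txt,
sectors are 512 bytes):

import time

def diskstats(dev):
    """Return (reads, sectors_read, writes, sectors_written) for `dev`."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # major minor name reads merged sectors ms writes merged sectors ...
            if fields[2] == dev:
                return (int(fields[3]), int(fields[5]),
                        int(fields[7]), int(fields[9]))
    raise ValueError("%s not found in /proc/diskstats" % dev)

interval = 10  # seconds
r0, rs0, w0, ws0 = diskstats("sda")
time.sleep(interval)
r1, rs1, w1, ws1 = diskstats("sda")
print("reads/s: %.1f  read KB/s: %.1f" %
      ((r1 - r0) / float(interval), (rs1 - rs0) * 512 / 1024.0 / interval))
print("writes/s: %.1f  write KB/s: %.1f" %
      ((w1 - w0) / float(interval), (ws1 - ws0) * 512 / 1024.0 / interval))

Comparing IOPS against data transferred between the two kernels would
show whether the same amount of data is being moved in more, smaller
commands.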

-- 
Jens Axboe

