Date: Wed, 27 Aug 2014 16:20:06 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mike Travis <travis@....com>
Cc: mingo@...hat.com, tglx@...utronix.de, hpa@...or.com, msalter@...hat.com,
	dyoung@...hat.com, riel@...hat.com, peterz@...radead.org, mgorman@...e.de,
	linux-kernel@...r.kernel.org, x86@...nel.org, linux-mm@...ck.org
Subject: Re: [PATCH 0/2] x86: Speed up ioremap operations

On Wed, 27 Aug 2014 16:15:28 -0700 Mike Travis <travis@....com> wrote:

> >> There are two causes for requiring a restart/reload of the drivers.
> >> First is periodic preventive maintenance (PM) and the second is if
> >> any of the devices experience a fatal error. Both of these trigger
> >> this excessively long delay in bringing the system back up to full
> >> capability.
> >>
> >> The problem was tracked down to a very slow IOREMAP operation and
> >> the excessively long ioresource lookup to ensure that the user is
> >> not attempting to ioremap RAM. These patches provide a speed up
> >> to that function.
> >
> > With what result?
> >
>
> Early measurements on our in-house lab system (with far fewer cpus
> and memory) show about a 60-75% speed increase. They have 31 devices,
> 3000+ cpus, 10+Tb of memory. We have 20 devices, 480 cpus, ~2Tb of
> memory. I expect their ioresource list to be about 5-10 times longer.
> [But their system is in production, so we have to wait for the next
> scheduled PM interval before a live test can be done.]

So you expect 1+ hours?  That's still nuts.
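[Archive note: the thread describes ioremap spending most of its time verifying,
page by page, that the requested range is not System RAM by scanning the long
iomem resource list. The standalone C sketch below is not the kernel code and
not Mike's patch; all names in it (struct resource fields, the two check
functions, the toy list in main) are illustrative. It only models the cost
difference between a per-page scan and a single whole-range walk; the real
kernel paths involved are page_is_ram() and walk_system_ram_range().]

/*
 * Hedged, simplified model: why checking each page of a mapping against
 * a long resource list is O(pages * resources), while one walk over the
 * list for the whole range is O(resources).
 */
#include <stdio.h>
#include <stdlib.h>

struct resource {
	unsigned long start, end;	/* inclusive range, in page frames */
	int is_ram;			/* 1 = "System RAM", 0 = MMIO/other */
	struct resource *next;		/* flat list stands in for the resource tree */
};

/* Slow pattern: one full list walk per page in the requested mapping. */
static int region_is_ram_per_page(struct resource *head,
				  unsigned long first, unsigned long npages)
{
	for (unsigned long pfn = first; pfn < first + npages; pfn++)
		for (struct resource *r = head; r; r = r->next)
			if (r->is_ram && pfn >= r->start && pfn <= r->end)
				return 1;	/* RAM found: refuse the mapping */
	return 0;				/* cost: O(npages * nresources) */
}

/* Sped-up pattern: a single list walk covering the whole range. */
static int region_is_ram_one_walk(struct resource *head,
				  unsigned long first, unsigned long npages)
{
	unsigned long last = first + npages - 1;

	for (struct resource *r = head; r; r = r->next)
		if (r->is_ram && r->start <= last && r->end >= first)
			return 1;		/* RAM overlaps the range */
	return 0;				/* cost: O(nresources) */
}

int main(void)
{
	/* Toy list: many small MMIO windows plus one "System RAM" entry. */
	struct resource *head = NULL;
	for (int i = 0; i < 1000; i++) {
		struct resource *r = malloc(sizeof(*r));
		r->start = (unsigned long)i * 16;
		r->end = r->start + 15;
		r->is_ram = (i == 500);
		r->next = head;
		head = r;
	}

	unsigned long first = 8000, npages = 256;
	printf("per-page walk says RAM: %d\n",
	       region_is_ram_per_page(head, first, npages));
	printf("single walk   says RAM: %d\n",
	       region_is_ram_one_walk(head, first, npages));
	return 0;
}

[Both checks agree on the result; only the amount of list walking differs,
which is why a long ioresource list on a large machine makes the per-page
approach so slow.]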