Date: Fri, 28 Dec 2018 01:31:11 +0000
From: "Wang, Kemi" <kemi.wang@...el.com>
To: Waiman Long <longman@...hat.com>, Linus Torvalds <torvalds@...ux-foundation.org>,
	"vbabka@...e.cz" <vbabka@...e.cz>, Davidlohr Bueso <dave@...olabs.net>
CC: "yang.shi@...ux.alibaba.com" <yang.shi@...ux.alibaba.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Matthew Wilcox <willy@...radead.org>, "mhocko@...nel.org" <mhocko@...nel.org>,
	Colin King <colin.king@...onical.com>, Andrew Morton <akpm@...ux-foundation.org>,
	"ldufour@...ux.vnet.ibm.com" <ldufour@...ux.vnet.ibm.com>, "lkp@...org" <lkp@...org>,
	"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
	"Wang, Kemi" <kemi.wang@...el.com>
Subject: RE: [LKP] [mm] 9bc8039e71: will-it-scale.per_thread_ops -64.1% regression

Hi, Waiman

Did you post that patch? Let's see if it helps.

-----Original Message-----
From: LKP [mailto:lkp-bounces@...ts.01.org] On Behalf Of Waiman Long
Sent: Tuesday, November 6, 2018 6:40 AM
To: Linus Torvalds <torvalds@...ux-foundation.org>; vbabka@...e.cz; Davidlohr Bueso <dave@...olabs.net>
Cc: yang.shi@...ux.alibaba.com; Linux Kernel Mailing List <linux-kernel@...r.kernel.org>; Matthew Wilcox <willy@...radead.org>; mhocko@...nel.org; Colin King <colin.king@...onical.com>; Andrew Morton <akpm@...ux-foundation.org>; ldufour@...ux.vnet.ibm.com; lkp@...org; kirill.shutemov@...ux.intel.com
Subject: Re: [LKP] [mm] 9bc8039e71: will-it-scale.per_thread_ops -64.1% regression

On 11/05/2018 05:14 PM, Linus Torvalds wrote:
> On Mon, Nov 5, 2018 at 12:12 PM Vlastimil Babka <vbabka@...e.cz> wrote:
>> I didn't spot an obvious mistake in the patch itself, so it looks
>> like some bad interaction between the scheduler and the mmap downgrade?
> I'm thinking it's RWSEM_SPIN_ON_OWNER that ends up being confused by
> the downgrade.
>
> It looks like the benchmark used to be basically CPU-bound, at about
> 800% CPU, and now it's somewhere in the 200% CPU region:
>
> will-it-scale.time.percent_of_cpu_this_job_got
>
>   800 +-+-------------------------------------------------------------------+
>       |.+.+.+.+.+.+.+. .+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+..+.+.+.+. .+.+.+.|
>   700 +-+             +.                                           +        |
>       |                                                                     |
>   600 +-+                                                                   |
>       |                                                                     |
>   500 +-+                                                                   |
>       |                                                                     |
>   400 +-+                                                                   |
>       |                                                                     |
>   300 +-+                                                                   |
>       |                                                                     |
>   200 O-O O O O O O                                                         |
>       |              O O O O O O O O O O O O O O O O O O                    |
>   100 +-+-------------------------------------------------------------------+
>
> which sounds like the downgrade really messes with the "spin waiting
> for lock" logic.
>
> I'm thinking it's the "wake up waiter" logic that has some bad
> interaction with spinning, and breaks that whole optimization.
>
> Adding Waiman and Davidlohr to the participants, because they seem to
> be the obvious experts in this area.
>
>                  Linus

Optimistic spinning on rwsem is done only by writers spinning on a
writer-owned rwsem. If a write lock is downgraded to a read lock, all the
spinning waiters will quit. That may explain the drop in CPU utilization.

I do have an old patch that enables a certain amount of reader spinning,
which may help the situation. I can rebase that and send it out for review
if people are interested.

Cheers,
Longman

_______________________________________________
LKP mailing list
LKP@...ts.01.org
https://lists.01.org/mailman/listinfo/lkp