Message-ID: <20201031061049.GA25495@intel.com>
Date: Sat, 31 Oct 2020 14:10:49 +0800
From: Philip Li <philip.li@...el.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: "Chen, Rong A" <rong.a.chen@...el.com>, Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
	Alexey Dobriyan <adobriyan@...il.com>, Chris Wilson <chris@...is-wilson.co.uk>,
	Hugh Dickins <hughd@...gle.com>, Jani Nikula <jani.nikula@...ux.intel.com>,
	Matthew Auld <matthew.auld@...el.com>, William Kucharski <william.kucharski@...cle.com>,
	Qian Cai <cai@...hat.com>, LKML <linux-kernel@...r.kernel.org>,
	lkp@...ts.01.org, lkp@...el.com, zhengjun.xing@...el.com
Subject: Re: [LKP] Re: [mm] e6e88712e4: stress-ng.tmpfs.ops_per_sec -69.7% regression

On Fri, Oct 30, 2020 at 02:58:35PM +0000, Matthew Wilcox wrote:
> On Fri, Oct 30, 2020 at 10:02:45PM +0800, Chen, Rong A wrote:
> > On 10/30/2020 9:17 PM, Matthew Wilcox wrote:
> > > On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
> > > > Details are as below:
> > > > -------------------------------------------------------------------------------------------------->
> > > >
> > > > To reproduce:
> > > >
> > > >         git clone https://github.com/intel/lkp-tests.git
> > > >         cd lkp-tests
> > > >         bin/lkp install job.yaml  # job file is attached in this email
> > > >         bin/lkp run     job.yaml
> > >
> > > Do you actually test these instructions before you send them out?
> > >
> > > hdd_partitions: "/dev/disk/by-id/ata-WDC_WD2500BEKT-00PVMT0_WD-WX11A23L4840-part1"
> > > ssd_partitions: "/dev/nvme1n1p1 /dev/nvme0n1p1"
> > > rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2CW240A3_CVCV204303WP240CGN-part1"
> > >
> > > That's _very_ specific to a given machine.  I'm not familiar with
> > > this test, so I don't know what I need to change.
> >
> > Hi Matthew,
> >
> > Sorry about that, I copied the job.yaml file from the server.
> > The right way is to set your own disk partitions in the yaml;
> > please see https://github.com/intel/lkp-tests#run-your-own-disk-partitions.
> >
> > There is another reproduce script attached in the original mail
> > for your reference.
>
> Can you reproduce this?  Here's my results:

Thanks for the quick check; we will provide an update right after the
weekend. Sorry for any inconvenience on the reproduction side so far.
We need to improve this part.
>
> # stress-ng "--timeout" "100" "--times" "--verify" "--metrics-brief" "--sequential" "96" "--class" "memory" "--minimize" "--exclude" "spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib"
> stress-ng: info:  [7670] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
> stress-ng: info:  [7670] dispatching hogs: 96 tmpfs
> stress-ng: info:  [7670] successful run completed in 100.23s (1 min, 40.23 secs)
> stress-ng: info:  [7670] stressor       bogo ops real time  usr time  sys time   bogo ops/s     bogo ops/s
> stress-ng: info:  [7670]                           (secs)    (secs)    (secs)   (real time)  (usr+sys time)
> stress-ng: info:  [7670] tmpfs              8216    100.10    368.02    230.89        82.08           13.72
> stress-ng: info:  [7670] for a 100.23s run time:
> stress-ng: info:  [7670]     601.38s available CPU time
> stress-ng: info:  [7670]     368.71s user time   ( 61.31%)
> stress-ng: info:  [7670]     231.55s system time ( 38.50%)
> stress-ng: info:  [7670]     600.26s total time  ( 99.81%)
> stress-ng: info:  [7670] load average: 78.32 27.87 10.10
> _______________________________________________
> LKP mailing list -- lkp@...ts.01.org
> To unsubscribe send an email to lkp-leave@...ts.01.org
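
As the lkp-tests README section linked in the thread describes, the
machine-specific partition lines in the robot's job.yaml need to be replaced
with partitions that actually exist on the reproducing machine. A minimal
sketch of such an override, reusing the hdd_partitions/ssd_partitions keys
quoted above and assuming spare partitions /dev/sdb1 and /dev/sdc1 (the
device names are hypothetical placeholders, not taken from this thread):

	# job.yaml -- replace the robot's machine-specific device paths
	# with partitions present on your own test box (hypothetical names)
	hdd_partitions: "/dev/sdb1"
	ssd_partitions: "/dev/sdc1"

With that edit made, the robot's original steps apply unchanged:
bin/lkp install job.yaml, then bin/lkp run job.yaml.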
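For readers comparing against the regression report, the two bogo ops/s
columns in the output above are the bogo ops count divided by the respective
run times:

	8216 / 100.10            ~= 82.08  bogo ops/s (real time)
	8216 / (368.02 + 230.89) ~= 13.72  bogo ops/s (usr+sys time)

The stress-ng.tmpfs.ops_per_sec metric named in the subject line is
presumably derived from one of these rates.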