Message-ID: <e690b3956c16045270b990e50bbf7e9d5352fd4b.camel@intel.com>
Date: Fri, 3 May 2019 21:48:48 +0000
From: "Verma, Vishal L" <vishal.l.verma@...el.com>
To: "pasha.tatashin@...een.com" <pasha.tatashin@...een.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"jmorris@...ei.org" <jmorris@...ei.org>,
"sashal@...nel.org" <sashal@...nel.org>, "bp@...e.de" <bp@...e.de>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"david@...hat.com" <david@...hat.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"tiwai@...e.de" <tiwai@...e.de>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
"jglisse@...hat.com" <jglisse@...hat.com>,
"zwisler@...nel.org" <zwisler@...nel.org>,
"mhocko@...e.com" <mhocko@...e.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"Busch, Keith" <keith.busch@...el.com>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"Huang, Ying" <ying.huang@...el.com>,
"Wu, Fengguang" <fengguang.wu@...el.com>,
"baiyaowei@...s.chinamobile.com" <baiyaowei@...s.chinamobile.com>
Subject: Re: [v5 0/3] "Hotremove" persistent memory
On Thu, 2019-05-02 at 18:36 -0400, Pavel Tatashin wrote:
> > Yes, here is the qemu config:
> >
> > qemu-system-x86_64
> > -machine accel=kvm
> > -machine pc-i440fx-2.6,accel=kvm,usb=off,vmport=off,dump-guest-core=off,nvdimm
> > -cpu Haswell-noTSX
> > -m 12G,slots=3,maxmem=44G
> > -realtime mlock=off
> > -smp 8,sockets=2,cores=4,threads=1
> > -numa node,nodeid=0,cpus=0-3,mem=6G
> > -numa node,nodeid=1,cpus=4-7,mem=6G
> > -numa node,nodeid=2
> > -numa node,nodeid=3
> > -drive file=/virt/fedora-test.qcow2,format=qcow2,if=none,id=drive-virtio-disk1
> > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk1,id=virtio-disk1,bootindex=1
> > -object memory-backend-file,id=mem1,share,mem-path=/virt/nvdimm1,size=16G,align=128M
> > -device nvdimm,memdev=mem1,id=nv1,label-size=2M,node=2
> > -object memory-backend-file,id=mem2,share,mem-path=/virt/nvdimm2,size=16G,align=128M
> > -device nvdimm,memdev=mem2,id=nv2,label-size=2M,node=3
> > -serial stdio
> > -display none
> >
> > For the command list - I'm using WIP patches to ndctl/daxctl to add the
> > command I mentioned earlier. Using this command, I can reproduce the
> > lockdep issue. I thought I should be able to reproduce the issue by
> > onlining/offlining through sysfs directly too - something like:
> >
> > node="$(cat /sys/bus/dax/devices/dax0.0/target_node)"
> > for mem in /sys/devices/system/node/node"$node"/memory*; do
> >         echo "offline" > $mem/state
> > done
> >
> > But with that I can't reproduce the problem.
> >
> > I'll try to dig a bit deeper into what might be happening; the daxctl
> > modifications simply amount to doing the same thing as above in C, so
> > I'm not immediately sure where the difference comes from.
> >
> > If you're interested, I can post the ndctl patches - maybe as an RFC -
> > to test with.
>
> I could apply the patches and test with them. Also, could you please
> send your kernel config?
>
Hi Pavel,

I've CC'd you on the patches mentioned above, and also pushed them to a
'kmem-pending' branch on github:

https://github.com/pmem/ndctl/tree/kmem-pending

After building ndctl from the above, you will want to run:

# daxctl reconfigure-device --mode=system-ram dax0.0
(this also onlines the resulting memory sections)

# daxctl reconfigure-device --mode=devdax --attempt-offline dax0.0
(this triggers the lockdep warnings)
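
For reference, the daxctl side of --attempt-offline is essentially the
sysfs loop quoted above, done in C. A rough sketch of the idea follows
(the function name is made up, error reporting is glossed over, and
this is not the actual daxctl code):

#include <glob.h>
#include <limits.h>
#include <stdio.h>

/* hypothetical helper: offline every memory section of a NUMA node */
static int offline_node_memory(int node)
{
	char path[PATH_MAX];
	glob_t g;
	size_t i;

	/* match every memory section belonging to this node */
	snprintf(path, sizeof(path),
		 "/sys/devices/system/node/node%d/memory*", node);
	if (glob(path, 0, NULL, &g))
		return -1;

	for (i = 0; i < g.gl_pathc; i++) {
		char state[PATH_MAX + 16];
		FILE *f;

		snprintf(state, sizeof(state), "%s/state", g.gl_pathv[i]);
		f = fopen(state, "w");
		if (!f)
			continue;
		/* same as: echo "offline" > $mem/state */
		fprintf(f, "offline\n");
		fclose(f);
	}
	globfree(&g);
	return 0;
}
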
I've attached the kernel config here too (gzipped).

Thanks,
-Vishal