Message-ID: <d29d7816-a3e5-4f34-bb0c-dd427931efb4@redhat.com>
Date: Fri, 22 Nov 2024 10:30:29 +0100
From: David Hildenbrand <david@...hat.com>
To: Baoquan He <bhe@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-s390@...r.kernel.org, virtualization@...ts.linux.dev,
kvm@...r.kernel.org, linux-fsdevel@...r.kernel.org,
kexec@...ts.infradead.org, Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>, Alexander Gordeev
<agordeev@...ux.ibm.com>, Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>, "Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Eugenio Pérez <eperezma@...hat.com>,
Vivek Goyal <vgoyal@...hat.com>, Dave Young <dyoung@...hat.com>,
Thomas Huth <thuth@...hat.com>, Cornelia Huck <cohuck@...hat.com>,
Janosch Frank <frankja@...ux.ibm.com>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>, Eric Farman
<farman@...ux.ibm.com>, Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v1 03/11] fs/proc/vmcore: disallow vmcore modifications
after the vmcore was opened
On 22.11.24 10:16, Baoquan He wrote:
> On 10/25/24 at 05:11pm, David Hildenbrand wrote:
> ......snip...
>> @@ -1482,6 +1470,10 @@ int vmcore_add_device_dump(struct vmcoredd_data *data)
>> return -EINVAL;
>> }
>>
>> + /* We'll recheck under lock later. */
>> + if (data_race(vmcore_opened))
>> + return -EBUSY;
>
> Hi,
> As I commented to patch 7, if vmcore is opened and closed after
> checking, do we need to give up any chance to add device dumping
> as below?
>
> fd = open(/proc/vmcore);
> ...do checking;
> close(fd);
>
> quit any device dump adding;
>
> run makedumpfile on s390;
> ->fd = open(/proc/vmcore);
> -> try to dump;
> ->close(fd);
The only reasonable case where this could happen (with virtio_mem) would
be when you hotplug a virtio-mem device into a VM that is currently in
the kdump kernel. However, in this case, the device would not provide
any memory we want to dump:
(1) The memory was not available to the 1st (crashed) kernel, because
the device got hotplugged later.
(2) Hotplugged virtio-mem devices show up with "no plugged memory",
meaning there wouldn't even be anything to dump.
Drivers will be loaded (as part of the kernel or as part of the initrd)
before any kdump action is happening. Similarly, just imagine your NIC
driver not being loaded when you start dumping to a network share ...
This should similarly apply to vmcoredd providers.
There is another concern I had at some point: changing the effective
/proc/vmcore size after someone already opened it, while they might
assume the size will stay unmodified (IOW, the file was completely
static before vmcoredd showed up).
So unless there is a real use case that requires tracking whether the
file has been closed again, so that the vmcore may be modified
afterwards, we should keep it simple.
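
FWIW, the "racy pre-check, authoritative recheck under the lock" pattern
the hunk above uses can be sketched in user-space C like this (a rough
sketch only: vmcore_opened and the mutex here merely stand in for the
kernel's vmcore_opened flag and vmcore_mutex, and the function name is
made up for illustration):

```c
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

/* Stand-ins for the kernel's vmcore_opened flag and vmcore_mutex. */
static bool vmcore_opened;
static pthread_mutex_t vmcore_mutex = PTHREAD_MUTEX_INITIALIZER;

static int vmcore_add_device_dump_sketch(void)
{
	/* Lockless fast path: a racy read is acceptable here because
	 * we recheck under the lock before modifying anything. */
	if (vmcore_opened)
		return -EBUSY;

	pthread_mutex_lock(&vmcore_mutex);
	if (vmcore_opened) {	/* authoritative recheck under the lock */
		pthread_mutex_unlock(&vmcore_mutex);
		return -EBUSY;
	}
	/* ... safe to add the device dump to the vmcore list here ... */
	pthread_mutex_unlock(&vmcore_mutex);
	return 0;
}
```

The early return merely avoids taking the lock in the common case; only
the check under the lock decides whether the modification proceeds.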
I am not aware of any such cases, and my experiments with virtio_mem
showed that the driver gets loaded extremely early from the initrd,
compared to when we actually start messing with /proc/vmcore from user
space.
Thanks!
--
Cheers,
David / dhildenb