Message-ID: <ZWgJIAYwfVvA+r8h@MiWiFi-R3L-srv>
Date:   Thu, 30 Nov 2023 12:01:36 +0800
From:   Baoquan He <bhe@...hat.com>
To:     Jiri Bohac <jbohac@...e.cz>
Cc:     Michal Hocko <mhocko@...e.com>, Pingfan Liu <piliu@...hat.com>,
        Tao Liu <ltao@...hat.com>, Vivek Goyal <vgoyal@...hat.com>,
        Dave Young <dyoung@...hat.com>, kexec@...ts.infradead.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] kdump: crashkernel reservation from CMA

On 11/29/23 at 11:51am, Jiri Bohac wrote:
> Hi Baoquan,
> 
> thanks for your interest...
> 
> On Wed, Nov 29, 2023 at 03:57:59PM +0800, Baoquan He wrote:
> > On 11/28/23 at 10:08am, Michal Hocko wrote:
> > > On Tue 28-11-23 10:11:31, Baoquan He wrote:
> > > > On 11/28/23 at 09:12am, Tao Liu wrote:
> > > [...]
> > > > Thanks for the effort to bring this up, Jiri.
> > > > 
> > > > I am wondering how you will use this crashkernel=,cma parameter, i.e.
> > > > what the intended scenario for crashkernel=,cma is. I'm asking because
> > > > I don't know how SUSE deploys kdump in its distros. Are unneeded
> > > > drivers filtered out of the kdump kernel's initrd? If so, it could hit
> > > > the in-flight DMA issue, e.g. a NIC has a DMA buffer in the CMA area
> > > > but is not reset during kdump boot because the NIC driver is not
> > > > loaded to initialize it. Not sure if this is 100% certain, but it's
> > > > possible in theory?
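
As I read the series, the command line would combine a fixed reservation
with a CMA-backed part that the first kernel can keep using for movable
allocations until a crash; the exact syntax and semantics are defined by
the patches, so this is only a hedged example:

	crashkernel=128M crashkernel=512M,cma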
> 
> Yes, we also only add the necessary drivers to the kdump initrd (using
> dracut --hostonly).
> 
> The plan was to use this feature by default only on systems where
> we are reasonably sure it is safe and let the user experiment
> with it when we're not sure.
> 
> I grepped a list of all calls to pin_user_pages*. Of the 55,
> about half use FOLL_LONGTERM, so those pages should be migrated
> away from the CMA area before pinning (see the sketch after these
> lists). Among the rest there are four cases that don't use the
> pages to set up DMA:
> 	mm/process_vm_access.c:		pinned_pages = pin_user_pages_remote(mm, pa, pinned_pages,
> 	net/rds/info.c:	ret = pin_user_pages_fast(start, nr_pages, FOLL_WRITE, pages);
> 	drivers/vhost/vhost.c:	r = pin_user_pages_fast(log, 1, FOLL_WRITE, &page);
> 	kernel/trace/trace_events_user.c:	ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT,
> 
> The remaining cases are potentially problematic:
> 	drivers/gpu/drm/i915/gem/i915_gem_userptr.c:		ret = pin_user_pages_fast(obj->userptr.ptr + pinned * PAGE_SIZE,
> 	drivers/iommu/iommufd/iova_bitmap.c:	ret = pin_user_pages_fast((unsigned long)addr, npages,
> 	drivers/iommu/iommufd/pages.c:	rc = pin_user_pages_remote(
> 	drivers/media/pci/ivtv/ivtv-udma.c:	err = pin_user_pages_unlocked(user_dma.uaddr, user_dma.page_count,
> 	drivers/media/pci/ivtv/ivtv-yuv.c:		uv_pages = pin_user_pages_unlocked(uv_dma.uaddr,
> 	drivers/media/pci/ivtv/ivtv-yuv.c:	y_pages = pin_user_pages_unlocked(y_dma.uaddr,
> 	drivers/misc/genwqe/card_utils.c:	rc = pin_user_pages_fast(data & PAGE_MASK, /* page aligned addr */
> 	drivers/misc/xilinx_sdfec.c:	res = pin_user_pages_fast((unsigned long)src_ptr, nr_pages, 0, pages);
> 	drivers/platform/goldfish/goldfish_pipe.c:	ret = pin_user_pages_fast(first_page, requested_pages,
> 	drivers/rapidio/devices/rio_mport_cdev.c:		pinned = pin_user_pages_fast(
> 	drivers/sbus/char/oradax.c:	ret = pin_user_pages_fast((unsigned long)va, 1, FOLL_WRITE, p);
> 	drivers/scsi/st.c:	res = pin_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
> 	drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c:		actual_pages = pin_user_pages_fast((unsigned long)ubuf & PAGE_MASK, num_pages,
> 	drivers/tee/tee_shm.c:		rc = pin_user_pages_fast(start, num_pages, FOLL_WRITE,
> 	drivers/vfio/vfio_iommu_spapr_tce.c:	if (pin_user_pages_fast(tce & PAGE_MASK, 1,
> 	drivers/video/fbdev/pvr2fb.c:	ret = pin_user_pages_fast((unsigned long)buf, nr_pages, FOLL_WRITE, pages);
> 	drivers/xen/gntdev.c:	ret = pin_user_pages_fast(addr, 1, batch->writeable ? FOLL_WRITE : 0, &page);
> 	drivers/xen/privcmd.c:		page_count = pin_user_pages_fast(
> 	fs/orangefs/orangefs-bufmap.c:	ret = pin_user_pages_fast((unsigned long)user_desc->ptr,
> 	arch/x86/kvm/svm/sev.c:	npinned = pin_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
> 	drivers/fpga/dfl-afu-dma-region.c:	pinned = pin_user_pages_fast(region->user_addr, npages, FOLL_WRITE,
> 	lib/iov_iter.c:	res = pin_user_pages_fast(addr, maxpages, gup_flags, *pages);
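
To make the distinction above concrete, a minimal illustrative sketch,
not taken from any of the call sites listed, of why the FOLL_LONGTERM
callers are safe with respect to a ,cma reservation while short-term
pins are not:

	/* uaddr is a hypothetical user address mapped in current->mm */
	struct page *pages[1];
	int n;

	/*
	 * Variant 1, long-term pin: gup migrates the backing page out
	 * of CMA/ZONE_MOVABLE before pinning, so a DMA target set up
	 * from it can never overlap a crashkernel=,cma region.
	 */
	n = pin_user_pages_fast(uaddr, 1, FOLL_WRITE | FOLL_LONGTERM, pages);
	if (n > 0)
		unpin_user_pages(pages, n);

	/*
	 * Variant 2, short-term pin: no migration is guaranteed; the
	 * pinned page may sit inside the CMA area while a device DMAs
	 * into it, which is exactly the risky case for ,cma.
	 */
	n = pin_user_pages_fast(uaddr, 1, FOLL_WRITE, pages);
	if (n > 0)
		unpin_user_pages(pages, n);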
> 
> We can easily check whether any of these drivers (some of which we
> don't even ship/support) are loaded and then decide that the system
> is not safe for a CMA crashkernel. Maybe looking at the list more
> thoroughly will show that even some of the above calls are actually
> safe, e.g. because the DMA is set up for reading only.
> lib/iov_iter.c seems like it could be the real
> problem since it's used by the generic block layer...
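
A trivial sketch of such a check from userspace; the module names here
are only examples drawn from the list above, and the exact mapping from
call site to module name would need verifying:

	# refuse a ,cma reservation if a potentially unsafe driver is loaded
	for m in ivtv genwqe_card xilinx_sdfec st; do
		grep -qw "$m" /proc/modules && echo "unsafe for ,cma: $m"
	done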

Hmm, yeah. From my point of view, we need to make sure that reusing the
,cma area in the kdump kernel is safe without exception. Using it only on
systems we are 100% sure about, and letting people experiment with it
when we are not sure, does not seem safe. Most of the time users don't
even know how to judge whether the system they own is 100% safe or
whether its safety is uncertain. That's too hard.

> > > > crashkernel=,cma requires that no userspace data is dumped. From
> > > > our support engineers' feedback, customers never say they don't need
> > > > to dump userspace data. Assume a server with a huge database
> > > > deployed, where the database has kept collapsing recently and the
> > > > database provider claims it's not the database's fault; the OS needs
> > > > to prove its innocence. What will you do?
> > > 
> > > Don't use CMA-backed crash memory then? This is an optional feature.
> 
> Right. Our kdump does not dump userspace by default, and we would
> of course make sure ,cma is not used when the user wants to turn
> on userspace dumping.
> 
> > > Jiri will know better than me, but for us a proper crash memory
> > > configuration has become a real nut to crack. You do not want to
> > > reserve too much because it effectively cuts off usable memory, and
> > > we regularly ran into "not enough memory" when we tried to be savvy.
> > > The tighter you try to configure it, the easier it is to fail. Even
> > > worse, any in-kernel memory consumer can increase its memory demand
> > > and push the overall consumption off the cliff. So this is not an
> > > easy solution to maintain. CMA-backed crash memory can be much more
> > > generous while still remaining usable.
> > 
> > Hmm, Red Hat goes a different way. We have been trying to:
> > 1) customize the initrd for the kdump kernel specifically, e.g. exclude
> > unneeded devices' drivers to save memory;
> 
> ditto
> 
> > 2) monitor device and kernel memory usage in case they begin to consume
> > much more memory than before. We have CI test cases to watch this. We
> > once found a NIC eating up GB-level memory; that had to be investigated
> > and fixed.
> > With these efforts, our default crashkernel values satisfy most cases,
> > though surely not all. Only rare cases need to be handled manually by
> > increasing crashkernel.
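
For reference, such defaults are usually expressed with the range form of
crashkernel= (see Documentation/admin-guide/kdump/kdump.rst); the sizes
below are illustrative only, not our actual defaults:

	crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M

i.e. reserve 192M when total RAM falls between 1G and 4G, 256M between 4G
and 64G, and 512M above 64G.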
> 
> We get a lot of problems reported by partners testing kdump on
> their setups prior to release. But even if we tune the reserved
> size up, OOM is still the most common reason for kdump to fail
> when the product starts getting used in real life. It's been
> pretty frustrating for a long time.

I remember SUSE engineers once told me that you boot the kernel, estimate
the kdump kernel's memory usage, and then set crashkernel according to
that estimation. Is OOM still triggered even when that approach is taken?
Just curious, not questioning the benefit of using ,cma to save memory.

> 
> > Wondering how you will use this crashkernel=,cma syntax. On normal
> > machines and virt guests, not much memory is needed, usually 256M or a
> > little more is enough. On those high-end systems with hundreds of
> > gigabytes, or even terabytes, of memory, I don't think the memory saved
> > with crashkernel=,cma makes much difference.
> 
> I feel the exact opposite about VMs. Reserving hundreds of MB for
> the crash kernel on _every_ VM on a busy VM host wastes the most
> memory. VMs are often tuned to a well-defined task and can be set
> up with very little memory, so the ~256 MB can be a huge part of
> that. And while it's theoretically better to dump from the
> hypervisor, users still often prefer kdump because the hypervisor
> may not be under their control. Also, in a VM it should be much
> easier to be sure the machine is safe WRT the potential DMA
> corruption as it has fewer HW drivers. So I actually thought the
> CMA reservation could be most useful on VMs.

Hmm, we discussed this upstream a while back with David Hildenbrand, who
works in the virt team. The VM problem is much easier to solve if the
complaint is that the default crashkernel value is wasteful: the
shrinking interface exists for exactly that case. The crashkernel value
can't be enlarged after boot, but shrinking the existing crashkernel
memory works smoothly, and guests can adjust it from a script in a very
simple way.
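
Concretely, the shrinking knob is /sys/kernel/kexec_crash_size (see
Documentation/admin-guide/kdump/kdump.rst); a minimal example, with the
128M target value purely illustrative:

	# current crashkernel reservation, in bytes
	cat /sys/kernel/kexec_crash_size

	# shrink the reservation to 128M; growing it back is not possible
	echo 134217728 > /sys/kernel/kexec_crash_size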

Anyway, let's discuss and figure out any risks of ,cma. If in the end all
the worries and concerns prove unnecessary, then we get a great new
feature. But we can't afford the risk of the ,cma area being entangled
with the 1st kernel's ongoing activity. As we know, unlike a kexec
reboot, we only shut down the CPUs and interrupts; most devices are left
alive. And many of them may never be reset and reinitialized in the kdump
kernel if the relevant driver is not included.

Earlier, we saw several cases of in-flight DMA stomping on memory during
kexec reboot because some PCI devices didn't provide a shutdown() method.
It gave people a huge headache to figure out and fix. Similarly for
kdump, we absolutely don't want to see that happening with ,cma; it would
be a disaster for kdump no matter how much memory it saves, because you
don't know what happened or how to debug it until you suspect this and
turn it off.
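
For reference, the fix in those cases was to give the driver a shutdown()
method that quiesces DMA before kexec. A minimal illustrative sketch; the
mydev_* names are hypothetical, while the struct pci_driver hook is real:

	#include <linux/pci.h>

	static void mydev_shutdown(struct pci_dev *pdev)
	{
		/*
		 * Stop bus mastering and disable the device so no
		 * in-flight DMA can stomp on memory that the next
		 * (kexec'ed) kernel, or the kdump kernel, will use.
		 */
		pci_clear_master(pdev);
		pci_disable_device(pdev);
	}

	static struct pci_driver mydev_driver = {
		.name     = "mydev",
		/* probe/remove/id_table omitted for brevity */
		.shutdown = mydev_shutdown,
	};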
