Message-ID: <CAFgQCTsZ_ZYJf7BGQZwVtthRzySrvaA+tFjmB3TJDC-=idGXjg@mail.gmail.com>
Date: Wed, 20 Feb 2019 20:51:53 +0800
From: Pingfan Liu <kernelfans@...il.com>
To: Dave Young <dyoung@...hat.com>
Cc: Borislav Petkov <bp@...en8.de>, Baoquan He <bhe@...hat.com>,
Jerry Hoemann <jerry.hoemann@....com>, x86@...nel.org,
Randy Dunlap <rdunlap@...radead.org>,
kexec@...ts.infradead.org, LKML <linux-kernel@...r.kernel.org>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Yinghai Lu <yinghai@...nel.org>, vgoyal@...hat.com,
iommu@...ts.linux-foundation.org, konrad.wilk@...cle.com,
Joerg Roedel <jroedel@...e.de>
Subject: Re: [PATCHv7] x86/kdump: bugfix, make the behavior of crashkernel=X
consistent with kaslr
On Wed, Feb 20, 2019 at 5:41 PM Dave Young <dyoung@...hat.com> wrote:
>
> On 02/20/19 at 09:32am, Borislav Petkov wrote:
> > On Mon, Feb 18, 2019 at 09:48:20AM +0800, Dave Young wrote:
> > > It is ideal if kernel can do it automatically, but I'm not sure if
> > > kernel can predict the swiotlb reserved size automatically.
> >
> > Do you see how even more absurd this gets?
> >
> > If the kernel cannot know the swiotlb reserved size automatically, how
> > is the normal user even supposed to know?!
> >
I think swiotlb is a bounce buffer: enlarging it can give better
performance, but the default size should be enough for the platform to
work. When reserving low memory for the crash kernel, though, things
are different. The low-memory reservation is roughly:
swiotlb_size_or_default() + DMA32 memory needed by devices. The second
term on the right-hand side varies with machine type and dynamic
workload, so the kernel cannot easily predict it.
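
As a rough sketch of that equation (this only reflects my reading of
reserve_crashkernel_low() in arch/x86/kernel/setup.c; the helper name
below is made up purely for illustration):

	/*
	 * Sketch only: how a default ",low" size could be derived.
	 * dma32_slack_for_devices() is a hypothetical helper standing
	 * in for the part the kernel cannot predict.
	 */
	static unsigned long long default_crashkernel_low_size(void)
	{
		unsigned long long low_size;

		/* bounce buffer the kdump kernel sets up below 4G */
		low_size = swiotlb_size_or_default();

		/*
		 * plus room for drivers that must allocate from
		 * ZONE_DMA32 in the kdump kernel; this varies with
		 * machine type and workload, so any fixed value is
		 * only a guess.
		 */
		low_size += dma32_slack_for_devices();

		return low_size;
	}
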
> > I see swiotlb_size_or_default() so we have a sane default which we fall
> > back to. Now where's the problem with that?
>
> Good question, I expect some answer from people who know more about the
> background. It would be good to have some actual test results, Pingfan
> is trying to do some tests.
>
I am not sure I follow the idea, and I do not think a single test
result can tell us much (we would need various types of machines to
reach a final conclusion). I did a quick test on "HPE ProLiant DL380
Gen10/ProLiant DL380 Gen10": the command line "crashkernel=180M,high
crashkernel=64M,low" works for the 2nd kernel, although it complained
about a memory shortage:
[ 7.655591] fbcon: mgadrmfb (fb0) is primary device
[ 7.655639] Console: switching to colour frame buffer device 128x48
[ 7.660609] systemd-udevd: page allocation failure: order:0, mode:0x280d4
[ 7.660612] CPU: 0 PID: 180 Comm: systemd-udevd Not tainted 3.10.0-957.el7.x86_64 #1
[ 7.660612] Hardware name: HPE ProLiant DL380 Gen10/ProLiant DL380 Gen10, BIOS U30 06/20/2018
[ 7.660612] Call Trace:
[ 7.660621] [<ffffffff81761dc1>] dump_stack+0x19/0x1b
[ 7.660625] [<ffffffff811bc830>] warn_alloc_failed+0x110/0x180
[ 7.660628] [<ffffffff8175d3ce>] __alloc_pages_slowpath+0x6b6/0x724
[ 7.660631] [<ffffffff811c0e95>] __alloc_pages_nodemask+0x405/0x420
[ 7.660633] [<ffffffff8120dcf8>] alloc_pages_current+0x98/0x110
[ 7.660638] [<ffffffffc00c8622>] ttm_pool_populate+0x3d2/0x4b0 [ttm]
[ 7.660641] [<ffffffffc00bf1cd>] ttm_tt_populate+0x7d/0x90 [ttm]
[ 7.660644] [<ffffffffc00c3c74>] ttm_bo_kmap+0x124/0x240 [ttm]
[ 7.660648] [<ffffffff810cecbf>] ? __wake_up_sync_key+0x4f/0x60
[ 7.660650] [<ffffffffc012677e>] mga_dirty_update+0x25e/0x310 [mgag200]
[ 7.660653] [<ffffffffc012685f>] mga_imageblit+0x2f/0x40 [mgag200]
[ 7.660657] [<ffffffff813f97ca>] soft_cursor+0x1ba/0x260
[ 7.660659] [<ffffffff813f8f53>] bit_cursor+0x663/0x6a0
[ 7.660662] [<ffffffff81098739>] ? console_trylock+0x19/0x70
[ 7.660664] [<ffffffff813f514d>] fbcon_cursor+0x13d/0x1c0
[ 7.660665] [<ffffffff813f88f0>] ? bit_clear+0x120/0x120
[ 7.660668] [<ffffffff8146af2e>] hide_cursor+0x2e/0xa0
[ 7.660669] [<ffffffff8146d4e8>] redraw_screen+0x188/0x270
[ 7.660671] [<ffffffff8146e086>] do_bind_con_driver+0x316/0x340
[ 7.660672] [<ffffffff8146e5e9>] do_take_over_console+0x49/0x60
[ 7.660674] [<ffffffff813f24c3>] do_fbcon_takeover+0x63/0xd0
[ 7.660675] [<ffffffff813f808d>] fbcon_event_notify+0x61d/0x730
[ 7.660678] [<ffffffff8176fb0f>] notifier_call_chain+0x4f/0x70
[ 7.660681] [<ffffffff810c7f6d>] __blocking_notifier_call_chain+0x4d/0x70
[ 7.660683] [<ffffffff810c7fa6>] blocking_notifier_call_chain+0x16/0x20
[ 7.660684] [<ffffffff813e8b9b>] fb_notifier_call_chain+0x1b/0x20
[ 7.660686] [<ffffffff813e9e46>] register_framebuffer+0x1f6/0x340
[ 7.660690] [<ffffffffc01027e2>] __drm_fb_helper_initial_config_and_unlock+0x252/0x3e0 [drm_kms_helper]
[ 7.660694] [<ffffffffc01029ae>] drm_fb_helper_initial_config+0x3e/0x50 [drm_kms_helper]
[ 7.660697] [<ffffffffc01269d3>] mgag200_fbdev_init+0xe3/0x100 [mgag200]
[ 7.660699] [<ffffffffc01254f4>] mgag200_modeset_init+0x154/0x1d0 [mgag200]
[ 7.660701] [<ffffffffc012157d>] mgag200_driver_load+0x41d/0x5b0 [mgag200]
[ 7.660708] [<ffffffffc005ba4f>] drm_dev_register+0x15f/0x1f0 [drm]
[ 7.660711] [<ffffffff813c3518>] ? pci_enable_device_flags+0xe8/0x140
[ 7.660718] [<ffffffffc005d0da>] drm_get_pci_dev+0x8a/0x1a0 [drm]
[ 7.660720] [<ffffffffc012626b>] mga_pci_probe+0x9b/0xc0 [mgag200]
[ 7.660722] [<ffffffff813c4aca>] local_pci_probe+0x4a/0xb0
[ 7.660723] [<ffffffff813c6209>] pci_device_probe+0x109/0x160
[ 7.660726] [<ffffffff814a8285>] driver_probe_device+0xc5/0x3e0
[ 7.660727] [<ffffffff814a8683>] __driver_attach+0x93/0xa0
[ 7.660728] [<ffffffff814a85f0>] ? __device_attach+0x50/0x50
[ 7.660730] [<ffffffff814a5e25>] bus_for_each_dev+0x75/0xc0
[ 7.660731] [<ffffffff814a7bfe>] driver_attach+0x1e/0x20
[ 7.660733] [<ffffffff814a76a0>] bus_add_driver+0x200/0x2d0
[ 7.660734] [<ffffffff814a8d14>] driver_register+0x64/0xf0
[ 7.660735] [<ffffffff813c5a45>] __pci_register_driver+0xa5/0xc0
[ 7.660737] [<ffffffffc012d000>] ? 0xffffffffc012cfff
[ 7.660739] [<ffffffffc012d039>] mgag200_init+0x39/0x1000 [mgag200]
[ 7.660742] [<ffffffff8100210a>] do_one_initcall+0xba/0x240
[ 7.660745] [<ffffffff81118f8c>] load_module+0x272c/0x2bc0
[ 7.660748] [<ffffffff813a3030>] ? ddebug_proc_write+0x100/0x100
[ 7.660750] [<ffffffff8111950f>] SyS_init_module+0xef/0x140
[ 7.660752] [<ffffffff81774ddb>] system_call_fastpath+0x22/0x27
[ 7.660753] Mem-Info:
[ 7.660756] active_anon:3364 inactive_anon:6661 isolated_anon:0
[ 7.660756] active_file:0 inactive_file:0 isolated_file:0
[ 7.660756] unevictable:0 dirty:0 writeback:0 unstable:0
[ 7.660756] slab_reclaimable:1492 slab_unreclaimable:3116
[ 7.660756] mapped:1223 shmem:8449 pagetables:179 bounce:0
[ 7.660756] free:20626 free_pcp:0 free_cma:0
[ 7.660761] Node 0 DMA free:0kB min:4kB low:4kB high:4kB
active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB
unevictable:0kB isolated(anon):0kB isolated(file):0kB present:564kB
managed:448kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB
slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB
pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB
free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[ 7.660762] lowmem_reserve[]: 0 0 152 152
[ 7.660766] Node 0 DMA32 free:0kB min:0kB low:0kB high:0kB
active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB
unevictable:0kB isolated(anon):0kB isolated(file):0kB present:65536kB
managed:0kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB
slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB
pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB
free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[ 7.660767] lowmem_reserve[]: 0 0 152 152
[ 7.660771] Node 0 Normal free:82504kB min:1572kB low:1964kB
high:2356kB active_anon:13456kB inactive_anon:26644kB active_file:0kB
inactive_file:0kB unevictable:0kB isolated(anon):0kB
isolated(file):0kB present:183740kB managed:158716kB mlocked:0kB
dirty:0kB writeback:0kB mapped:4892kB shmem:33796kB
slab_reclaimable:5968kB slab_unreclaimable:12464kB kernel_stack:784kB
pagetables:716kB unstable:0kB bounce:0kB free_pcp:
[ 8.722693] Microsemi PQI Driver (v1.1.4-115)
> Previously Joerg posted below patch, maybe he has some idea. Joerg?
>
> commit 94fb9334182284e8e7e4bcb9125c25dc33af19d4
> Author: Joerg Roedel <jroedel@...e.de>
> Date: Wed Jun 10 17:49:42 2015 +0200
>
> x86/crash: Allocate enough low memory when crashkernel=high
>
> Thanks
> Dave
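
If I read that commit right, it bumps the default low reservation that
is used when only crashkernel=X,high is given. Roughly (a sketch from
memory, the exact constants may differ):

	/*
	 * Default ",low" size after that commit, as far as I recall:
	 * swiotlb plus 8M of slack, but at least 256M.
	 */
	low_size = max(swiotlb_size_or_default() + (8UL << 20),
		       256UL << 20);

That still leaves the device-dependent DMA32 part as a guess, which is
the case we are discussing here.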