lists.openwall.net - Open Source and information security mailing list archives
Date: Mon, 26 Feb 2024 15:58:02 +0800
From: Su Yue <l@...enly.org>
To: Song Liu <song@...nel.org>
Cc: Benjamin Marzinski <bmarzins@...hat.com>, Yu Kuai
 <yukuai1@...weicloud.com>, mpatocka@...hat.com, heinzm@...hat.com,
 xni@...hat.com, blazej.kucman@...ux.intel.com, agk@...hat.com,
 snitzer@...nel.org, dm-devel@...ts.linux.dev, jbrassow@....redhat.com,
 neilb@...e.de, shli@...com, akpm@...l.org, linux-kernel@...r.kernel.org,
 linux-raid@...r.kernel.org, yi.zhang@...wei.com, yangerkun@...wei.com,
 "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v5 00/14] dm-raid/md/raid: fix v6.7 regressions


On Fri 09 Feb 2024 at 14:37, Song Liu <song@...nel.org> wrote:

> On Thu, Feb 8, 2024 at 3:17 PM Benjamin Marzinski 
> <bmarzins@...hat.com> wrote:
>>
> [...]
>> >
>> > I am not able to get reliable results from
>> > shell/lvconvert-repair-raid.sh either. For the 6.6.0 kernel, the
>> > test fails. On the 6.8-rc1 kernel, the test fails sometimes.
>> >
>> > Could you please share more information about your test setup?
>> > Specifically:
>> > 1. Which tree/branch/tag are you testing?
>> > 2. What's the .config used in the tests?
>> > 3. How do you run the test suite? One test at a time, or all of
>> >    them together?
>> > 4. How do you handle "test passes sometimes" cases?
>>
>> So, I have been able to recreate the case where
>> lvconvert-repair-raid.sh keeps failing. It happens when I run the
>> reproducer on a virtual machine made from a cloud image, instead of
>> one that I manually installed. I'm not sure why there is a
>> difference, but I can show you how I can reliably recreate the
>> errors I'm seeing.
>>
>>
>> Create a new Fedora 39 virtual machine with the following command
>> (I'm not sure if it is possible to reproduce this on a machine with
>> less memory and fewer CPUs, but I can try that if you need me to.
>> You probably also want to pick a faster Fedora mirror for the image
>> location):
>>
>> # virt-install --name repair-test --memory 8192 --vcpus 8 \
>>     --disk size=40 --graphics none --extra-args "console=ttyS0" \
>>     --osinfo detect=on,name=fedora-unknown \
>>     --location https://download.fedoraproject.org/pub/fedora/linux/releases/39/Server/x86_64/os/
>>
>
> virt-install doesn't work well on my daily dev server. I will try on
> a different machine.
>
>> Install to the whole virtual drive, using the default LVM
>> partitioning. Then ssh into the VM and run the following commands to
>> set up the lvm2-testsuite and the 6.6.0 kernel:
>>
> [...]
>
>>
>> Rerun the lvm2-testsuite with the same commands as before:
>>
>> # mount -o remount,dev /tmp
>
> This mount trick helped me run tests without a full image (use
> CONFIG_9P_FS to reuse host file systems instead). Thanks!
>
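[Archive editor's note: for anyone wanting to reproduce the CONFIG_9P_FS
setup Song mentions, sharing the host source tree into a qemu guest over
virtio-9p typically looks like the sketch below. The mount tag, paths,
and qemu flags here are illustrative examples, not taken from the
thread.]

```shell
# Host side: export a directory into the guest over virtio-9p instead
# of building it into a disk image (flags shown as comments because
# the full qemu command line is machine-specific):
#
#   qemu-system-x86_64 ... \
#     -virtfs local,path=/home/user/lvm2,mount_tag=hostshare,security_model=mapped-xattr
#
# Guest side: requires a kernel built with CONFIG_9P_FS and
# CONFIG_NET_9P_VIRTIO. Mount the share by its tag:
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/lvm2
```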
>> # cd ~/lvm2
>> # make check T=lvconvert-repair-raid.sh
>>
>> This fails about 20% of the time, usually at either line 146 or 164.
>> You can check by running the following command when the test fails.
>
> However, I am seeing lvconvert-repair-raid.sh pass all the time with
> both the 6.6 kernel and the 6.8+v5 patchset. My host system is
> CentOS 8.
>

shell/lvconvert-repair-raid.sh fails for SLES 15SP5 + upstream lvm2 +
the v6.8+v5 patchset, but not with the v6.6 kernel.

--
Su

> I guess we will have to run more tests.
>
> DM folks, please also review the set. We won't be able to ship the
> dm changes without your thorough reviews.
>
> Thanks,
> Song
