Date:   Thu, 25 Jul 2019 15:14:15 +0800
From:   "Huang\, Ying" <ying.huang@...el.com>
To:     Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Matthew Wilcox <willy@...radead.org>
Cc:     huang ying <huang.ying.caritas@...il.com>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        <linux-mm@...ck.org>
Subject: Re: kernel BUG at mm/swap_state.c:170!

Mikhail Gavrilov <mikhail.v.gavrilov@...il.com> writes:

> On Tue, 23 Jul 2019 at 10:08, Huang, Ying <ying.huang@...el.com> wrote:
>>
>> Thanks!  I have found another (easier) way to reproduce the panic.
>> Could you try the below patch on top of v5.2-rc2?  It can fix the panic
>> for me.
>>
>
> Thanks! Amazing work! The patch fixes the issue completely. The system
> ran under high load for 16 hours without failures.

Thanks a lot for your help!

Hi, Matthew and Kirill,

I think we can fold this fix patch into your original patch and try
again.

> But it still seems to me that the page cache is crowded out too
> aggressively under memory pressure. Besides the fast SSD that holds
> the swap partition, the system also has a slow HDD, which starts to
> thrash continuously whenever swap is in use. It would seem better to
> push some RAM out to swap on the fast SSD than to leave the slow HDD
> without a cache.
>
> https://imgur.com/a/e8TIkBa
>
> But I am afraid it would be difficult to implement an algorithm that
> compares the waiting time for file I/O against the waiting time for
> paging and keeps in memory the parts whose waiting time is higher.
> It would be most useful on systems with several drives whose access
> speeds vary greatly. By waiting time I mean the read/write latency
> of the storage multiplied by the number of hits. That way we would
> keep in memory not only the most popular parts of memory/disk, but
> also the parts that are most costly to read or write.

Yes.  This is a valid problem.  I remember Johannes had a solution for
this long ago, but I don't know why he gave up on it.  Some information
can be found at the following URL.

https://lwn.net/Articles/690079/
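
For illustration only, here is a minimal user-space sketch of the cost
metric you describe (device wait time multiplied by hit count, used to
rank pages for eviction).  All structure and function names below are
hypothetical, not existing kernel interfaces:

#include <stdio.h>
#include <stdint.h>

/*
 * Hypothetical per-page statistics; nothing like this exists in the
 * kernel.  It only illustrates the "wait time * hits" metric.
 */
struct page_cost {
	uint64_t avg_io_wait_ns;  /* measured latency of the backing device */
	uint64_t hit_count;       /* accesses since the last reclaim scan */
};

/* Expected cost of evicting the page and reading it back in later. */
static inline uint64_t eviction_cost(const struct page_cost *pc)
{
	return pc->avg_io_wait_ns * pc->hit_count;
}

/*
 * Reclaim would then prefer the page with the lower cost: a rarely
 * used page on a fast SSD is evicted before a hot page backed by a
 * slow HDD.
 */
static const struct page_cost *pick_victim(const struct page_cost *a,
					   const struct page_cost *b)
{
	return eviction_cost(a) <= eviction_cost(b) ? a : b;
}

int main(void)
{
	/* Cold page on a fast SSD vs. hot page on a slow HDD. */
	struct page_cost ssd_page = { .avg_io_wait_ns = 100000, .hit_count = 2 };
	struct page_cost hdd_page = { .avg_io_wait_ns = 10000000, .hit_count = 5 };

	printf("evict SSD page first: %s\n",
	       pick_victim(&ssd_page, &hdd_page) == &ssd_page ? "yes" : "no");
	return 0;
}

A real implementation would of course need per-device latency
accounting and integration with the LRU lists, which is where the
difficulty lies.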

Best Regards,
Huang, Ying

> --
> Best Regards,
> Mike Gavrilov.
