Date:   Sun, 27 Nov 2016 22:19:10 +0900
From:   Sergey Senozhatsky <sergey.senozhatsky@...il.com>
To:     Minchan Kim <minchan@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org,
        Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        Takashi Iwai <tiwai@...e.de>,
        Hyeoncheol Lee <cheol.lee@....com>, yjay.kim@....com,
        Sangseok Lee <sangseok.lee@....com>,
        Hugh Dickins <hughd@...gle.com>, linux-mm@...ck.org,
        "Darrick J . Wong" <darrick.wong@...cle.com>,
        stable@...r.kernel.org,
        Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject: Re: [PATCH v3 1/3] mm: support anonymous stable page

Hi,

On (11/25/16 17:35), Minchan Kim wrote:
[..]
> Unfortunately, zram has used the per-cpu stream feature since v4.7.
> It aims to increase the cache hit ratio of the scratch buffer used
> for compression. The downside of that approach is that zram has to
> allocate memory for the compressed page in per-cpu context, which
> requires a strict gfp flag and can therefore fail. If it does, zram
> retries the allocation outside of per-cpu context, where it may get
> memory this time, compresses the data again, and copies it into that
> memory.
> 
> In this scenario, zram assumes the data never changes, but that is
> not true without stable page support. If the data changes under us,
> zram can cause a buffer overrun, because the second compression size
> can be bigger than the one we got in the previous trial, and blindly
> copying the bigger object into the smaller buffer is a buffer overrun.
> The overrun breaks zsmalloc's free object chaining, so the system
> crashes as shown above.

very interesting find! I didn't see this coming.

> Unfortunately, reuse_swap_page should be atomic, so we cannot wait on
> writeback there; the approach in this patch is to simply return false if
> we find that a stable page is needed.  Although it increases the memory
> footprint temporarily, this happens rarely and the pages should be easily
> reclaimed when it does.  It should also be better than waiting for IO
> completion, which is on the critical path for application latency.

wondering - how many pages can it hold? we are in low memory, which is why
zsmalloc failed on the fast path, so how likely is this to worsen memory
pressure? just asking. with async zram, the window between zram_rw_page()
and the actual write of a page is even bigger, isn't it?

we *probably* and *maybe* can try to handle it in zram:

-- store the previous clen before re-compression
-- check if the new clen > saved clen; if it is, we can't reuse the
   previously allocated handle and need to allocate a new one. if it's
   less than or equal to the saved one, store the object there (wasting
   some space, yes, but we are in low mem).

-- we maybe can also try harder in zsmalloc: once we detect that
   zsmalloc has failed, we can declare an emergency and store objects
   of size X in higher size classes (assuming there is a bigger size
   class available with an allocated and unused object).

	-ss
