lists.openwall.net mailing list archives
Date:   Tue, 16 May 2017 14:45:33 +0900
From:   Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:     Minchan Kim <minchan@...nel.org>
Cc:     Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>,
        Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        kernel-team <kernel-team@....com>
Subject: Re: [PATCH 2/2] zram: do not count duplicated pages as compressed

On (05/16/17 14:26), Minchan Kim wrote:
[..]
> > +       /*
> > +        * Free memory associated with this sector
> > +        * before overwriting unused sectors.
> > +        */
> > +       zram_slot_lock(zram, index);
> > +       zram_free_page(zram, index);
> 
> Hmm, zram_free should happen only if the write is done successfully.
> Otherwise, we lose the valid data although write IO was fail.

but would this be correct? the data is not valid: we failed to store the
new copy. yet we assure the application that the read()/swapin/etc.,
depending on the usage scenario, was successful (even though the data is
not what the application expects to see); the application then tries to use
the data from that page and probably crashes (for example, if the page
contained hash tables with pointers that are no longer valid, etc.).

I'm not optimistic about stale data reads; they will basically look like
data corruption to the application.

> > +
> >         if (zram_same_page_write(zram, index, page))
> > -               return 0;
> > +               goto out_unlock;
> >  
> >         entry = zram_dedup_find(zram, page, &checksum);
> >         if (entry) {
> >                 comp_len = entry->len;
> > +               zram_set_flag(zram, index, ZRAM_DUP);
>
> In case of hitting dedup, we shouldn't increase compr_data_size.

no, we should not. you are right. my "... patch" is incomplete and
wrong. please don't pay too much attention to it.


> If we fix above two problems, do you think it's still cleaner?
> (I don't mean to be reluctant with your suggestion. Just a
>  real question to know your thought.:)

do you mean code duplication and stale data read?

I'd probably prefer to address stale data reads separately, but it seems
that the stale reads fix will re-do parts of your 0002 patch and, at least
potentially, reduce code duplication.

so we can go with your 0002, and then the stale reads fix will try
to reduce code duplication (unless we want to have 4 places doing
the same thing :) )

	-ss
