Date:	Sun, 4 Mar 2012 12:14:00 +0300
From:	Sergei Trofimovich <slyich@...il.com>
To:	Glauber Costa <glommer@...allels.com>
Cc:	Jason Wang <jasowang@...hat.com>, <linux-kernel@...r.kernel.org>,
	<netdev@...r.kernel.org>, "David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCHv 2] tcp: properly initialize tcp memory limits part 2
 (fix nfs regression)

On Sat, 3 Mar 2012 20:27:17 -0300
Glauber Costa <glommer@...allels.com> wrote:

> On 03/03/2012 11:43 AM, Sergei Trofimovich wrote:
> > On Sat, 3 Mar 2012 11:16:41 -0300
> > Glauber Costa <glommer@...allels.com> wrote:
> >
> >> On 03/02/2012 02:50 PM, Sergei Trofimovich wrote:
> >>>>>> The change looks like a typo (division flipped to multiplication):
> >>>>>>> limit = nr_free_buffer_pages() / 8;
> >>>>>>> limit = nr_free_buffer_pages() << (PAGE_SHIFT - 10);
> >>>>>
> >>>>> Hi, thanks for the report. It's not a typo. It was previously:
> >>>>> sysctl_tcp_mem[1] << (PAGE_SHIFT - 7). Looks like we need to do the
> >>>>> limit check before shifting the value. Please try the following patch, thanks.
> >>>>
> >>>> Still does not help. I test it by checking the sha1sum of a large file over NFS
> >>>> (small files seem to work sometimes):
> >>>>
> >>>>       $ strace sha1sum /gentoo/distfiles/gcc-4.6.2.tar.bz2
> >>>>       ...
> >>>>       open("/gentoo/distfiles/gcc-4.6.2.tar.bz2", O_RDONLY
> >>>>       <HUNG>
> >>>> After a certain timeout dmesg gets odd spam:
> >>>> [  314.848094] nfs: server vmhost not responding, still trying
> >>>> [  314.848134] nfs: server vmhost not responding, still trying
> >>>> [  314.848145] nfs: server vmhost not responding, still trying
> >>>> [  314.957047] nfs: server vmhost not responding, still trying
> >>>> [  314.957066] nfs: server vmhost not responding, still trying
> >>>> [  314.957075] nfs: server vmhost not responding, still trying
> >>>> [  314.957085] nfs: server vmhost not responding, still trying
> >>>> [  314.957100] nfs: server vmhost not responding, still trying
> >>>> [  314.958023] nfs: server vmhost not responding, still trying
> >>>> [  314.958035] nfs: server vmhost not responding, still trying
> >>>> [  314.958044] nfs: server vmhost not responding, still trying
> >>>> [  314.958054] nfs: server vmhost not responding, still trying
> >>>>
> >>>> Looks like bogus messages. Might be related to mishandled timings
> >>>> somewhere else or a bug in the nfs code.
> >>>
> >>> And after 120 seconds the hung-task detector shows it might be an OOM issue,
> >>> likely caused by the patch, as it's a 2GB RAM + 4GB swap amd64 box
> >>> not running anything heavy:
> >>
> >> That is a bit weird.
> >>
> >> First, because with Jason's patch we should end up with the very same
> >> calculation, in the exact same order, as in older kernels.
> >> Second, because by shifting << 10, you should be ending up with very
> >> small numbers, effectively having tcp_rmem[1] == tcp_rmem[2], and the
> >> same for wmem.
> >>
> >> Can you share which numbers you end up with at
> >> /proc/sys/net/ipv4/tcp_{r,w}mem ?
> >>
> >
> > Sure:
> >
> >      $ cat /proc/sys/net/ipv4/tcp_{r,w}mem
> >      4096    87380   1999072
> >      4096    16384   1999072
> >
> Sergei,
> 
> Sorry for not being clearer. I was expecting you'd post those values
> both in the scenario in which you see the bug, and in the scenario you
> don't.

Ah, I see. Sorry. Both patches are on top of v3.3-rc5-166-g1f033c1. Buggy one:
> -       limit = nr_free_buffer_pages() << (PAGE_SHIFT - 10);
> -       limit = max(limit, 128UL);
> +       limit = nr_free_buffer_pages() / 8;
> +       limit = max(limit, 128UL) << (PAGE_SHIFT - 7);
>         max_share = min(4UL*1024*1024, limit);
> +       printk(KERN_INFO "TCP: max_share=%u\n", max_share);
    $ cat /proc/sys/net/ipv4/tcp_{r,w}mem
    4096    87380   1999072
    4096    16384   1999072

Working one:
> -       limit = nr_free_buffer_pages() << (PAGE_SHIFT - 10);
> +       limit = nr_free_buffer_pages() >> (PAGE_SHIFT - 10);
>         limit = max(limit, 128UL);
>         max_share = min(4UL*1024*1024, limit);
> +       printk(KERN_INFO "TCP: max_share=%u\n", max_share);
    $ cat /proc/sys/net/ipv4/tcp_{r,w}mem
    4096    87380   124942
    4096    16384   124942
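
To make the 16x gap between the two hunks concrete, here is a minimal user-space sketch of the arithmetic (not kernel code), assuming PAGE_SHIFT = 12 (amd64) and a hypothetical free-page count of 499768 back-derived from the reported tcp_rmem[2] values; it reproduces the 1999072 vs 124942 figures above.

    /*
     * User-space sketch comparing the two per-socket limit calculations
     * quoted above.  Assumptions: PAGE_SHIFT = 12 and a free-page count
     * of 499768, back-derived from the reported sysctls (illustrative,
     * not measured).
     */
    #include <stdio.h>

    #define PAGE_SHIFT 12UL

    static unsigned long max_ul(unsigned long a, unsigned long b)
    {
            return a > b ? a : b;
    }

    static unsigned long min_ul(unsigned long a, unsigned long b)
    {
            return a < b ? a : b;
    }

    int main(void)
    {
            unsigned long nr_free_buffer_pages = 499768UL;  /* hypothetical */
            unsigned long limit, max_share;

            /* Buggy variant: divide by 8, then shift left by (PAGE_SHIFT - 7). */
            limit = nr_free_buffer_pages / 8;
            limit = max_ul(limit, 128UL) << (PAGE_SHIFT - 7);
            max_share = min_ul(4UL * 1024 * 1024, limit);
            printf("buggy:   max_share = %lu\n", max_share);  /* 1999072 */

            /* Working variant: shift right by (PAGE_SHIFT - 10) instead. */
            limit = nr_free_buffer_pages >> (PAGE_SHIFT - 10);
            limit = max_ul(limit, 128UL);
            max_share = min_ul(4UL * 1024 * 1024, limit);
            printf("working: max_share = %lu\n", max_share);  /* 124942 */

            return 0;
    }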

> > Nothing special with NFS here, so I guess it uses UDP.
> > TCP works fine on the machine (I do everything via SSH).
> 
> Can you confirm that? If you're using nfs through udp, it makes
> even less sense that the default values of tcp sock mem would harm
> you. So it might be a bug somewhere else...

Rechecked with tcpdump. It uses TCP.

-- 

  Sergei

