Message-ID: <2d54ca59-9c22-0b75-3087-3718b30b8d11@farpost.com>
Date:   Fri, 2 Aug 2019 11:40:02 +1000
From:   Sergei Turchanov <turchanov@...post.com>
To:     NeilBrown <neilb@...e.com>
Cc:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        Alexander Viro <viro@...iv.linux.org.uk>
Subject: Re: [BUG] lseek on /proc/meminfo is broken in 4.19.59 maybe due to
 commit 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code and
 interface")

Hello!

Yes, your patch fixed this bug.
Thank you very much!

With best regards,
Sergei.

On 01.08.2019 19:14, NeilBrown wrote:
> On Thu, Aug 01 2019, Sergei Turchanov wrote:
>
>> Hello!
>>
>> [
>>    As suggested in previous discussion this behavior may be caused by your
>>    commit 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code and interface")
>> ]
> Yes.... I think I can see what happened.
>   removing:
> -               if (!m->count) {
> -                       m->from = 0;
> -                       m->index++;
> -               }
>
> from seq_read meant that ->index didn't get updated in a case where it
> needs to be.
>
> Please confirm that the following patch fixes the problem.
> I think it is correct, but I need to look it over more carefully in the
> morning, and see if I can explain why it is correct.
>
> Thanks for the report.
> NeilBrown
>
> diff --git a/fs/seq_file.c b/fs/seq_file.c
> index 04f09689cd6d..1600034a929b 100644
> --- a/fs/seq_file.c
> +++ b/fs/seq_file.c
> @@ -119,6 +119,7 @@ static int traverse(struct seq_file *m, loff_t offset)
>   		}
>   		if (seq_has_overflowed(m))
>   			goto Eoverflow;
> +		p = m->op->next(m, p, &m->index);
>   		if (pos + m->count > offset) {
>   			m->from = offset - pos;
>   			m->count -= m->from;
> @@ -126,7 +127,6 @@ static int traverse(struct seq_file *m, loff_t offset)
>   		}
>   		pos += m->count;
>   		m->count = 0;
> -		p = m->op->next(m, p, &m->index);
>   		if (pos == offset)
>   			break;
>   	}
>
>
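[
   For context: /proc/meminfo is a single_open()-style file, so its ->show()
   emits the entire file as one record. When traverse() stops inside that
   record, the old code broke out of the loop before ->next() ran, leaving
   m->index at 0; the following read() therefore drained the buffered
   remainder and then re-showed record 0, producing the duplicate copy seen
   below. A minimal userspace sketch of that arithmetic (illustrative
   values only, not kernel code):

   #include <stdio.h>

   #define FILE_SIZE 1500          /* pretend /proc/meminfo is 1500 bytes */

   /* Mimics the traverse() loop for a file whose ->show() emits the
    * whole file as one record, as single_open() does. */
   static void traverse_sim(long offset, int patched)
   {
       long pos = 0, index = 0, from = 0;
       long count = FILE_SIZE;     /* ->show() has filled the buffer */

       if (patched)
           index++;                /* new code: ->next() runs before the check */
       if (pos + count > offset) {
           from = offset - pos;    /* offset lands inside this record */
           count -= from;
       } else if (!patched) {
           index++;                /* old code: ->next() only ran after it */
       }

       printf("offset=%ld patched=%d -> index=%ld from=%ld count=%ld%s\n",
              offset, patched, index, from, count,
              index ? " (clean EOF on next read)"
                    : " (record 0 is shown again: duplicate output)");
   }

   int main(void)
   {
       traverse_sim(1024, 0);      /* unpatched: index stays 0 */
       traverse_sim(1024, 1);      /* patched:   index moves past record 0 */
       return 0;
   }
]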
>> Original bug report:
>>
>> Seeking (to an offset within the file size) in /proc/meminfo is broken in 4.19.59. It does seek to the desired position, but reading from that position returns the remainder of the file followed by a whole extra copy of the file. This doesn't happen with /proc/vmstat or /proc/self/maps, for example.
>>
>> Seeking did work correctly in kernel 4.14.47, so something must have broken somewhere along the way.
>>
>> Background: this access pattern (seeking within /proc/meminfo) is used by the libvirt-lxc fuse driver to present a virtualized view of /proc/meminfo, so /proc/meminfo is broken in guests running kernel 4.19.x.
>>
>>> On 01.08.2019 17:11, Gao Xiang wrote:
>>> Hi,
>>>
>>> I just took a glance, maybe due to
>>> commit 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code and interface")
>>>
>>> I simply reverted it just now and it seems fine... but I haven't dug into this commit yet.
>>>
>>> Maybe you could Cc NeilBrown <neilb@...e.com> for more advice;
>>> I have no idea whether this is expected behavior or not...
>>>
>>> Thanks,
>>> Gao Xiang
>>>
>>> On 2019/8/1 14:16, Sergei Turchanov wrote:
>>
>> $ ./test /proc/meminfo 0        # Works as expected
>>
>> MemTotal:       394907728 kB
>> MemFree:        173738328 kB
>> ...
>> DirectMap2M:    13062144 kB
>> DirectMap1G:    390070272 kB
>>
>> -----------------------------------------------------------------------
>>
>> $ ./test /proc/meminfo 1024     # returns a copy of the file after the remainder
>>
>> Will seek to 1024
>>
>>
>> Data read at offset 1024
>> gePages:         0 kB
>> ShmemHugePages:        0 kB
>> ShmemPmdMapped:        0 kB
>> HugePages_Total:       0
>> HugePages_Free:        0
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:       2048 kB
>> Hugetlb:               0 kB
>> DirectMap4k:      245204 kB
>> DirectMap2M:    13062144 kB
>> DirectMap1G:    390070272 kB
>> MemTotal:       394907728 kB
>> MemFree:        173738328 kB
>> MemAvailable:   379989680 kB
>> Buffers:          355812 kB
>> Cached:         207216224 kB
>> ...
>> DirectMap2M:    13062144 kB
>> DirectMap1G:    390070272 kB
>>
>> As you can see, after the "DirectMap1G:" line, "read" returns a whole second copy of /proc/meminfo.
>>
>> Test program:
>>
>> #include <sys/types.h>
>> #include <sys/stat.h>
>> #include <unistd.h>
>> #include <fcntl.h>
>> #include <stdio.h>
>> #include <stdlib.h>
>>
>> #define SIZE 1024
>> char buf[SIZE + 1];
>>
>> int main(int argc, char *argv[]) {
>>       int     fd;
>>       ssize_t rd;
>>       off_t   ofs = 0;
>>
>>       if (argc < 2) {
>>           printf("Usage: test <file> [<offset>]\n");
>>           exit(1);
>>       }
>>
>>       if (-1 == (fd = open(argv[1], O_RDONLY))) {
>>           perror("open failed");
>>           exit(1);
>>       }
>>
>>       if (argc > 2) {
>>           ofs = atol(argv[2]);
>>       }
>>       printf("Will seek to %ld\n", ofs);
>>
>>       if (-1 == (lseek(fd, ofs, SEEK_SET))) {
>>           perror("lseek failed");
>>           exit(1);
>>       }
>>
>>       /* Read until a short read signals end of file. */
>>       for (;; ofs += rd) {
>>           printf("\n\nData read at offset %ld\n", ofs);
>>           if (-1 == (rd = read(fd, buf, SIZE))) {
>>               perror("read failed");
>>               exit(1);
>>           }
>>           buf[rd] = '\0';
>>           printf("%s", buf);  /* never pass file data as a format string */
>>           if (rd < SIZE) {
>>               break;
>>           }
>>       }
>>
>>       return 0;
>> }
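
[
   To reproduce: build the test program and point it at /proc/meminfo with
   a non-zero offset, e.g. (nothing special is needed to compile it):

       cc -o test test.c
       ./test /proc/meminfo 1024

   On an affected 4.19 kernel the output ends with a second full copy of
   the file, as shown above; with the patch applied, the read stops at the
   real end of file.
]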

