Message-ID: <CAGxU2F5yvXMMwn0Zad8hE+jZC8PVdS+U0tpG7xQcSgEdKrwmyQ@mail.gmail.com>
Date: Fri, 13 Dec 2024 17:24:17 +0100
From: Stefano Garzarella <sgarzare@...hat.com>
To: Michal Luczaj <mhal@...x.co>
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH net-next 2/4] vsock/test: Add test for accept_queue memory leak

On Fri, Dec 13, 2024 at 5:15 PM Michal Luczaj <mhal@...x.co> wrote:
>
> On 12/13/24 15:47, Stefano Garzarella wrote:
> > On Fri, Dec 13, 2024 at 03:27:53PM +0100, Michal Luczaj wrote:
> >> On 12/13/24 12:55, Stefano Garzarella wrote:
> >>> On Thu, Dec 12, 2024 at 11:12:19PM +0100, Michal Luczaj wrote:
> >>>> On 12/10/24 17:18, Stefano Garzarella wrote:
> >>>>> [...]
> >>>>> What about using `vsock_stream_connect` so you can remove a lot of
> >>>>> code from this function (e.g. sockaddr_vm, socket(), etc.)?
> >>>>>
> >>>>> We only need to add `control_expectln("LISTENING")` in the server,
> >>>>> which should also fix my previous comment.
> >>>>
> >>>> Sure, I followed your suggestion with
> >>>>
> >>>>    tout = current_nsec() + ACCEPTQ_LEAK_RACE_TIMEOUT * NSEC_PER_SEC;
> >>>>    do {
> >>>>            control_writeulong(RACE_CONTINUE);
> >>>>            fd = vsock_stream_connect(opts->peer_cid, opts->peer_port);
> >>>>            if (fd >= 0)
> >>>>                    close(fd);
> >>>
> >>> I'd do
> >>>             if (fd < 0) {
> >>>                     perror("connect");
> >>>                     exit(EXIT_FAILURE);
> >>>             }
> >>>             close(fd);
> >>
> >> I think that won't fly. We're racing here with close(listener), so a
> >> failing connect() is expected.
> >
> > Oh right!
> > If it doesn't matter, I'm fine with your version, but please add a
> > comment there; otherwise we'd need another barrier with control messages.
> >
> > Or another option is to reuse the control message we already have to
> > close the previous listening socket, so something like this:
> >
> > static void test_stream_leak_acceptq_server(const struct test_opts *opts)
> > {
> >       int fd = -1;
> >
> >       while (control_readulong() == RACE_CONTINUE) {
> >               /* Close the previous listening socket after receiving
> >                * a control message, so we are sure the other side
> >                * already connected.
> >                */
> >               if (fd >= 0)
> >                       close(fd);
> >               fd = vsock_stream_listen(VMADDR_CID_ANY, opts->peer_port);
> >               control_writeln("LISTENING");
> >       }
> >
> >       if (fd >= 0)
> >               close(fd);
> > }
>
> I'm afraid this won't work either. Just to be clear: the aim is to attempt
> connect() in parallel with close(listener). It's not about establishing a
> connection. In fact, if the connection has been established, it means we
> failed to hit the right condition.
>
> In other words, what I propose is:
>
> client loop             server loop
> -----------             -----------
> write(CONTINUE)
>                         expect(CONTINUE)
>                         listen()
>                         write(LISTENING)
> expect(LISTENING)
> connect()               close()                 // bang, maybe
>
> And, if I understand correctly, you are suggesting:
>
> client loop             server loop
> -----------             -----------
> write(CONTINUE)
>                         expect(CONTINUE)
>                         listen()
>                         write(LISTENING)
> expect(LISTENING)
> connect()                                       // no close() to race
> // 2nd iteration
> write(CONTINUE)
>                         // 2nd iteration
>                         expect(CONTINUE)
>                         close()                 // no connect() to race
>                         listen()
>                         write(LISTENING)
> expect(LISTENING)
> connect()                                       // no close() to race
>
> Hope it makes sense?
>

Sorry, it's Friday ;-P

Yep, now it makes sense, so please add a little comment saying that the
goal is to stress the race between connect() and close(listener).
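
Just to illustrate what I mean, here is a rough sketch built on your
snippet and the helpers we already have in util.h. The RACE_STOP value
and the exact wording of the comments are only placeholders, adjust as
you see fit:

/* Client: stress the race between our connect() and the server's
 * close(listener). A failing connect() is expected here, so it is
 * deliberately not treated as an error.
 */
static void test_stream_leak_acceptq_client(const struct test_opts *opts)
{
	uint64_t tout;
	int fd;

	tout = current_nsec() + ACCEPTQ_LEAK_RACE_TIMEOUT * NSEC_PER_SEC;
	do {
		control_writeulong(RACE_CONTINUE);
		control_expectln("LISTENING");

		fd = vsock_stream_connect(opts->peer_cid, opts->peer_port);
		if (fd >= 0)
			close(fd);
	} while (current_nsec() < tout);

	control_writeulong(RACE_STOP);
}

/* Server: listen, tell the client we are listening, then close the
 * listening socket while the client's connect() may still be in flight.
 */
static void test_stream_leak_acceptq_server(const struct test_opts *opts)
{
	int fd;

	while (control_readulong() == RACE_CONTINUE) {
		fd = vsock_stream_listen(VMADDR_CID_ANY, opts->peer_port);
		control_writeln("LISTENING");
		close(fd);
	}
}

With a comment like that in place, the ignored connect() failure should
not look suspicious to future readers.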

Have a nice weekend,
Stefano

