Message-ID: <b360ed542526da0a510988ce30545f429a7da000.camel@trillion01.com>
Date: Thu, 20 May 2021 00:13:19 -0400
From: Olivier Langlois <olivier@...llion01.com>
To: Jens Axboe <axboe@...nel.dk>,
Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Stefan Metzmacher <metze@...ba.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andy Lutomirski <luto@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
io-uring <io-uring@...r.kernel.org>,
the arch/x86 maintainers <x86@...nel.org>
Subject: Re: [PATCH] io_thread/x86: don't reset 'cs', 'ss', 'ds' and 'es'
registers for io_threads

Hi Jens,

On Wed, 2021-05-12 at 14:55 -0600, Jens Axboe wrote:
>
> > Jens, have you played with core-dumping when there are active
> > io_uring
> > threads? There's a test-program in that github issue report..
>
> Yes, I also did that again after the report, and did so again right now
> just to verify. I'm not seeing any issues with coredumps being
> generated
> if the app crashes, or if I send it SIGILL, for example... I also just
> now tried Olivier's test case, and it seems to dump just fine for me.
>
> I then tried backing out the patch from Stefan, and it works fine with
> that reverted too. So a bit puzzled as to what is going on here...
>
> Anyway, I'll check in on that github thread and see if we can narrow
> this down.
>

I know that my test case isn't conclusive. It is a failed attempt to
capture what my program is doing.

The priority of investigating my core dump issue dropped substantially
last week because I solved my primary issue (a leak of the buffers
provided to io_uring, happening during disconnection). My program then
ran for days, but it crashed this morning, again without any core
dump. It is a very frustrating situation because the bug would
probably be trivial to diagnose and fix, but without the core the logs
are opaque and give no clue about why the program crashed.

A key characteristic of my program is that it generates at least one
io-worker thread per io_uring instance.

Oddly enough, I am having a hard time recreating a test case that will
generate io-worker threads. My first attempt was with the github issue
test-case. I keep tweaking it, and I am confident I will eventually
find the right sequence to get io-worker threads spawned.
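
To tell whether a given run actually spawned any, I just count the
process's threads whose name starts with the "iou-wrk" prefix that
recent kernels give io-workers. A minimal sketch (the helper name is
mine, not liburing's):

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Count this process's threads named "iou-wrk-*", i.e. io_uring's
 * io-worker threads on recent kernels. */
static int count_io_workers(void)
{
	DIR *d = opendir("/proc/self/task");
	struct dirent *e;
	int n = 0;

	while (d && (e = readdir(d)) != NULL) {
		char path[64], comm[32];
		FILE *f;

		if (e->d_name[0] == '.')
			continue;
		snprintf(path, sizeof(path),
			 "/proc/self/task/%s/comm", e->d_name);
		f = fopen(path, "r");
		if (f && fgets(comm, sizeof(comm), f) &&
		    strncmp(comm, "iou-wrk", 7) == 0)
			n++;
		if (f)
			fclose(f);
	}
	if (d)
		closedir(d);
	return n;
}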

I suspect that once you meet that condition, it might be sufficient to
trigger the core dump generation problem.

I have also tried benchmarking io_uring with
https://github.com/frevib/io_uring-echo-server/blob/io-uring-feat-fast-poll/benchmarks/benchmarks.md
(If you give it a try, make sure you erase its private, out-of-date
liburing copy before compiling it...) This didn't generate any
io-worker threads either.

In a nutshell, here is what my program does for most of its 85-86
sockets (a minimal sketch follows the list):

1. Create a TCP socket
2. Set O_NONBLOCK on it
3. Call connect()
4. Use IORING_OP_POLL_ADD with POLLOUT to be notified when the
   connection completes
5. Once the connection completes, clear the socket's O_NONBLOCK flag
   and use IORING_OP_WRITE to send a request
6. Submit an IORING_OP_READ with IOSQE_BUFFER_SELECT to read the
   server reply asynchronously
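
Expressed with liburing, the sequence looks roughly like this (a
minimal sketch, not my real code: error handling is omitted, and the
address, port, buffer group id and request payload are placeholders):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <liburing.h>

enum { BUF_GROUP = 1, BUF_SIZE = 4096, NR_BUFS = 8 };
static char bufs[NR_BUFS][BUF_SIZE];

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(8080),	/* placeholder port */
	};

	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
	io_uring_queue_init(8, &ring, 0);

	/* Hand a pool of buffers to the kernel for later
	 * IOSQE_BUFFER_SELECT reads. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_provide_buffers(sqe, bufs, BUF_SIZE, NR_BUFS,
				      BUF_GROUP, 0);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	io_uring_cqe_seen(&ring, cqe);

	/* Steps 1-3: non-blocking connect(), returns immediately
	 * with EINPROGRESS. */
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int fl = fcntl(fd, F_GETFL);
	fcntl(fd, F_SETFL, fl | O_NONBLOCK);
	connect(fd, (struct sockaddr *)&addr, sizeof(addr));

	/* Step 4: POLLOUT fires when the connection completes. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_poll_add(sqe, fd, POLLOUT);
	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	io_uring_cqe_seen(&ring, cqe);

	/* Step 5: back to blocking mode, then send the request. */
	fcntl(fd, F_SETFL, fl & ~O_NONBLOCK);
	static const char req[] = "placeholder request";
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, req, sizeof(req) - 1, 0);

	/* Step 6: read the reply into a kernel-selected buffer
	 * from group BUF_GROUP. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, NULL, BUF_SIZE, 0);
	sqe->buf_group = BUF_GROUP;
	io_uring_sqe_set_flags(sqe, IOSQE_BUFFER_SELECT);
	io_uring_submit(&ring);

	for (int i = 0; i < 2; i++) {
		io_uring_wait_cqe(&ring, &cqe);
		if (cqe->flags & IORING_CQE_F_BUFFER)
			printf("reply in buffer %u, res %d\n",
			       cqe->flags >> IORING_CQE_BUFFER_SHIFT,
			       cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	close(fd);
	io_uring_queue_exit(&ring);
	return 0;
}

(Incidentally, this provide/select pairing is where my fixed leak was:
every CQE flagged IORING_CQE_F_BUFFER consumes one buffer from the
group, and unless it is eventually re-provided the pool drains.)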

Here are two more notes about the sequence:

a) If you wonder about the flip-flop between blocking and
non-blocking, it is because I have adapted existing code to use
io_uring. To minimize the required code changes, I left the
non-blocking connection code untouched.

b) If I add IOSQE_ASYNC to the IORING_OP_READ, io_uring will generate
a lot of io-worker threads. I mean a lot... You can see it here:
https://github.com/axboe/liburing/issues/349
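
Reproducing b) from the sketch above is a one-flag change on the read
sqe: IOSQE_ASYNC tells io_uring to punt the request straight to the
io-wq workers instead of attempting it inline first.

	/* force async punt of the buffer-select read */
	io_uring_sqe_set_flags(sqe, IOSQE_ASYNC | IOSQE_BUFFER_SELECT);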

So what I am currently doing is tweaking my test case to emulate the
described sequence as closely as possible, so that some io-worker
threads spawn, and then forcing a core dump to validate that it is the
presence of io-worker threads that causes the core dump generation
issue (or not!).

Quick question for the devs: is there any example program bundled with
liburing that reliably creates some io-worker threads?

Greetings,
Olivier