Message-ID: <ZnraAlR9QeYhd628@hovoldconsulting.com>
Date: Tue, 25 Jun 2024 16:53:54 +0200
From: Johan Hovold <johan@...nel.org>
To: Doug Anderson <dianders@...omium.org>
Cc: Johan Hovold <johan+linaro@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jiri Slaby <jirislaby@...nel.org>,
Konrad Dybcio <konrad.dybcio@...aro.org>,
Bjorn Andersson <andersson@...nel.org>,
linux-arm-msm@...r.kernel.org, linux-serial@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] serial: qcom-geni: fix hard lockup on buffer flush
On Mon, Jun 24, 2024 at 01:45:17PM -0700, Doug Anderson wrote:
> Also: if we're looking at quick/easy to land and just fix the hard
> lockup, I'd vote for this (I can send a real patch, though I'm about
> to go on vacation):
>
> --
>
> @@ -904,8 +904,8 @@ static void qcom_geni_serial_handle_tx_fifo(struct uart_port *uport,
> goto out_write_wakeup;
>
> if (!port->tx_remaining) {
> - qcom_geni_serial_setup_tx(uport, pending);
> - port->tx_remaining = pending;
> + port->tx_remaining = min(avail, pending);
> + qcom_geni_serial_setup_tx(uport, port->tx_remaining);
>
> irq_en = readl(uport->membase + SE_GENI_M_IRQ_EN);
> if (!(irq_en & M_TX_FIFO_WATERMARK_EN))
>
> --
>
> That will fix the hard lockup, is short and sweet, and also doesn't
> end up outputting NUL bytes.
Yeah, this might be a good stopgap even if performance suffers.
> I measured time with that. I've been testing with a file I created
> called "alphabet.txt" that just contains the letters A-Z repeated 3
> times followed by a "\n", over and over again. I think gmail will kill
> me with word wrapping, but basically:
> head -200 /var/alphabet.txt | wc
> 200 200 15800
>
> Before my patch I ran `time head -200 /var/alphabet.txt` and I got:
>
> real 0m1.386s
>
> After my patch I ran the same thing and got:
>
> real 0m1.409s
>
> So it's slower, but that's not 25% slower. I get 1.7% slower:
>
> In [6]: (1.409 - 1.386) / 1.386 * 100
> Out[6]: 1.659451659451669
>
> IMO that seems like a fine slowdown in order to avoid printing NUL bytes.
With my 500K dmesg file test I see a similar performance drop as with
your full series, even if it seems to behave slightly better (e.g. a 20%
drop instead of 24%).
Johan