Message-ID: <87wns6gy67.ffs@nanos.tec.linutronix.de>
Date: Mon, 10 May 2021 21:56:48 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: xuyihang <xuyihang@...wei.com>, Ming Lei <ming.lei@...hat.com>
Cc: Peter Xu <peterx@...hat.com>, Christoph Hellwig <hch@....de>,
Jason Wang <jasowang@...hat.com>,
Luiz Capitulino <lcapitulino@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"Michael S. Tsirkin" <mst@...hat.com>, minlei@...hat.com,
liaochang1@...wei.com
Subject: Re: Virtio-scsi multiqueue irq affinity

Yihang,

On Mon, May 10 2021 at 16:48, xuyihang wrote:
> On 2021/5/8 20:26, Thomas Gleixner wrote:
>> Can you please provide a more detailed description of your system?
>> - Kernel version
> This experiment was run on linux-4.19.

Again. Please provide reports against the most recent mainline version
and not against some randomly picked kernel variant.

> If we make some changes to this experiment:
>
> 1. If we make this RT application use less CPU time instead of 100%, the
> problem disappears.
>
> 2. If we change rq_affinity to 2, in order to avoid handling the softirq
> on the same core as the RT thread, the problem also disappears. However,
> this approach results in about a 10%-30% random write performance
> reduction compared to rq_affinity = 1, since rq_affinity = 1 may have
> better cache utilization.
>
> echo 2 > /sys/block/sda/queue/rq_affinity
>
> Therefore, I want to exclude some CPUs from handling managed irqs via a
> boot parameter,

Why does this realtime thread have to run on CPU0? Can it not be moved
to some other CPU?
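
For instance (the PID below is just a placeholder), the thread could be
pinned to another CPU from user space:

    # move the RT thread off CPU0, e.g. onto CPU2
    taskset -pc 2 <pid-of-rt-thread>

or the application itself could call sched_setaffinity() with a CPU mask
which excludes CPU0.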

> which takes a similar approach to 11ea68f553e2 ("genirq, sched/isolation:
> Isolate from handling managed interrupts").

Why can't you use the existing isolation mechanisms?
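
The commit you cite added exactly that: a managed_irq flag for the
isolcpus= boot parameter. As a sketch (assuming your RT thread stays on
CPU0), booting with

    isolcpus=managed_irq,0

keeps managed interrupts away from CPU0 as long as the affinity mask of
the queue also contains a non-isolated CPU.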

Thanks,
tglx