Message-ID: <2ercejt6r2qjkbpaoueh66nred4ooqb5wskx5m3xn2slb5kasw@zwssje3pm4mu>
Date: Wed, 3 May 2023 13:01:24 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Chaitanya Kulkarni <chaitanyak@...dia.com>
Cc: "linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
Shin'ichiro Kawasaki <shinichiro@...tmail.com>,
Hannes Reinecke <hare@...e.de>
Subject: Re: [PATCH blktests v3 09/12] common/fio: Limit number of random jobs
On Wed, May 03, 2023 at 09:41:37AM +0000, Chaitanya Kulkarni wrote:
> On 5/3/23 01:02, Daniel Wagner wrote:
> > Limit the number of random threads to 32 for big machines. This still
> > gives enough randomness but limits the resource usage.
> >
> > Signed-off-by: Daniel Wagner <dwagner@...e.de>
> > ---
>
> I don't think we should change this; the point of all the tests is
> to not limit the resources but to use a thread count at least equal
> to $(nproc). See the recent patches from Lenovo: they have 448 cores,
> and limiting to 32 is < 10% of the CPUs, which is a really small
> number for a large machine if we decide to run tests on that machine ...
I just wonder how to handle the limits for the job size. Hannes asked to limit
it to 32 CPUs so that the per-job size doesn't get too small: with
nvme_img_size=16M and one job per CPU on 448 CPUs, the size per job is roughly
36kB. Is this good, bad, or does it even make sense? I don't know.
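
To make the arithmetic concrete (a rough sketch, not actual blktests code;
the numbers are the ones from above):

  # One fio job per CPU, nvme_img_size split evenly across the jobs.
  nvme_img_size=$((16 * 1024 * 1024))  # 16M
  cpus=448
  echo $((nvme_img_size / cpus))       # prints 37449 bytes, roughly 36kB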
My question is: what should the policy be? Should we reject configurations
which try to run with too small job sizes? Reject anything below 1M, for
example? Or is there a metric we could use as the base for a limit
calculation (disk geometry)?
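
Just to make the question concrete, something along these lines is what I
have in mind (a hypothetical helper; the name, the 32-job cap and the 1M
cut-off are made up for illustration, this is not existing blktests code):

  # Clamp the job count and reject configurations whose per-job size
  # would fall below a minimum.
  _limit_fio_jobs() {
          local img_size=$1
          local max_jobs=32 min_job_size=$((1024 * 1024))
          local jobs
          jobs=$(nproc)

          [ "$jobs" -gt "$max_jobs" ] && jobs=$max_jobs

          if [ $((img_size / jobs)) -lt "$min_job_size" ]; then
                  echo "per-job size below 1M, rejecting" >&2
                  return 1
          fi
          echo "$jobs"
  }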