Message-ID: <Y4URHHISpZaFQmj6@slm.duckdns.org>
Date:   Mon, 28 Nov 2022 09:50:52 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
Cc:     Intel-gfx@...ts.freedesktop.org, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>,
        Zefan Li <lizefan.x@...edance.com>,
        Dave Airlie <airlied@...hat.com>,
        Daniel Vetter <daniel.vetter@...ll.ch>,
        Rob Clark <robdclark@...omium.org>,
        Stéphane Marchesin <marcheu@...omium.org>,
        "T . J . Mercier" <tjmercier@...gle.com>, Kenny.Ho@....com,
        Christian König <christian.koenig@....com>,
        Brian Welty <brian.welty@...el.com>,
        Tvrtko Ursulin <tvrtko.ursulin@...el.com>
Subject: Re: [RFC 11/13] cgroup/drm: Introduce weight based drm cgroup control

Hello,

On Thu, Nov 24, 2022 at 02:32:25PM +0000, Tvrtko Ursulin wrote:
> > Soft limits is a bit of a misnomer and can be confused with best-effort
> > limits such as memory.high. Probably best not to use the term.
> 
> Are you suggesting "best effort limits" or "best effort <something>"? It
> would sound good to me if we found the right <something>. Best effort
> budget perhaps?

A more conventional name would be hierarchical weighted distribution.

> Also, when you mention scalability, are you concerned about the multiple
> tree walks I do per iteration? I wasn't so much worried about that,
> definitely not for the RFC, and not too much in general either, due to the
> relatively low frequency of scanning and a good amount of the non-trivial
> cost being outside the actual tree walks (drm client walks, GPU
> utilisation calculations, maybe more). But perhaps I don't have the right
> idea of how big cgroup hierarchies can be compared to the number of drm
> clients etc.

It's just a better way of doing this kind of weight-based scheduling. It's
simpler, more scalable and easier to understand how things work. The basic
idea is pretty simple - each schedulable entity gets assigned a timestamp
and whenever it consumes the target resource, its time is wound forward by
the consumption amount divided by its absolute share - e.g. if cgroup A
deserves 25% of the entire thing and it ran for 1s, its time is wound
forward by 1s / 0.25 == 4s. There's an rbtree keyed by these timestamps,
anything wanting to consume gets put on that tree, and whatever is at the
head of the tree is the next thing to run.
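To make that concrete, here's a toy userspace sketch of the idea - not the
actual cgroup code, the struct/field names are made up, and a linear
min-scan stands in for the rbtree just to keep it short:

/*
 * Toy model of vtime-based weighted scheduling. In the real thing the
 * entities would sit on an rbtree keyed by vtime; a linear scan is used
 * here only for brevity.
 */
#include <stdio.h>

struct entity {
	const char *name;
	double share;	/* absolute share of the resource, e.g. 0.25 */
	double vtime;	/* virtual time, wound forward on consumption */
};

/* Pick the runnable entity with the smallest vtime. */
static struct entity *pick_next(struct entity *es, int n)
{
	struct entity *best = &es[0];

	for (int i = 1; i < n; i++)
		if (es[i].vtime < best->vtime)
			best = &es[i];
	return best;
}

/* Charge @amount of consumption: wind vtime forward by amount / share. */
static void charge(struct entity *e, double amount)
{
	e->vtime += amount / e->share;
}

int main(void)
{
	struct entity es[] = {
		{ "A", 0.25, 0.0 },	/* deserves 25% */
		{ "B", 0.75, 0.0 },	/* deserves 75% */
	};

	for (int i = 0; i < 8; i++) {
		struct entity *e = pick_next(es, 2);

		printf("run %s (vtime %.2f)\n", e->name, e->vtime);
		charge(e, 1.0);		/* ran for 1s */
	}
	return 0;
}

Run it and A ends up picked for ~25% of the slots and B for ~75%, which is
all the weight distribution has to guarantee.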

Thanks.

-- 
tejun
