Merge the drm-next tree to pick up the DRM DSC helpers (merged via
the drm-intel-next tree). MSM DSC v1.2 patches depend on these helpers.
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
If the hangcheck timer expires, check if the fw's position in the
cmdstream has advanced (changed) since the last timer expiration, and
allow it up to three additional "extensions" to its allotted time.
The intention is to continue to catch "shader stuck in a loop" type
hangs quickly, but allow more time for things that are actually
making forward progress.
Because we need to sample the CP state twice to detect if there has
not been progress, this also cuts the timer's duration in half.
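A rough sketch of the resulting timer callback (the progress check and
field names here are illustrative, not the exact driver code):

    /* On expiry, grant up to three extra (half-length) periods as
     * long as the fw's position in the cmdstream has advanced since
     * the last check, so a stuck shader is still caught quickly.
     */
    static void hangcheck_timer_fired(struct msm_gpu *gpu)
    {
            if (made_progress(gpu) && gpu->hangcheck_retries++ < 3) {
                    /* forward progress: extend the deadline */
                    mod_timer(&gpu->hangcheck_timer,
                              jiffies + HANGCHECK_PERIOD);
                    return;
            }
            /* no forward progress within the allotted time */
            schedule_work(&gpu->recover_work);
    }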
v2: Fix typo (REG_A6XX_CP_CSQ_IB2_STAT), add comment
v3: Only halve hangcheck timer duration for generations which
support progress detection (hdanton); removed unused a5xx
progress (without knowing how to adjust for data buffered
in ROQ it is too likely to report a false negative)
v4: Comment updates to better describe the total hangcheck
duration when progress detection is applied
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
Tested-by: Chia-I Wu <olvaffe@gmail.com> # dEQP-GLES2.functional.flush_finish.wait
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Patchwork: https://patchwork.freedesktop.org/patch/511584/
Link: https://lore.kernel.org/r/20221114193049.1533391-3-robdclark@gmail.com
Prior to the last commit, this could result in setting the
GPU-written fence value back to an older value, if we had missed
updating completed_fence prior to suspend. This was mostly
harmless as the GPU would eventually overwrite it again with
the correct value. But we should just not do this. Instead
just leave a sanity check that the fence looks plausible (in
case the GPU scribbled on memory).
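Something along these lines (a sketch; fence_after() is the driver's
wrap-safe seqno comparison, other details are approximate):

    /* Reject a fence value newer than anything we have actually
     * submitted, which could only happen if the GPU scribbled on
     * the shared memory where it writes completed fence values.
     */
    if (fence_after(fence, fctx->last_fence)) {
            DRM_ERROR("%s: fence error: %u > %u\n",
                      fctx->name, fence, fctx->last_fence);
            return;
    }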
Reported-by: Steev Klimaszewski <steev@kali.org>
Fixes: 95d1deb02a ("drm/msm/gem: Add fenced vma unpin")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Steev Klimaszewski <steev@kali.org>
Patchwork: https://patchwork.freedesktop.org/patch/490138/
Link: https://lore.kernel.org/r/20220618161120.3451993-2-robdclark@gmail.com
I noticed while looking at some traces, that we could miss calls to
msm_update_fence(), as the irq could have raced with retire_submits()
which could have already popped the last submit on a ring out of the
queue of in-flight submits. But walking the list of submits in the
irq handler isn't really needed, as dma_fence_is_signaled() will do
the right thing. So let's just drop it entirely.
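That works because dma_fence_is_signaled() does not depend on the irq
handler having walked the submits; roughly (sketch, helper names are
approximate):

    /* dma_fence_is_signaled() falls back to the fence ops'
     * ->signaled() callback, which compares the fence's seqno
     * against the completed-fence value the GPU writes out to
     * shared memory, whether or not the submit is still on the
     * ring's in-flight list.
     */
    static bool msm_fence_signaled(struct dma_fence *f)
    {
            struct msm_fence *mf = to_msm_fence(f);

            return fence_completed(mf->fctx, f->seqno);
    }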
v2: use spin_lock_irqsave/restore as we are no longer protected by the
spin_lock_irqsave/restore() in update_fences()
Reported-by: Steev Klimaszewski <steev@kali.org>
Fixes: 95d1deb02a ("drm/msm/gem: Add fenced vma unpin")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Steev Klimaszewski <steev@kali.org>
Patchwork: https://patchwork.freedesktop.org/patch/490136/
Link: https://lore.kernel.org/r/20220618161120.3451993-1-robdclark@gmail.com
In the case of using the GPU via virtgpu, the host side process is
really a sort of proxy, and not terribly interesting from the PoV of
crash/fault logging. Add a way to override these per process so that
we can see the guest process's name.
v2: Handle kmalloc failure, add comment to explain kstrdup returns
NULL if passed NULL [Dan Carpenter]
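A sketch of the override path (struct and function names here are
illustrative); note that kstrdup() itself returns NULL when passed
NULL, so only a real allocation failure needs handling:

    int msm_context_set_comm(struct msm_context *ctx, const char *comm)
    {
            char *new_comm = NULL;

            if (comm) {
                    new_comm = kstrdup(comm, GFP_KERNEL);
                    if (!new_comm)
                            return -ENOMEM;
            }

            /* replace any previous override */
            kfree(ctx->comm);
            ctx->comm = new_comm;
            return 0;
    }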
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220317165144.222101-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
Add a SYSPROF param for system profiling tools like Mesa's pps-producer
(perfetto) to control behavior related to system-wide performance
counter collection. In particular, for profiling, one wants to ensure
that GPU context switches do not affect perfcounter state, and might
want to suppress suspend (which would cause counters to lose state).
v2: Swap the order in msm_file_private_set_sysprof() [sboyd] and
    initialize the sysprof_active refcount to one (because the under/
    overflow checking in refcount_t doesn't expect a 0->1 transition),
    meaning that a count greater than one means sysprof is active.
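In other words (sketch; the context-switch hook is illustrative):

    /* start at one so refcount_t never sees a 0->1 transition */
    refcount_set(&priv->sysprof_active, 1);

    /* enable/disable via the SYSPROF param: */
    refcount_inc(&priv->sysprof_active);
    refcount_dec(&priv->sysprof_active);

    /* sysprof is active whenever the count is above one, e.g. to
     * preserve perfcounter state across context switches:
     */
    if (refcount_read(&priv->sysprof_active) > 1)
            preserve_perfcounters(gpu);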
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220304005317.776110-4-robdclark@gmail.com
Other processes don't need to know about faults that they are isolated
from by virtue of address space isolation. They are only interested in
whether some of their state might have been corrupted.
But to be safe, also track unattributed faults. This case should really
never happen unless there is a kernel bug (and that would never happen,
right?)
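With that, the param read boils down to (sketch; field names are
approximate):

    /* a process sees its own faults plus any faults that could not
     * be attributed to a specific address space
     */
    case MSM_PARAM_FAULTS:
            *value = gpu->global_faults + ctx->aspace->faults;
            break;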
v2: Instead of adding a new param, just change the behavior of the
existing param to match what userspace actually wants [anholt]
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5934
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220201161618.778455-3-robdclark@gmail.com
Reviewed-by: Emma Anholt <emma@anholt.net>
Signed-off-by: Rob Clark <robdclark@chromium.org>
System suspend uses pm_runtime_force_suspend(), which cheekily bypasses
the runpm reference counts. This doesn't actually work so well when the
GPU is active. So add a reasonable delay waiting for the GPU to become
idle.
Alternatively we could just return -EBUSY in this case, but that has the
disadvantage of causing system suspend to fail.
v2: s/ret/remaining [sboyd], and switch to using active_submits count
to ensure we aren't racing with submit cleanup (and devfreq idle
work getting scheduled, etc)
v3: fix inverted logic
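The wait is essentially (sketch; the one second grace period is
illustrative):

    /* Give in-flight submits a bounded window to retire before
     * forcing suspend.  wait_event_timeout() returns 0 if the
     * condition is still false when the timeout elapses.
     */
    remaining = wait_event_timeout(gpu->retire_event,
                                   gpu->active_submits == 0,
                                   msecs_to_jiffies(1000));
    if (remaining == 0) {
            dev_err(dev, "Timeout waiting for GPU to suspend\n");
            return -EBUSY;
    }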
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Link: https://lore.kernel.org/r/20220108180913.814448-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
For existing adrenos, there are one or more ringbuffers, depending on
whether preemption is supported. When preemption is supported, each
ringbuffer has its own priority. A submitqueue (which maps to a
gl context or vk queue in userspace) is mapped to a specific ring-
buffer at creation time, based on the submitqueue's priority.
Each ringbuffer has its own drm_gpu_scheduler. Each submitqueue
maps to a drm_sched_entity. And each submit maps to a drm_sched_job.
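Schematically (a sketch against the scheduler API of this era; setup
arguments and error handling omitted):

    /* one scheduler per ringbuffer: */
    drm_sched_init(&ring->sched, &msm_sched_ops, ...);

    /* one entity per submitqueue, bound to its ring's scheduler: */
    struct drm_gpu_scheduler *sched = &ring->sched;
    drm_sched_entity_init(&queue->entity, sched_prio, &sched, 1, NULL);

    /* one job per submit: */
    drm_sched_job_init(&submit->base, &queue->entity, queue);
    drm_sched_entity_push_job(&submit->base, &queue->entity);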
Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/4
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
This adds a few things to try and make frequency scaling better match
the workload:
1) Longer polling interval to avoid whip-lashing between too-high and
too-low frequencies in certain workloads, like mobile games which
throttle themselves to 30fps.
Previously our polling interval was short enough to let things
ramp down to minimum freq in the "off" frame, but long enough to
not react quickly enough when rendering started on the next frame,
leading to uneven frame times. (I.e. rather than a consistent 33ms
it would alternate between 16/33/48ms.)
2) Awareness of when the GPU is active vs idle. Since we know when
the GPU is active vs idle, we can clamp the frequency down to the
minimum while it is idle. (If it is idle for long enough, then
the autosuspend delay will eventually kick in and power down the
GPU.)
Since devfreq has no knowledge of powered-but-idle, this takes a
small bit of trickery to maintain a "fake" frequency while idle.
This, combined with the longer polling period allows devfreq to
arrive at a reasonable "active" frequency, while still clamping
to minimum freq when idle to reduce power draw.
3) Boost. Because simple_ondemand needs to see a certain threshold
   of busyness to ramp up, we could end up needing multiple polling
   cycles before it reacts appropriately on interactive workloads
   (ex. scrolling a web page after reading for some time), on top
   of the already lengthened polling interval. So when we see an
   idle-to-active transition after a period of idle time, we boost
   the frequency that we return to, as sketched below.
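The idle/active handling looks roughly like this (a sketch; helper
names and the boost factor are illustrative):

    static void msm_devfreq_idle(struct msm_gpu *gpu)
    {
            /* remember the real freq to report to devfreq while
             * idle, but clamp the hw to minimum to save power
             */
            gpu->idle_freq = get_freq(gpu);
            set_freq(gpu, gpu->min_freq);
    }

    static void msm_devfreq_active(struct msm_gpu *gpu)
    {
            unsigned long freq = gpu->idle_freq;

            /* after a long enough idle period, boost the freq we
             * return to so interactive workloads ramp up without
             * waiting out multiple polling cycles
             */
            if (idle_time_ns > BOOST_THRESHOLD_NS)
                    freq *= 2;

            gpu->idle_freq = 0;   /* report the real freq again */
            set_freq(gpu, freq);
    }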
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210726144653.2180096-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
Previously we only held obj lock in the _active_get() path, and relied
on atomic_dec_return() to not be racy in the _active_put() path where
obj lock was not held.
But this gave a false sense of security. Unlike obj lifetime refcnt,
where you do not expect to *increase* the refcnt after the last put
(which would mean that something has gone horribly wrong with the
object liveness reference counting), the active_count can increase
again from zero. Racing _active_put()s and _active_get()s could leave
the obj on the wrong mm list.
But in the retire path, immediately after the _active_put(), the
_unpin_iova() would acquire obj lock. So just move the locking earlier
and rely on that to protect obj->active_count.
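I.e. the retire path becomes (sketch; function names approximate):

    /* obj lock now covers the put as well as the unpin, so a
     * racing _active_get() cannot observe a stale active_count
     * and move the obj to the wrong mm list
     */
    msm_gem_lock(obj);
    msm_gem_active_put(obj);
    msm_gem_unpin_iova_locked(obj, aspace);
    msm_gem_unlock(obj);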
Fixes: c5c1643cef ("drm/msm: Drop struct_mutex from the retire path")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Register GPU as a devfreq cooling device so that it can be passively
cooled by the thermal framework.
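Registration is a single call into the devfreq-cooling API (sketch;
error handling abbreviated):

    /* let the thermal framework throttle GPU freq, driven by the
     * devfreq instance, when passive cooling kicks in
     */
    gpu->cooling = of_devfreq_cooling_register(dev->of_node,
                                               df->devfreq);
    if (IS_ERR(gpu->cooling)) {
            DRM_DEV_ERROR(dev, "could not register cooling device\n");
            gpu->cooling = NULL;
    }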
Signed-off-by: Akhil P Oommen <akhilpo@codeaurora.org>
Tested-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>