On XE_LPM+ platforms the media engines are carved out into a separate
GT but share a common GGTMMADR address range, which essentially makes
the GGTT address space shared between the media and render GTs. As a
result, any update to the GGTT must invalidate the TLBs of all GTs
sharing it, and similarly any GGTT operation requiring an action on a
GT has to involve all GTs sharing it. setup_private_pat was being done
per GGTT; since it doesn't touch any GGTT structures, move it to be
per GT.
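A minimal sketch of the resulting shape (illustrative types; only the
ggtt->gt_list walk reflects this patch, the helpers are hypothetical
stand-ins):

  #include <linux/list.h>

  struct example_gt {
          struct list_head ggtt_link;
  };

  struct example_ggtt {
          struct list_head gt_list;       /* every GT sharing this GGTT */
  };

  static void example_gt_invalidate_tlb(struct example_gt *gt)
  {
          /* the per-GT TLB invalidation would be issued here */
  }

  /* After any GGTT update, all GTs sharing the GGTT must be handled. */
  static void example_ggtt_invalidate(struct example_ggtt *ggtt)
  {
          struct example_gt *gt;

          list_for_each_entry(gt, &ggtt->gt_list, ggtt_link)
                  example_gt_invalidate_tlb(gt);
  }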
BSPEC: 63834
v2:
1. Add details to commit msg
2. Include a fix for the failure to add an item to ggtt->gt_list, as
suggested by Lucas
3. As ggtt_flush() is used only for the GGTT, drop the i915_is_ggtt
check within it.
4. Move setup_private_pat out of intel_gt_tiles_init
v3:
1. Move out for_each_gt from i915_driver.c (Jani Nikula)
v4: drop using RCU primitives on ggtt->gt_list as it is not an RCU list
(Matt Roper)
Cc: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221122070126.4813-1-aravind.iddamsetty@intel.com
Commit b97060a99b ("drm/i915/guc: Update intel_gt_wait_for_idle to work
with GuC") extended the API of intel_gt_retire_requests_timeout() with an
extra argument 'remaining_timeout', intended for passing back the
unconsumed portion of the requested timeout when 0 (success) is
returned. However, when
request retirement happens to succeed despite an error returned by a call
to dma_fence_wait_timeout(), that error code (a negative value) is passed
back instead of remaining time. If we then pass that negative value
forward as requested timeout to intel_uc_wait_for_idle(), an explicit BUG
will be triggered.
If request retirement succeeds but an error code is passed back via
remaining_timeout, we may have no clue how much of the initial timeout
might have been left for waiting for GuC to become idle.
OTOH, since all pending requests have been successfully retired, that
error code has already been ignored by intel_gt_retire_requests_timeout(),
so we shouldn't fail either.
Assume no more time has been left on error and pass 0 timeout value to
intel_uc_wait_for_idle() to give it a chance to return success if GuC is
already idle.
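The caller-side fix then amounts to clamping, roughly (a sketch, not
the exact diff; surrounding context simplified):

  timeout = intel_gt_retire_requests_timeout(gt, timeout,
                                             &remaining_timeout);
  if (timeout)
          return timeout;

  /* Retirement succeeded; a negative remaining_timeout only carries
   * an already-ignored error from dma_fence_wait_timeout().  Assume
   * no time is left rather than forwarding a negative "timeout". */
  if (remaining_timeout < 0)
          remaining_timeout = 0;

  return intel_uc_wait_for_idle(&gt->uc, remaining_timeout);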
v3: Don't fail on any error passed back via remaining_timeout.
v2: Fix the issue on the caller side, not the provider.
Fixes: b97060a99b ("drm/i915/guc: Update intel_gt_wait_for_idle to work with GuC")
Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Cc: stable@vger.kernel.org # v5.15+
Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221121145655.75141-2-janusz.krzysztofik@linux.intel.com
The GT MCR code currently relies on uncore->lock to avoid race
conditions on the steering control register during MCR operations. The
*_fw() versions of MCR operations expect the caller to already hold
uncore->lock, while the non-fw variants manage the lock internally.
However, the sole callsite of intel_gt_mcr_wait_for_reg_fw() does not
currently hold uncore->lock, allowing a potential race condition
(and triggering an assertion on lockdep builds). Furthermore, since
'wait for register value' requests may not return immediately, it is
undesirable to hold a fundamental lock like uncore->lock for the entire
wait and block all other MMIO for the duration; rather the lock is only
needed around the MCR read operations and can be released during the
delays.
Convert intel_gt_mcr_wait_for_reg_fw() to a non-fw variant that will
manage uncore->lock internally. This does have the side effect of
causing an unnecessary lookup in the forcewake table on each read
operation, but since the caller is still holding the relevant forcewake
domain, this will ultimately just increment the reference count and
won't actually cause any additional MMIO traffic.
In the future we plan to switch to a dedicated MCR lock to protect the
steering critical section rather than using the overloaded and
high-traffic uncore->lock; on MTL and beyond the new lock can be
implemented on top of the hardware-provided synchronization mechanism for
steering.
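The locking shape of the converted wait loop, roughly (a sketch;
example_mcr_read_fw() stands in for the real steered-read helper, and
the poll bookkeeping is simplified):

  static int example_wait_for_reg(struct intel_uncore *uncore,
                                  i915_mcr_reg_t reg,
                                  u32 mask, u32 value, int timeout_us)
  {
          for (;;) {
                  unsigned long flags;
                  u32 reg_value;

                  spin_lock_irqsave(&uncore->lock, flags);
                  reg_value = example_mcr_read_fw(uncore, reg);
                  spin_unlock_irqrestore(&uncore->lock, flags);

                  if ((reg_value & mask) == value)
                          return 0;

                  if (timeout_us <= 0)
                          return -ETIMEDOUT;

                  /* lock dropped across the delay: other MMIO proceeds */
                  usleep_range(10, 20);
                  timeout_us -= 10;
          }
  }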
Fixes: 3068bec83e ("drm/i915/gt: Add intel_gt_mcr_wait_for_reg_fw()")
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221117173358.1980230-1-matthew.d.roper@intel.com
(cherry picked from commit 192bb40f03)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
GVT Changes:
- Mostly gvt-next stuff, with a refactor for the new MDEV interface.
i915 Changes:
- PSR fixes and improvements (Jouni)
- DP DSC fixes (Vinod, Jouni)
- More general display cleanups (Jani)
- More display color management cleanup targeting degamma (Ville)
- remove circ_buf.h includes (Jiri)
- wait power off delay at driver remove to optimize probe (Jani)
- More audio cleanup targeting the ELD precompute readout (Ville)
- Enable DC power states on all eDP ports (Imre)
- RPL-P stepping info (Matt Atwood)
- MTL enabling patches (RK)
- Removal of DG2 force_probe (Matt)
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/Y3f71obyEkImXoUF@intel.com
Turns out many of the files that need i915_reg.h get it implicitly via
{display/intel_de.h, gt/intel_context.h} -> i915_trace.h -> i915_irq.h
-> i915_reg.h. Since i915_trace.h doesn't actually need i915_irq.h, it
makes sense to drop it, but that requires adding quite a few new
includes all over the place.
Prefer including i915_reg.h where needed instead of adding another
implicit include, because eventually we'll want to split up i915_reg.h
and only include the specific registers at each place.
Also some places actually needed i915_irq.h too.
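Where a file relied on the accidental chain, the fix is simply to
spell out the dependency, e.g. (illustrative):

  #include "i915_reg.h"   /* register definitions, no longer implicit */
  #include "i915_irq.h"   /* only where IRQ declarations are really used */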
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/6e78a2e0ac1bffaf5af3b5ccc21dff05e6518cef.1668008071.git.jani.nikula@intel.com
Rather than treating multicast registers as 'i915_reg_t' let's define
them as a completely new type. This will allow the compiler to help us
make sure we're using multicast-aware functions to operate on multicast
registers.
This plan does break down a bit in places where we're just maintaining
heterogeneous lists of registers (e.g., various MMIO whitelists used by
perf, GVT, etc.) rather than performing reads/writes. We only really
care about the offset in those cases, so for now we can "cast" the
registers as non-MCR, leaving us with a list of i915_reg_t's, but we may
want to look for better ways to store mixed collections of i915_reg_t
and i915_mcr_reg_t in the future.
v2:
- Add TLB invalidation registers
v3:
- Make type checking of i915_mmio_reg_offset() stricter. It will
accept either i915_reg_t or i915_mcr_reg_t, but will now raise a
compile error if any other type is passed, even if that type contains
a 'reg' field. (Jani)
- Drop a ton of GVT changes; allowing i915_mmio_reg_offset() to take
either an i915_reg_t or an i915_mcr_reg_t means that the huge lists
of MMIO_D*() macros used in GVT will continue to work without
modification. We need only make changes to structures that have an
explicit i915_reg_t in them now.
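The stricter offset helper can be expressed with _Generic, roughly
like this (a standalone sketch of the mechanism, not the driver's
exact macro):

  #include <linux/types.h>

  typedef struct { u32 reg; } i915_reg_t;
  typedef struct { u32 reg; } i915_mcr_reg_t;

  /* Selects a valid association only for the two register types; any
   * other argument fails to compile, even if it has a 'reg' field. */
  #define i915_mmio_reg_offset(r) \
          _Generic((r), \
                   i915_reg_t: (r).reg, \
                   i915_mcr_reg_t: (r).reg)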
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221014230239.1023689-13-matthew.d.roper@intel.com
On Xe_HP the fault registers are now in a multicast register range.
However as part of the GAM these registers follow special rules and we
need only read from the "primary" GAM's instance to get the information
we need. So a single intel_gt_mcr_read_any() (which will automatically
steer to the primary GAM) is sufficient; we don't need to loop over each
instance of the MCR register.
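In code this reduces to a single steered read, along the lines of
(sketch; fault decoding elided):

  u32 fault = intel_gt_mcr_read_any(gt, XEHP_RING_FAULT_REG);

  if (fault & RING_FAULT_VALID) {
          /* decode and report the fault seen by the primary GAM */
  }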
v2:
- Update more instances of fault registers. (Bala)
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221014230239.1023689-7-matthew.d.roper@intel.com
Starting in Xe_HP, several registers our driver works with have been
converted from singleton registers into replicated registers with
multicast behavior. Although the registers are still located at the
same MMIO offsets as on previous platforms, let's duplicate the register
definitions in preparation for upcoming patches that will handle
multicast registers in a special manner.
The registers that are now replicated on Xe_HP are:
* PAT_INDEX (mslice replication)
* FF_MODE2 (gslice replication)
* COMMON_SLICE_CHICKEN3 (gslice replication)
* SLICE_COMMON_ECO_CHICKEN1 (gslice replication)
* SLICE_UNIT_LEVEL_CLKGATE (gslice replication)
* LNCFCMOCS (lncf replication)
Note that there are a couple places in selftest_mocs.c where the
gen9 version of LNCFCMOCS is still used without regard for which
platform we're on. Those cases are just doing an offset lookup and not
issuing any CPU reads/writes of the register, so the potentially
multicast nature of the register doesn't come into play.
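The duplication itself is mechanical: the same MMIO offset under a new
Xe_HP name, so later patches can route the XEHP_ variants through
multicast-aware helpers. Roughly (pattern only; the offset shown is
illustrative):

  #define GEN9_LNCFCMOCS(i)       _MMIO(0xb020 + (i) * 4)
  #define XEHP_LNCFCMOCS(i)       _MMIO(0xb020 + (i) * 4) /* lncf replicated */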
v2:
- Add commit message note about the unconditional GEN9_LNCFCMOCS usage
in selftest_mocs. (Bala)
- Include some additional TLB registers.
Bspec: 66534
Cc: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221014230239.1023689-3-matthew.d.roper@intel.com
Daniele needs 84d4333c1e ("misc/mei: Add NULL check to component match
callback functions") in order to merge the DG2 HuC patches.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Release all mmap mappings for all lmem objects associated with a
userfault, so that while the PCIe function is in D3hot, any access
to the memory mappings will raise a userfault.
Runtime resume the dGPU (when the GEM object lies in lmem).
This will transition the dGPU graphics function to D0
state if it was in D3, in order to access the mmap memory
mappings.
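A sketch of the suspend-side walk (illustrative names; locking
subtleties and the TTM details are omitted):

  #include <linux/list.h>
  #include <linux/mutex.h>

  struct example_obj {
          struct list_head userfault_link;
          unsigned int userfault_count;
  };

  struct example_rpm {
          struct mutex lmem_userfault_lock;
          struct list_head lmem_userfault_list;
  };

  static void example_release_mmap_offset(struct example_obj *obj)
  {
          /* would call unmap_mapping_range() on the object's offset */
  }

  /* On runtime suspend: drop CPU mappings of every lmem object with
   * live userspace mmaps, so the next access faults and the fault
   * handler runtime-resumes the device (D3hot -> D0) first. */
  static void example_runtime_suspend_mmaps(struct example_rpm *rpm)
  {
          struct example_obj *obj, *tmp;

          mutex_lock(&rpm->lmem_userfault_lock);
          list_for_each_entry_safe(obj, tmp, &rpm->lmem_userfault_list,
                                   userfault_link) {
                  example_release_mmap_offset(obj);
                  obj->userfault_count = 0;
                  list_del(&obj->userfault_link);
          }
          mutex_unlock(&rpm->lmem_userfault_lock);
  }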
v2:
- Squash the patches. [Matt Auld]
- Add adequate locking for lmem_userfault_list addition. [Matt Auld]
- Reused obj->userfault_count to avoid double addition. [Matt Auld]
- Added i915_gem_object_lock to check
i915_gem_object_is_lmem. [Matt Auld]
v3:
- Use i915_ttm_cpu_maps_iomem. [Matt Auld]
- Fix 'ret == 0 to ret == VM_FAULT_NOPAGE'. [Matt Auld]
- Reuse obj->userfault_count as a bool 0 or 1. [Matt Auld]
- Delete the mmaped obj from lmem_userfault_list in obj
destruction path. [Matt Auld]
- Get a wakeref for the object destruction path. [Matt Auld]
- Use intel_wakeref_auto to delay runtime PM. [Matt Auld]
v4:
- Avoid using mmo offset to get the vma_node. [Matt Auld]
- Added comment to use the lmem_userfault_lock. [Matt Auld]
- Get lmem_userfault_lock in i915_gem_object_release_mmap_offset.
[Matt Auld]
- Fixed kernel test robot generated warning.
v5:
- Addressed the cosmetics comments. [Andi]
- Changed i915_gem_runtime_pm_object_release_mmap_offset() name to
i915_gem_object_runtime_pm_release_mmap_offset() to be rhythmic.
PCIe Specs 5.3.1.4.1
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6331
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220913152714.16541-3-anshuman.gupta@intel.com
We're going to introduce an additional intel_gt for MTL's media unit
soon. Let's provide a bit more multi-GT initialization framework in
preparation for that. The initialization will pull the list of GTs for
a platform from the device info structure. Although necessary for the
immediate MTL media enabling, this same framework will also be used
farther down the road when we enable remote tiles on xehpsdv and pvc.
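Roughly the shape of the new probe loop (a sketch; the real
definitions come from the device info and carry more fields):

  #include <linux/types.h>

  enum example_gt_type { GT_MEDIA, GT_TILE };

  struct example_gt_definition {
          enum example_gt_type type;
          u32 mapping_base;
  };

  static int example_gt_probe_all(const struct example_gt_definition *defs,
                                  int count)
  {
          int i;

          for (i = 0; i < count; i++) {
                  switch (defs[i].type) {
                  case GT_MEDIA:
                          /* set up standalone media GT (MTL) */
                          break;
                  case GT_TILE:
                          /* set up remote tile GT (xehpsdv/pvc) */
                          break;
                  }
          }
          return 0;
  }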
v2:
- Re-add missing test for !HAS_EXTRA_GT_LIST in intel_gt_probe_all().
v3:
- Move intel_gt_definition struct to intel_gt_types.h. (Jani)
- Drop gtdef->setup(). For now we'll just use a switch() based on GT
type since we don't have too many different handlers for the
foreseeable future. (Jani)
Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com>
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220906234934.3655440-6-matthew.d.roper@intel.com
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Unmapping of the MMIO range can be done as a DRM-managed action, which
will take care of the unmapping on device teardown and error paths.
This will also ensure proper ordering with respect to other DRM-managed
actions that we'll be using to clean up non-primary GTs in upcoming
patches.
We have not enabled any non-root GTs in the driver yet, so the
kfree() of the GT structure is effectively dead code. When we do start
enabling non-root GTs in upcoming patches, those are going to be using
DRM-managed allocations tied to the device lifetime, so we don't need to
explicitly free them (and kfree would be incorrect anyway).
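The pattern is roughly (sketch; error handling trimmed, names
illustrative):

  #include <drm/drm_managed.h>
  #include <linux/io.h>

  static void example_unmap_mmio(struct drm_device *drm, void *regs)
  {
          iounmap((void __iomem *)regs);
  }

  static int example_map_mmio(struct drm_device *drm, phys_addr_t phys,
                              resource_size_t size, void __iomem **regs)
  {
          *regs = ioremap(phys, size);
          if (!*regs)
                  return -EIO;

          /* unmapping now unwinds automatically on teardown/error paths */
          return drmm_add_action_or_reset(drm, example_unmap_mmio,
                                          (void __force *)*regs);
  }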
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220906234934.3655440-5-matthew.d.roper@intel.com
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
The original intent of intel_uncore_mmio_debug as described in commit
0a9b26306d ("drm/i915: split out uncore_mmio_debug") was to be a
singleton structure that could be shared between multiple GTs' uncore
objects in a multi-tile system. Somehow we went off track and
started allocating separate instances of this structure for each GT,
which defeats that original goal.
But in reality, there isn't even a need to share the mmio_debug between
multiple GTs; on all modern platforms (i.e., everything after gen7)
unclaimed register accesses are something that can only be detected for
display registers. There's no point in grabbing the debug spinlock and
checking for unclaimed accesses on an uncore used by an xehpsdv or pvc
remote tile GT, or the uncore used by a mtl standalone media GT since
all of the display accesses go through the primary intel_uncore.
The simplest solution is to simply leave uncore->debug NULL on all
intel_uncore instances except for the primary one. This will allow us
to avoid the pointless debug spinlock acquisition we've been doing on
MMIO accesses coming in through these intel_uncores.
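A sketch of the resulting guard (illustrative type names):

  #include <linux/spinlock.h>

  struct example_mmio_debug {
          spinlock_t lock;
  };

  struct example_uncore {
          struct example_mmio_debug *debug; /* NULL except on primary */
  };

  static void example_check_unclaimed(struct example_uncore *uncore)
  {
          if (!uncore->debug)
                  return; /* only the primary sees display registers */

          spin_lock(&uncore->debug->lock);
          /* ... read/clear the unclaimed-access status here ... */
          spin_unlock(&uncore->debug->lock);
  }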
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220906234934.3655440-3-matthew.d.roper@intel.com
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Sync drm-intel-next with v6.0-rc as well as recent drm-intel-gt-next.
Since drm-next does not yet have commit f0c70d41e4 ("drm/i915/guc: remove
runtime info printing from time stamp logging"), while drm-intel-gt-next
does, that change needs to be applied as part of the merge here in order
to build.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Invalidate TLB in batches, in order to reduce performance regressions.
Currently, every caller performs a full barrier around a TLB
invalidation, ignoring all other invalidations that may have already
removed their PTEs from the cache. As this is a synchronous operation
and can be quite slow, we cause multiple threads to contend on the TLB
invalidate mutex, blocking userspace.
We only need to invalidate the TLB once after replacing our PTE to
ensure that there is no possible continued access to the physical
address before releasing our pages. By tracking a seqno for each full
TLB invalidate we can quickly determine if one has been performed since
rewriting the PTE, and only if necessary trigger one for ourselves.
That helps to reduce the performance regression introduced by TLB
invalidate logic.
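A simplified model of the seqno scheme (illustrative names; the real
code also handles the object-side bookkeeping):

  #include <linux/mutex.h>
  #include <linux/types.h>

  struct example_gt {
          struct mutex tlb_mutex;
          u32 tlb_seqno;  /* bumped on every full TLB invalidation */
  };

  /* dirty_seqno is the GT's seqno sampled when the PTEs were zapped;
   * only issue a fresh full invalidation if none completed since. */
  static void example_flush_tlb(struct example_gt *gt, u32 dirty_seqno)
  {
          if ((int)(gt->tlb_seqno - dirty_seqno) > 0)
                  return; /* someone already invalidated after us */

          mutex_lock(&gt->tlb_mutex);
          if ((int)(gt->tlb_seqno - dirty_seqno) <= 0) {
                  /* ... issue the full TLB invalidation here ... */
                  gt->tlb_seqno++;
          }
          mutex_unlock(&gt->tlb_mutex);
  }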
[mchehab: rebased to not require moving the code to a separate file]
Cc: stable@vger.kernel.org
Fixes: 7938d61591 ("drm/i915: Flush TLBs before releasing backing store")
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris.p.wilson@intel.com>
Cc: Fei Yang <fei.yang@intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@kernel.org>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/4e97ef5deb6739cadaaf40aa45620547e9c4ec06.1658924372.git.mchehab@kernel.org
(cherry picked from commit 5d36acb719)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Re-do what was attempted in commit 7a5c922377 ("drm/i915/gt: Split
intel-gtt functions by arch"). The goal of that commit was to split the
handlers for older hardware that depend on intel-gtt.ko so i915 can
be built for non-x86 archs, after some more patches. Other archs do not
need intel-gtt.ko.
The main issue with the previous approach: it moved all the hooks,
including the gen8 ones, which are used by all platforms gen8 and newer.
Re-do the split, moving only the handlers for gen < 6, which are the
only ones calling out to the separate module.
While at it do some minor cleanups:
- Rename the prefix s/gen5_/gmch_/ to more accurately reflect what
platforms are covered by intel_ggtt_gmch.c
- Remove dead code for gen12 out of needs_idle_maps()
- Remove TODO comment leftover
- Re-order if/else ladder in ggtt_probe_hw() to keep newest platforms
first
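The re-ordered ladder then reads roughly (a sketch; version checks
simplified), with only the final branch reaching into the x86-only
code:

  if (GRAPHICS_VER(i915) >= 8)
          ret = gen8_gmch_probe(ggtt);
  else if (GRAPHICS_VER(i915) >= 6)
          ret = gen6_gmch_probe(ggtt);
  else
          ret = intel_ggtt_gmch_probe(ggtt); /* pre-gen6, intel-gtt.ko */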
v2: Add minor cleanups (Matt Roper)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220617230559.2109427-2-lucas.demarchi@intel.com
Handling of multicast/replicated registers is spread across intel_gt.c
and intel_uncore.c today. As multicast handling and the related
steering logic gets more complicated with the addition of new platforms
and new rules it makes sense to centralize it all in one place.
For now the existing functions have been moved to the new .c/.h as-is.
Function renames and updates to operate in a more consistent manner will
be done in subsequent patches.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Acked-by: Jani Nikula <jani.nikula@linux.intel.com>
Reviewed-by: Harish Chegondi <harish.chegondi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220615001019.1821989-2-matthew.d.roper@intel.com
Ponte Vecchio no longer has MSLICE or LNCF steering, but the bspec does
document several new types of multicast register ranges. Fortunately,
most of the different MCR types all provide valid values at instance
(0,0) so there's no need to read fuse registers and calculate a
non-terminated instance. We'll lump all of those range types (BSLICE,
HALFBSLICE, TILEPSMI, CC, and L3BANK) into a single category called
"INSTANCE0" to keep things simple. We'll also perform explicit steering
for each of these multicast register types, even if the implicit
steering setup for COMPUTE/DSS ranges would have worked too; this is
based on guidance from our hardware architects who suggested that we
move away from implicit steering and start explicitly steer all MCR
register accesses on modern platforms (we'll work on transitioning
COMPUTE/DSS to explicit steering in the future).
Note that there's one additional MCR range type defined in the bspec
(SQIDI) that we don't handle here. Those ranges use a different
steering control register that we never touch; since instance 0 is also
always a valid setting there, we can just ignore those ranges.
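Conceptually the steering table and lookup reduce to (sketch; the
offsets are made up, the real ranges come from the bspec):

  #include <linux/types.h>

  enum example_steering { INSTANCE0 };

  struct example_mcr_range {
          u32 start, end;
          enum example_steering type;
  };

  static const struct example_mcr_range pvc_instance0_ranges[] = {
          { 0x004000, 0x004aff, INSTANCE0 }, /* made-up offsets */
          { 0x008700, 0x0087ff, INSTANCE0 }, /* made-up offsets */
  };

  /* INSTANCE0 classes always steer to group 0, instance 0. */
  static void example_steering_target(enum example_steering type,
                                      u8 *group, u8 *instance)
  {
          if (type == INSTANCE0) {
                  *group = 0;
                  *instance = 0;
          }
  }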
Finally, we'll rename the HAS_MSLICES() macro to HAS_MSLICE_STEERING().
PVC hardware still has units referred to as mslices, but there's no
register steering based on mslice for this platform.
v2:
- Rebase on other recent changes
- Swap two table rows to keep table sorted & easy to read. (Harish)
Bspec: 67609
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Harish Chegondi <harish.chegondi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220608170700.4026648-1-matthew.d.roper@intel.com
As with EU masks, it's easier to store subslice/DSS masks internally in
a format that's more natural for the driver to work with, and then only
convert into the u8[] uapi form when the query ioctl is invoked. Since
the hardware design changed significantly with Xe_HP, we'll use a union
to choose between the old "hsw-style" subslice masks and the newer xehp
mask. HSW-style masks will be stored in an array of u8's, indexed by
slice (there are never more than 6 subslices per slice on older
platforms). For Xe_HP and beyond where slices no longer exist, we only
need a single bitmask. However we already know that this mask is
eventually going to grow too large for a simple u64 to hold, so we'll
represent it in a manner that can be operated on by the utilities in
linux/bitmap.h.
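A sketch of the union layout (sizes illustrative):

  #include <linux/bitmap.h>

  #define EX_MAX_SLICES           3
  #define EX_MAX_SS_FUSE_BITS     128

  struct example_sseu {
          union {
                  /* pre-Xe_HP: one subslice mask per slice */
                  u8 hsw[EX_MAX_SLICES];
                  /* Xe_HP+: one flat DSS bitmap, sized for growth */
                  unsigned long xehp[BITS_TO_LONGS(EX_MAX_SS_FUSE_BITS)];
          } subslice_mask;
  };

  /* linux/bitmap.h helpers then operate on the flat Xe_HP form: */
  static bool example_has_dss(const struct example_sseu *sseu, int dss)
  {
          return test_bit(dss, sseu->subslice_mask.xehp);
  }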
v2:
- Fix typo: BIT(s) -> BIT(ss) in gen9_sseu_device_status()
v3:
- Eliminate sseu->ss_stride and just calculate the stride while
specifically handling uapi. (Tvrtko)
- Use BITMAP_BITS() macro to refer to size of masks rather than
passing I915_MAX_SS_FUSE_BITS directly. (Tvrtko)
- Report compute/geometry DSS masks separately when dumping Xe_HP SSEU
info. (Tvrtko)
- Restore dropped range checks to intel_sseu_has_subslice(). (Tvrtko)
v4:
- Make the bitmap size macro check the size of the .xehp field rather
than the containing union. (Tvrtko)
- Don't add GEM_BUG_ON() intel_sseu_has_subslice()'s check for whether
slice or subslice ID exceed sseu->max_[sub]slices; various loops
in the driver are expected to exceed these, so we should just
silently return 'false.'
v5:
- Move XEHP_BITMAP_BITS() to the header so that we can also replace a
usage of I915_MAX_SS_FUSE_BITS in one of the inline functions.
(Bala)
- Change the local variable in intel_slicemask_from_xehp_dssmask() from
u16 to 'unsigned long' to make it a bit more future-proof.
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220601150725.521468-6-matthew.d.roper@intel.com
Some functions defined in the intel-gtt module are used in several
areas, but the module is only supported on x86 platforms.
By separating these calls and their underlying static functions into
another area, we are able to compile out these functions for
non-x86 builds and provide stubs for the non-x86 implementations.
In addition to the problematic calls, we are moving the gmch-related
functions to the new area.
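The header-side stub pattern looks roughly like (a sketch with
illustrative names): real implementations live in the x86-only file,
non-x86 builds get no-op inlines so callers need no #ifdefs.

  #include <linux/errno.h>

  struct example_ggtt;

  #ifdef CONFIG_X86
  int example_gmch_probe(struct example_ggtt *ggtt);
  void example_gmch_flush(void);
  #else
  static inline int example_gmch_probe(struct example_ggtt *ggtt)
  {
          return -ENODEV;
  }
  static inline void example_gmch_flush(void) { }
  #endif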
Signed-off-by: Casey Bowman <casey.g.bowman@intel.com>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220330234809.1218210-2-casey.g.bowman@intel.com