drm/nouveau/nouveau: fix page fault on device private memory

[ Upstream commit ed710a6ed7 ]

If system memory is migrated to device private memory and no GPU MMU
page table entry exists, the GPU will fault and call hmm_range_fault()
to get the PFN for the page. Since the .dev_private_owner pointer in
struct hmm_range is not set, hmm_range_fault() returns an error, which
results in the GPU program stopping with a fatal fault.
Fix this by setting .dev_private_owner appropriately.

Fixes: 08ddddda66 ("mm/hmm: check the device private page owner in hmm_range_fault()")
Cc: stable@vger.kernel.org
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Ralph Campbell
Date: 2020-06-26 14:03:37 -07:00
Committed-by: Greg Kroah-Hartman
Parent: b4198ecddb
Commit: af018a3f8d


@@ -534,6 +534,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 		.flags = nouveau_svm_pfn_flags,
 		.values = nouveau_svm_pfn_values,
 		.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT,
+		.dev_private_owner = drm->dev,
 	};
 	struct mm_struct *mm = notifier->notifier.mm;
 	long ret;