Add a new interface, _get_domain_id, which derives the domain id from
dev->dma_range_map.
An iommu consumer device uses the dma-ranges property in its dtsi node
to indicate the dma address region it requires. In this iommu driver, we
read that requirement and decide which iova domain the device should be
located in.
In the latest SoCs there are several iova regions (domains), so we
compare and calculate which domain is the right one. If the start/end of
the device requirement exactly equal a region, that is of course the
best fit. If the requirement lies inside a region, that is also ok. The
iova requirement of a device should not span two or more regions.
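A minimal sketch of that matching logic, assuming an illustrative
mtk_iommu_iova_region table (the struct layout and function name here
are assumptions, not necessarily the exact driver code):

#include <linux/device.h>
#include <linux/dma-direct.h>	/* struct bus_dma_region */

struct mtk_iommu_iova_region {	/* assumed layout */
	dma_addr_t		iova_base;
	unsigned long long	size;
};

static int mtk_iommu_get_domain_id(struct device *dev,
		const struct mtk_iommu_iova_region *rgn, unsigned int nr)
{
	const struct bus_dma_region *map = dev->dma_range_map;
	u64 dma_start, dma_end;
	unsigned int i;

	if (!map)
		return 0;	/* no dma-ranges: default to region 0 */

	dma_start = map->dma_start;
	dma_end = map->dma_start + map->size - 1;

	for (i = 0; i < nr; i++, rgn++) {
		u64 rgn_end = rgn->iova_base + rgn->size - 1;

		/* Exact match is the best fit; lying fully inside a
		 * region is also acceptable. */
		if (dma_start >= rgn->iova_base && dma_end <= rgn_end)
			return i;
	}
	return -EINVAL;	/* requirement spans regions or matches none */
}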
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-28-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
Currently domain_alloc only takes one parameter (the type), so there is
no way to pass in any special data. This patch moves domain_finalise
into attach_device, which has the device information, so that it can
update the domain's geometry.aperture range for each device.
Strictly speaking, this patch should use the data from
mtk_iommu_get_m4u_data as the parameter of mtk_iommu_domain_finalise.
But dom->data is only used in the tlb ops, where the data is taken from
the m4u_list, so this is ok here.
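Roughly, the attach path then looks like this sketch (bodies
abbreviated; that mtk_iommu_domain_finalise takes the per-SoC data is
an assumption):

static int mtk_iommu_attach_device(struct iommu_domain *domain,
				   struct device *dev)
{
	struct mtk_iommu_data *data = dev_iommu_priv_get(dev);
	struct mtk_iommu_domain *dom = to_mtk_domain(domain);

	if (!data)
		return -ENODEV;

	/* Finalise lazily: only here do we know the device, and thus
	 * the aperture the domain geometry should advertise. */
	if (!dom->data) {
		if (mtk_iommu_domain_finalise(dom, data))
			return -ENODEV;
		dom->data = data;
	}

	mtk_iommu_config(data, dev, true);
	return 0;
}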
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-25-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
In previous SoCs, the M4U HW was in the EMI power domain, which is
always on. The latest M4U is in the display power domain, which may be
turned on/off, so we have to add a pm_runtime interface for it.
When an engine works, it always enables the power and clocks for
smi-larb/smi-common, and then the M4U's power is always powered on
automatically via the device link with smi-common.
Note: we don't enable the M4U power in iommu_map/unmap for the tlb
flush. If its power is already on, the flush is of course fine. If the
power is off, the main tlb is reset when the M4U powers on again, so a
tlb flush while the m4u power is off is unnecessary; just skip it.
Therefore, we increase the pm ref_count when the pm status is ACTIVE,
and skip the flush otherwise. Meanwhile, tlb_flush_range is called very
often, so we only update the pm ref_count when the SoC has a power
domain, to avoid touching dev->power.lock. tlb_flush_all is only called
at boot, so there is no need to check whether the SoC has a power
domain there, which keeps the code clean.
There is one case where the pm runtime status is not as expected at tlb
flush time. After boot, the display may call dma_alloc_attrs before it
calls pm_runtime_get(disp-dev), so the m4u's pm status is not active
inside dma_alloc_attrs. Since this only happens right after boot, the
tlb is still clean at that time, so I think this is also ok.
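A sketch of that guard, assuming a "has_pm" check stands in for "this
SoC has a power domain" and with the actual register sequence elided:

#include <linux/pm_runtime.h>

static void mtk_iommu_tlb_flush_range_sync(unsigned long iova, size_t size,
					   struct mtk_iommu_data *data)
{
	bool has_pm = !!data->dev->pm_domain;

	if (has_pm) {
		/* Take a pm reference only if the M4U is already ACTIVE;
		 * if it is off, the tlb is reset at power-on anyway. */
		if (pm_runtime_get_if_in_use(data->dev) <= 0)
			return;
	}

	/* ... write the iova range, trigger and wait for the flush ... */

	if (has_pm)
		pm_runtime_put(data->dev);
}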
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-21-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
In the pm runtime case, all the register backup/restore and the bclk
are controlled in the pm_runtime callbacks, so rename the original
suspend/resume to runtime_suspend/resume.
Use pm_runtime_force_suspend/resume as the normal suspend/resume. The
iommu should suspend after the iommu consumer devices, thus use _LATE_.
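The resulting dev_pm_ops wiring would look roughly like this:

static const struct dev_pm_ops mtk_iommu_pm_ops = {
	SET_RUNTIME_PM_OPS(mtk_iommu_runtime_suspend,
			   mtk_iommu_runtime_resume, NULL)
	/* _LATE_: suspend after the iommu consumers have suspended. */
	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				     pm_runtime_force_resume)
};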
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-20-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
In the latest SoCs, the M4U has its own power domain; thus, when an
engine begins to work, it should first enable the power for the M4U.
Currently, when an engine works, it always enables the power/clocks for
the smi-larbs/smi-common. This patch adds a device_link between
smi-common and the M4U; then, whenever the smi-common power is enabled,
the M4U power is also powered on automatically.
Normally the M4U connects to several smi-larbs, and their smi-common is
always the same one. In this patch we get the smi-common dev from the
last smi-larb device, then add the device_link.
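A sketch of the link setup; treating the smi-larb's parent as the
smi-common device is an assumption about the platform device layout:

struct device *smi_common_dev = larbdev->parent;	/* assumed layout */
struct device_link *link;

/* smi-common is the consumer, the M4U the supplier: runtime-resuming
 * smi-common therefore powers on the M4U first. */
link = device_link_add(smi_common_dev, data->dev,
		       DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
if (!link)
	dev_err(data->dev, "Unable to link %s\n", dev_name(smi_common_dev));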
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-19-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
In the current iommu_unmap, the code is:

	iommu_iotlb_gather_init(&iotlb_gather);
	ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
	iommu_iotlb_sync(domain, &iotlb_gather);

We can gather the whole iova range in __iommu_unmap and then do the tlb
synchronization in iommu_iotlb_sync.
This patch implements that: gather the range in mtk_iommu_unmap, then
let iommu_iotlb_sync perform the tlb synchronization for the gathered
iova range. We don't call iommu_iotlb_gather_add_page since our tlb
synchronization can be done regardless of the granule size.
This way, gather->start can never be ULONG_MAX, so remove that check.
This patch aims to do the tlb synchronization *once* per iommu_unmap.
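A sketch of the two halves (simplified; the flush helper name follows
the driver's range-sync path but is illustrative here):

static size_t mtk_iommu_unmap(struct iommu_domain *domain,
			      unsigned long iova, size_t size,
			      struct iommu_iotlb_gather *gather)
{
	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
	unsigned long end = iova + size - 1;

	/* Just widen the gathered window; no per-page bookkeeping. */
	if (gather->start > iova)
		gather->start = iova;
	if (gather->end < end)
		gather->end = end;
	return dom->iop->unmap(dom->iop, iova, size, gather);
}

static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
				 struct iommu_iotlb_gather *gather)
{
	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
	size_t length = gather->end - gather->start + 1;

	mtk_iommu_tlb_flush_range_sync(gather->start, length, dom->data);
}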
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-7-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
Currently gather->end is an "unsigned long", which may overflow on
arch32 in the corner case 0xfff00000 + 0x100000 (iova + size).
Although this doesn't affect the size (end - start), it does affect the
check "gather->end < end".
This patch changes this "end" to the real, inclusive end address
(end = start + size - 1). Correspondingly, update the length to
"end - start + 1".
Fixes: a7d20dc19d ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-5-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
The only user of tlb_flush_leaf is a particularly hairy corner of the
Arm short-descriptor code, which wants a synchronous invalidation to
minimise the races inherent in trying to split a large page mapping.
This is already far enough into "here be dragons" territory that no
sensible caller should ever hit it, and thus it really doesn't need
optimising. Although using tlb_flush_walk there may technically be
more heavyweight than needed, it does the job and saves everyone else
having to carry around useless baggage.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/9844ab0c5cb3da8b2f89c6c2da16941910702b41.1606324115.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
The of_device_id struct is included unconditionally by the of.h header
and is used in the driver as well. Remove of_match_ptr to fix the W=1
compile-test warning with !CONFIG_OF:
drivers/iommu/mtk_iommu.c:833:34: warning: 'mtk_iommu_of_ids' defined but not used [-Wunused-const-variable=]
833 | static const struct of_device_id mtk_iommu_of_ids[] = {
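The change amounts to referencing the table directly, so it is always
used even when !CONFIG_OF (driver struct abbreviated):

static struct platform_driver mtk_iommu_driver = {
	.probe	= mtk_iommu_probe,
	.driver	= {
		.name		= "mtk-iommu",
		/* was: .of_match_table = of_match_ptr(mtk_iommu_of_ids), */
		.of_match_table	= mtk_iommu_of_ids,
	},
};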
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Link: https://lore.kernel.org/r/20200727181842.8441-1-krzk@kernel.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Some platforms (e.g. mt6779) need to improve performance by setting the
REG_MMU_WR_LEN_CTRL register, and we can use the WR_THROT_EN macro to
control whether we need to set it. If the register keeps its default
value, the iommu sends commands to the EMI without restriction; as the
number of outstanding commands grows, it degrades EMI performance. So
when more than ten commands (the default value) are left unhandled by
the EMI, the iommu stops sending commands to the EMI to preserve EMI
performance, by enabling the write throttling mechanism
(bit[5][21] = 0) in the MMU_WR_LEN_CTRL register.
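A sketch of the probe-time setup; the register offset and mask mirror
the bit[5][21] description but should be treated as illustrative:

#define REG_MMU_WR_LEN_CTRL		0x054	/* assumed offset */
#define F_MMU_WR_THROT_DIS_MASK		(BIT(5) | BIT(21))

	if (MTK_IOMMU_HAS_FLAG(data->plat_data, WR_THROT_EN)) {
		u32 regval = readl_relaxed(data->base + REG_MMU_WR_LEN_CTRL);

		/* Clearing the bits (= 0) enables write throttling. */
		regval &= ~F_MMU_WR_THROT_DIS_MASK;
		writel_relaxed(regval, data->base + REG_MMU_WR_LEN_CTRL);
	}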
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-8-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The maximum number of larbs that one iommu HW supports is 8 (larb0~larb7
in the diagram below).
If the number of larbs is over 8, we use a sub-common to merge several
larbs into one larb. In this case, we extend the larb_id:
bit[11:9] means the common-id;
bit[8:7] means the subcommon-id.
From these two variables we can get the real larb number when a
translation fault happens.
The diagram is as below:
                      EMI
                       |
                     IOMMU
                       |
               -----------------
               |               |
            common1         common0
               |               |
               -----------------
                       |
                  smi common
                       |
      ------------------------------------
      |     |     |     |     |     |
    3'd0  3'd1  3'd2  3'd3   ...  3'd7    <- common_id (max is 8)
      |     |     |     |     |     |
    Larb0 Larb1   |   Larb3  ...  Larb7
                  |
           smi sub common
                  |
       --------------------------
       |      |      |      |
     2'd0   2'd1   2'd2   2'd3            <- sub_common_id (max is 4)
       |      |      |      |
     Larb8  Larb9  Larb10 Larb11
In this patch we extend larb_remap[] to larb_remap[8][4] for this.
larb_remap[x][y]: x means the common-id above, y means the subcommon-id
above.
We can also distinguish whether the M4U HW has a sub-common via the
HAS_SUB_COMM macro.
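Decoding a translation-fault id then looks roughly like this (macro
names illustrative, bit positions as described above):

#define F_MMU_INT_ID_COMM_ID(a)		(((a) >> 9) & 0x7)	/* bit[11:9] */
#define F_MMU_INT_ID_SUB_COMM_ID(a)	(((a) >> 7) & 0x3)	/* bit[8:7] */

	if (MTK_IOMMU_HAS_FLAG(data->plat_data, HAS_SUB_COMM)) {
		fault_larb = F_MMU_INT_ID_COMM_ID(regval);
		sub_comm = F_MMU_INT_ID_SUB_COMM_ID(regval);
	}
	/* Map (common-id, subcommon-id) back to the real larb. */
	fault_larb = data->plat_data->larb_remap[fault_larb][sub_comm];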
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-7-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Add the F_MMU_IN_ORDER_WR_EN_MASK and F_MMU_STANDARD_AXI_MODE_EN_MASK
definitions for the MISC_CTRL register.
F_MMU_STANDARD_AXI_MODE_EN_MASK:
If we clear F_MMU_STANDARD_AXI_MODE_EN_MASK (bit[3][19] = 0, i.e. do
not follow the standard AXI protocol), the iommu will prioritize
sending an urgent read command over a normal read command. This
improves performance.
F_MMU_IN_ORDER_WR_EN_MASK:
If we clear F_MMU_IN_ORDER_WR_EN_MASK (bit[1][17] = 0, out-of-order
write), the iommu will re-order write commands and send the write
commands with higher priority. Otherwise the write commands are sent in
order. The feature is controlled by the OUT_ORDER_WR_EN platform data
flag.
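Put together, the probe-time handling might look like this sketch (the
mask values mirror the bit pairs above):

#define F_MMU_IN_ORDER_WR_EN_MASK	(BIT(1) | BIT(17))
#define F_MMU_STANDARD_AXI_MODE_EN_MASK	(BIT(3) | BIT(19))

	u32 regval = readl_relaxed(data->base + REG_MMU_MISC_CTRL);

	/* Leave standard AXI mode so urgent reads are prioritized. */
	regval &= ~F_MMU_STANDARD_AXI_MODE_EN_MASK;
	if (MTK_IOMMU_HAS_FLAG(data->plat_data, OUT_ORDER_WR_EN))
		regval &= ~F_MMU_IN_ORDER_WR_EN_MASK;	/* out-of-order writes */
	writel_relaxed(regval, data->base + REG_MMU_MISC_CTRL);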
Suggested-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-5-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
For the iommu register at offset 0x48, only the earlier mt8173/mt8183
use the name STANDARD_AXI_MODE; all the latest SoCs extend the register
with more features in different bits, for example axi_mode, in_order_en,
coherent_en and so on. So the name REG_MMU_MISC_CTRL is more proper.
This patch only renames the register; no functional change.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-3-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
TTBR1 values have so far been redundant since no users implement any
support for split address spaces. Crucially, though, one of the main
reasons for wanting to do so is to be able to manage each half entirely
independently, e.g. context-switching one set of mappings without
disturbing the other. Thus it seems unlikely that tying two tables
together in a single io_pgtable_cfg would ever be particularly desirable
or useful.
Streamline the configs to just a single conceptual TTBR value
representing the allocated table. This paves the way for future users to
support split address spaces by simply allocating a table and dealing
with the detailed TTBRn logistics themselves.
Tested-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[will: Drop change to ttbr value]
Signed-off-by: Will Deacon <will@kernel.org>
Reduce the tlb timeout value from 100000us to 1000us. The original
value would leave the kernel stuck for 100 ms with interrupts disabled,
which could have other side effects. The flush is expected to always
take much less than 1 ms, so use that instead.
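The polling site then looks roughly like this (the completion register
name follows the driver's tlb sync path; treat the surrounding names as
illustrative):

#include <linux/iopoll.h>

#define REG_MMU_CPE_DONE	0x12C	/* assumed offset */

	/* Poll every 10us, give up after 1000us (was 100000us). */
	ret = readl_poll_timeout_atomic(data->base + REG_MMU_CPE_DONE,
					tmp, tmp != 0, 10, 1000);
	if (ret) {
		dev_warn(data->dev,
			 "Partial TLB flush timed out, falling back to full flush\n");
		mtk_iommu_tlb_flush_all(data);
	}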
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Now that we have tlb_lock for the HW tlb flush, the pgtable code hasn't
needed the external "pgtlock" for a while. This patch removes the
"pgtlock".
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Right now, tlb_add_flush_nosync and tlb_sync always appear together.
Merge the two functions into one (and move the tlb_lock into the new
function). No functional change.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
In our tlb range flush, we don't care about the "leaf". Remove it to
simplify the code. No functional change.
The "granule" is also unnecessary for us; keep it only to satisfy the
signature of tlb_flush_walk.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Use the iommu_gather mechanism to achieve the tlb range flush.
Gather the iova range in "tlb_add_page", then flush the merged iova
range in iotlb_sync.
Suggested-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Commit 4d689b6194 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB
sync") moved the tlb_sync of unmap from v7s into the iommu framework
and added a new function, "mtk_iommu_iotlb_sync". But it lacked
locking, so the variable "tlb_flush_active" could be changed
unexpectedly, and we could see this warning log randomly:
mtk-iommu 10205000.iommu: Partial TLB flush timed out, falling back to
full flush
The HW strictly requires tlb_flush/tlb_sync in pairs, so this patch
adds a new tlb_lock for the tlb operations to fix this issue.
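A sketch of the pairing the new lock protects (the actual register
writes are elided as comments):

static void mtk_iommu_tlb_flush_range_sync(unsigned long iova, size_t size,
					   size_t granule, void *cookie)
{
	struct mtk_iommu_data *data = cookie;
	unsigned long flags;

	spin_lock_irqsave(&data->tlb_lock, flags);
	/* flush: program the iova range and trigger the invalidation */
	/* sync:  poll the completion register until the HW is done   */
	spin_unlock_irqrestore(&data->tlb_lock, flags);
}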
Fixes: 4d689b6194 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB sync")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Use the correct tlb_flush_all instead of the original one.
Fixes: 4d689b6194 ("iommu/io-pgtable-arm-v7s: Convert to IOMMU API TLB sync")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Add a gfp_t parameter to the iommu_ops::map function.
Remove the needless locking in the AMD iommu driver.
The iommu_ops::map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
should probably have had a "might_sleep()" since it was written.
However, currently the dma-iommu api can call iommu_map in an atomic
context, which it shouldn't do. This doesn't cause any problems because
any iommu driver which uses the dma-iommu api uses GFP_ATOMIC in its
iommu_ops::map function. But doing this wastes the memory allocator's
atomic pools.
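The shape of the change, roughly: callers now state which allocation
context they can tolerate instead of drivers hard-coding GFP_ATOMIC
(other ops elided):

struct iommu_ops {
	/* ... */
	int (*map)(struct iommu_domain *domain, unsigned long iova,
		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
	/* ... */
};

/* A sleepable caller can then do, e.g.: */
	ret = ops->map(domain, iova, paddr, size, prot, GFP_KERNEL);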
Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Remove the "struct mtk_smi_iommu" to simplify the code since it has only
one item in it right now.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>