Generic System Interconnect Subsystem
Introduction
This framework is designed to provide a standard kernel interface to control the settings of the interconnects on an SoC. These settings include throughput, latency and priority between multiple interconnected devices or functional blocks. They can be controlled dynamically in order to save power or provide maximum performance.
The interconnect bus is hardware with configurable parameters, which can be set on a data path according to the requests received from various drivers. Examples of interconnect buses are the interconnects between various components or functional blocks in chipsets. There can be multiple interconnects on an SoC, and they can be multi-tiered.
Below is a simplified diagram of a real-world SoC interconnect bus topology.
+----------------+ +----------------+
| HW Accelerator |--->| M NoC |<---------------+
+----------------+ +----------------+ |
| | +------------+
+-----+ +-------------+ V +------+ | |
| DDR | | +--------+ | PCIe | | |
+-----+ | | Slaves | +------+ | |
^ ^ | +--------+ | | C NoC |
| | V V | |
+------------------+ +------------------------+ | | +-----+
| |-->| |-->| |-->| CPU |
| |-->| |<--| | +-----+
| Mem NoC | | S NoC | +------------+
| |<--| |---------+ |
| |<--| |<------+ | | +--------+
+------------------+ +------------------------+ | | +-->| Slaves |
^ ^ ^ ^ ^ | | +--------+
| | | | | | V
+------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
| CPUs | | | GPU | | DSP | | Masters |-->| P NoC |-->| Slaves |
+------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
|
+-------+
| Modem |
+-------+
Terminology
Interconnect provider is the software definition of the interconnect hardware. The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC and Mem NoC.
Interconnect node is the software definition of the interconnect hardware port. Each interconnect provider consists of multiple interconnect nodes, which are connected to other SoC components including other interconnect providers. The point on the diagram where the CPUs connect to the memory is called an interconnect node, which belongs to the Mem NoC interconnect provider.
Interconnect endpoints are the first or the last element of the path. Every endpoint is a node, but not every node is an endpoint.
Interconnect path is everything between two endpoints, including all the nodes that have to be traversed to reach from a source node to a destination node. It may include multiple master-slave pairs across several interconnect providers.
Interconnect consumers are the entities which make use of the data paths exposed by the providers. The consumers send requests to providers requesting various throughput, latency and priority. Usually the consumers are device drivers that send requests based on their needs. An example of a consumer is a video decoder that supports various formats and image sizes.
U-Boot Implementation
The implementation is derived from the Linux 6.17 interconnect implementation, adapted to use the U-Boot Driver Model. Under Linux the nodes are allocated via idr_alloc(), while under U-Boot they are created as icc_node devices which are children of the provider device. This provides the same lifetime guarantees through a robust, ready-to-use mechanism, simplifying the implementation.
Under Linux, links are created by always allocating a new icc_node for the link target; when the node with that ID is later registered, it is associated with its provider. Under U-Boot, only the nodes belonging to a provider are created, at bind time, and when the node graph is traversed to calculate a path, the link ID is looked up dynamically amongst the node devices. This may make path calculation slightly slower, but saves time when registering nodes at bind time.
Since the U-Boot Driver Model probes devices on demand, the node and provider devices are probed when a path is determined and removed when the path is deleted.
A test suite is present in test/dm/interconnect.c, using a sandbox-interconnect test driver to exercise these U-Boot-specific aspects while making sure the graph traversal and path calculation are accurate.
Interconnect consumers API
Interconnect consumers are the clients which use the interconnect APIs to get paths between endpoints and set their bandwidth/latency/QoS requirements for these interconnect paths.
-
struct icc_path *of_icc_get(struct udevice *dev, const char *name)
Get an interconnect path from a DT node based on name
Parameters
struct udevice *dev - The client device.
const char *name - Name of the interconnect endpoint pair.
Description
This function will search for a path between two endpoints and return an icc_path handle on success. Use icc_put() to release constraints when they are not needed anymore. If the interconnect API is disabled, NULL is returned and the consumer drivers will still build. Drivers are free to handle this specifically, but they don’t have to.
Return
icc_path pointer on success or ERR_PTR() on error. NULL is returned when the API is disabled or the “interconnects” DT property is missing.
-
struct icc_path *of_icc_get_by_index(struct udevice *dev, int idx)
Get an interconnect path from a DT node based on index
Parameters
struct udevice *dev - The client device.
int idx - Index of the interconnect endpoint pair.
Description
This function will search for a path between two endpoints and return an icc_path handle on success. Use icc_put() to release constraints when they are not needed anymore. If the interconnect API is disabled, NULL is returned and the consumer drivers will still build. Drivers are free to handle this specifically, but they don’t have to.
Return
icc_path pointer on success or ERR_PTR() on error. NULL is returned when the API is disabled or the “interconnects” DT property is missing.
-
int icc_put(struct icc_path *path)
release constraints on an interconnect path
Parameters
struct icc_path *path - An interconnect path
Description
Use this function to release the constraints on a path when the path is no longer needed. The constraints will be re-aggregated.
Return
0 if OK, or a negative error code.
-
int icc_enable(struct icc_path *path)
enable an interconnect path
Parameters
struct icc_path *path - An interconnect path
Description
This will enable all the endpoints in the path, using the bandwidth previously set by icc_set_bw(); if no bandwidth has been set, zero bandwidth is used. Usually called after icc_disable().
Return
0 if OK, or a negative error code. -ENOSYS if not implemented.
-
int icc_disable(struct icc_path *path)
disable an interconnect path
Parameters
struct icc_path *path - An interconnect path
Description
This will disable all the endpoints in the path, effectively setting a zero bandwidth. Calling icc_enable() will restore the bandwidth set by calling icc_set_bw().
Return
0 if OK, or a negative error code. -ENOSYS if not implemented.
-
int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
set bandwidth constraints on an interconnect path.
Parameters
struct icc_path *path - An interconnect path
u32 avg_bw - Average bandwidth request in kBps
u32 peak_bw - Peak bandwidth request in kBps
Description
This function is used by an interconnect consumer to express its own needs in terms of bandwidth for a previously requested path between two endpoints. The requests are aggregated and each node is updated accordingly. The entire path is locked by a mutex to ensure that the set() is completed. The path can be NULL when the “interconnects” DT property is missing, in which case no constraints will be set.
Return
0 if OK, or a negative error code. -ENOSYS if not implemented.
Interconnect uclass providers API
An interconnect provider is an entity that implements methods to initialize and configure the interconnect bus hardware. Interconnect provider drivers should be registered as interconnect uclass drivers.
-
struct icc_req
constraints that are attached to each node
Definition
struct icc_req {
struct hlist_node req_node;
struct icc_node *node;
bool enabled;
u32 tag;
u32 avg_bw;
u32 peak_bw;
};
Members
req_node - entry in the list of requests for the particular node
node - the interconnect node to which this constraint applies
enabled - indicates whether the path with this request is enabled
tag - path tag (optional)
avg_bw - an integer describing the average bandwidth in kBps
peak_bw - an integer describing the peak bandwidth in kBps
-
struct icc_path
An interconnect path
Definition
struct icc_path {
struct udevice *dev;
size_t num_nodes;
struct icc_req reqs[];
};
Members
dev - the device which requested the path
num_nodes - number of nodes (hops) in the path
reqs - array of the requests applicable to this path of nodes
-
struct icc_provider
interconnect provider (controller) entity that might provide multiple interconnect controls
Definition
struct icc_provider {
bool inter_set;
unsigned int xlate_num_nodes;
struct icc_node **xlate_nodes;
};
Members
inter_set - whether inter-provider pairs will be configured with set
xlate_num_nodes - provider-specific node count for mapping nodes from phandle arguments
xlate_nodes - provider-specific array for mapping nodes from phandle arguments
-
struct icc_node
entity that is part of the interconnect topology
Definition
struct icc_node {
struct udevice *dev;
ulong *links;
size_t num_links;
int users;
struct list_head node_list;
struct list_head search_list;
struct icc_node *reverse;
u8 is_traversed:1;
struct hlist_head req_list;
u32 avg_bw;
u32 peak_bw;
void *data;
};
Members
dev - points to the interconnect provider of this node
links - a list of targets pointing to where we can go next when traversing
num_links - number of links to other interconnect nodes
users - count of active users
node_list - the list entry in the parent provider’s “nodes” list
search_list - list used when walking the node graph
reverse - pointer to the previous node when walking the node graph
is_traversed - flag that is used when walking the node graph
req_list - a list of QoS constraint requests associated with this node
avg_bw - aggregated value of average bandwidth requests from all consumers
peak_bw - aggregated value of peak bandwidth requests from all consumers
data - pointer to private data
-
struct interconnect_ops
Interconnect uclass operations
Definition
struct interconnect_ops {
struct icc_node *(*of_xlate)(struct udevice *dev, const struct ofnode_phandle_args *args);
int (*set)(struct icc_node *src, struct icc_node *dst);
void (*pre_aggregate)(struct icc_node *node);
int (*aggregate)(struct icc_node *node, u32 tag, u32 avg_bw, u32 peak_bw, u32 *agg_avg, u32 *agg_peak);
};
Members
of_xlate - provider-specific callback for mapping nodes from phandle arguments
set - pointer to device-specific set operation function
pre_aggregate - pointer to device-specific function that is called before the aggregation begins (optional)
aggregate - pointer to device-specific aggregate operation function
Parameters
struct udevice *dev - Provider device
ulong id - node ID, either a numeric ID or a pointer cast to ulong
const char *name - node name
Return
icc_node pointer on success, or ERR_PTR() on error
Parameters
struct icc_node *node - source node
const ulong dst_id - destination node ID
Description
Create a link between two nodes. The nodes might belong to different interconnect providers, and the dst_id node might not exist yet; in that case the link is resolved at runtime in icc_path_find().
Return
0 on success, or an error code otherwise