Set up the project; successfully created two virtual serial ports
source/OpenAMP/open-amp/docs/apps/echo_test/README.md (new file)
@@ -0,0 +1,58 @@
# echo_test

This readme is about the OpenAMP echo_test demo.
In the echo_test demo, one processor sends a message to the other, and the other echoes the message back. The processor that sends the message verifies the echoed message.

For now, it implements Linux sending the message and the baremetal side echoing it back.

## Compilation

### Baremetal Compilation

The option `WITH_ECHO_TEST` controls whether the application is built.
By default this option is `ON` when `WITH_APPS` is on.

Here is an example:

```
$ cmake ../open-amp -DCMAKE_TOOLCHAIN_FILE=zynq7_generic -DWITH_OBSOLETE=on -DWITH_APPS=ON
```

### Linux Compilation

#### Linux Kernel Compilation

You will need to manually compile the following kernel modules with your Linux kernel (please refer to the Linux kernel documentation for how to add a kernel module):

* Your machine's remoteproc kernel driver
* `obsolete/apps/echo_test/system/linux/kernelspace/rpmsg_user_dev_driver` if you want to run the echo_test app in Linux user space.
* `obsolete/system/linux/kernelspace/rpmsg_echo_test_kern_app` if you want to run the echo_test app in Linux kernel space.

#### Linux Userspace Compilation

* Compile `obsolete/apps/echo_test/system/linux/userspace/echo_test` into your Linux OS.
* If you are running the generic (baremetal) system as remoteproc slave and Linux as remoteproc master, please also add the built generic `echo_test` executable to the firmware of your Linux OS.

## Run the Demo

### Load the Demo

After Linux boots:

* Load the machine remoteproc driver. If Linux runs as remoteproc master, you will need to pass the other processor's echo_test binary as the firmware argument to the remoteproc module (a hedged command sketch is given at the end of this readme).
* If you run the Linux kernel application demo, load the `rpmsg_echo_test_kern_app` module. The kernel application sends messages to the remote, the remote echoes them back, and the kernel application verifies the result.
* If you run the userspace application demo, load the `rpmsg_user_dev_driver` module.
* If you run the userspace application demo, you will see output similar to the following on the console:

```
****************************************
Please enter command and press enter key
****************************************
1 - Send data to remote core, retrieve the echo and validate its integrity ..
2 - Quit this application ..
CMD>
```

* Input `1` to send packets.
* Input `2` to exit the application.

After you run the demo, you will need to unload the kernel modules.

### Unload the Demo

* If you run the userspace application demo, unload the `rpmsg_user_dev_driver` module.
* If you run the kernelspace application demo, unload the `rpmsg_echo_test_kern_app` module.
* Unload the machine remoteproc driver.
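
The exact module and firmware names are board specific. As a purely illustrative sketch, assuming a hypothetical `zynq_remoteproc.ko` driver that accepts a `firmware` parameter and an `echo_test` image installed on the Linux root filesystem, the userspace flow could look like:

```
# Load the demo (module and firmware names are placeholders)
insmod zynq_remoteproc.ko firmware=echo_test.elf
insmod rpmsg_user_dev_driver.ko
echo_test

# Unload the demo
rmmod rpmsg_user_dev_driver
rmmod zynq_remoteproc
```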
source/OpenAMP/open-amp/docs/apps/matrix_multiply/README.md (new file)
@@ -0,0 +1,59 @@
# matrix_multiply

This readme is about the OpenAMP matrix_multiply demo.
In the matrix_multiply demo, one processor generates two matrices and sends them to the other; the other processor calculates the matrix multiplication and returns the result matrix.

For now, it implements Linux generating the matrices and the baremetal side calculating the matrix multiplication and sending back the result.

## Compilation

### Baremetal Compilation

The option `WITH_MATRIX_MULTIPLY` controls whether the application is built.
By default this option is `ON` when `WITH_APPS` is on.

Here is an example:

```
$ cmake ../open-amp -DCMAKE_TOOLCHAIN_FILE=zynq7_generic -DWITH_OBSOLETE=on -DWITH_APPS=ON
```

### Linux Compilation

#### Linux Kernel Compilation

You will need to manually compile the following kernel modules with your Linux kernel (please refer to the Linux kernel documentation for how to add a kernel module):

* Your machine's remoteproc kernel driver
* `obsolete/system/linux/kernelspace/rpmsg_user_dev_driver` if you want to run the matrix_multiply app in Linux user space.
* `obsolete/apps/matrix_multiply/system/linux/kernelspace/rpmsg_mat_mul_kern_app` if you want to run the matrix_multiply app in Linux kernel space.

#### Linux Userspace Compilation

* Compile `obsolete/apps/matrix_multiply/system/linux/userspace/mat_mul_demo` into your Linux OS.
* If you are running the generic (baremetal) system as remoteproc slave and Linux as remoteproc master, please also add the built generic `matrix_multiply` executable to the firmware of your Linux OS.

## Run the Demo

### Load the Demo

After Linux boots:

* Load the machine remoteproc driver. If Linux runs as remoteproc master, you will need to pass the other processor's matrix_multiply binary as the firmware argument to the remoteproc module (a hedged command sketch is given at the end of this readme).
* If you run the Linux kernel application demo, load the `rpmsg_mat_mul_kern_app` module. The kernel application generates two matrices, sends them to the other processor, and outputs the result matrix returned by the other processor.
* If you run the userspace application demo, load the `rpmsg_user_dev_driver` module.
* If you run the userspace application demo `mat_mul_demo`, you will see output similar to the following on the console:

```
****************************************
Please enter command and press enter key
****************************************
1 - Generates random 6x6 matrices and transmits them to remote core over rpmsg
..
2 - Quit this application ..
CMD>
```

* Input `1` to run the matrix multiplication.
* Input `2` to exit the application.

After you run the demo, you will need to unload the kernel modules.

### Unload the Demo

* If you run the userspace application demo, unload the `rpmsg_user_dev_driver` module.
* If you run the kernelspace application demo, unload the `rpmsg_mat_mul_kern_app` module.
* Unload the machine remoteproc driver.
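
As with echo_test, the names below are placeholders rather than values defined by this repository; a minimal sketch of the kernel-space variant could look like:

```
# Load the demo (module and firmware names are placeholders)
insmod zynq_remoteproc.ko firmware=matrix_multiply.elf
insmod rpmsg_mat_mul_kern_app.ko   # runs the test and prints the result matrix

# Unload the demo
rmmod rpmsg_mat_mul_kern_app
rmmod zynq_remoteproc
```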
source/OpenAMP/open-amp/docs/apps/rpc_demo/README.md (new file)
@@ -0,0 +1,38 @@
# rpc_demo

This readme is about the OpenAMP rpc_demo demo.
In the rpc_demo, one processor uses the UART on the other processor and creates a file on the other processor's filesystem through file operations.

For now, it implements the processor running a generic (baremetal) application accessing devices on the Linux side.

## Compilation

### Baremetal Compilation

The option `WITH_RPC_DEMO` controls whether the application is built.
By default this option is `ON` when `WITH_APPS` is on.

Here is an example:

```
$ cmake ../open-amp -DCMAKE_TOOLCHAIN_FILE=zynq7_generic -DWITH_OBSOLETE=on -DWITH_APPS=ON
```

### Linux Compilation

#### Linux Kernel Compilation

You will need to manually compile the following kernel modules with your Linux kernel (please refer to the Linux kernel documentation for how to add a kernel module):

* Your machine's remoteproc kernel driver
* `obsolete/apps/rpc_demo/system/linux/kernelspace/rpmsg_proxy_dev_driver`

#### Linux Userspace Compilation

* Compile `obsolete/apps/rpc_demo/system/linux/userspace/proxy_app` into your Linux OS.
* Add the built generic `rpc_demo` executable to the firmware of your Linux OS.

## Run the Demo

After Linux boots, run `proxy_app` as follows:

```
# proxy_app [-m REMOTEPROC_MODULE] [-f PATH_OF_THE_RPC_DEMO_FIRMWARE]
```

The demo application loads the remoteproc module and then the proxy rpmsg module, outputs the messages sent from the other processor, and sends console input back to the other processor. When the demo application exits, it unloads the kernel modules.
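
For example, a concrete invocation could look like the following, where the remoteproc module name and the firmware path are placeholders for your board's remoteproc driver and the installed `rpc_demo` image:

```
# proxy_app -m zynq_remoteproc -f /lib/firmware/rpc_demo
```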
source/OpenAMP/open-amp/docs/data-structure.md (new file)
@@ -0,0 +1,168 @@
Libmetal helper data struct
===========================
```
struct metal_io_region {
    char name[64]; /**< I/O region name */
    void *virt; /**< base virtual address */
    const metal_phys_addr_t *physmap; /**< table of base physical address
                                           of each of the pages in the I/O
                                           region */
    size_t size; /**< size of the I/O region */
    unsigned long page_shift; /**< page shift of I/O region */
    metal_phys_addr_t page_mask; /**< page mask of I/O region */
    unsigned int mem_flags; /**< memory attributes of the I/O region */
    struct metal_io_ops ops; /**< I/O region operations */
};

/** Libmetal device structure. */
struct metal_device {
    const char *name; /**< Device name */
    struct metal_bus *bus; /**< Bus that contains device */
    unsigned num_regions; /**< Number of I/O regions in device */
    struct metal_io_region regions[METAL_MAX_DEVICE_REGIONS]; /**< Array of
                                           I/O regions in device */
    struct metal_list node; /**< Node on bus' list of devices */
    int irq_num; /**< Number of IRQs per device */
    void *irq_info; /**< IRQ ID */
};
```
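
For orientation, here is a minimal sketch of how an application could describe one flat shared-memory window with `metal_io_region`, using libmetal's `metal_io_init()` and offset-based accessors; the address and size below are placeholders, not values from this project:

```
#include <metal/io.h>

#define SHM_PA   0x3e800000UL   /* placeholder physical address */
#define SHM_SIZE 0x100000UL     /* placeholder size: 1 MiB */

static metal_phys_addr_t shm_physmap[] = { SHM_PA };
static struct metal_io_region shm_io;

void shm_io_setup(void *shm_virt)
{
    /* A page shift of -1 treats the whole region as one flat page. */
    metal_io_init(&shm_io, shm_virt, shm_physmap, SHM_SIZE,
                  (unsigned)(-1), 0, NULL);

    /* Access the region through offsets rather than raw pointers. */
    metal_io_write32(&shm_io, 0, 0xdeadbeef);
    (void)metal_io_read32(&shm_io, 0);
}
```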

Remoteproc data struct
===========================
```
struct remoteproc {
    struct metal_device dev; /**< each remoteproc has a device; each device
                                  knows its memory regions */
    metal_mutex_t lock; /**< mutex lock */
    void *rsc_table; /**< pointer to resource table */
    size_t rsc_len; /**< length of the resource table */
    struct remoteproc_ops *ops; /**< pointer to remoteproc operations */
    metal_phys_addr_t bootaddr; /**< boot address */
    struct loader_ops *loader_ops; /**< image loader operations */
    unsigned int state; /**< remoteproc state */
    struct metal_list vdevs; /**< list of vdevs (could be limited to one for
                                  code size, but Linux and the resource table
                                  support multiple) */
    void *priv; /**< remoteproc private data */
};

struct remoteproc_vdev {
    struct metal_list node; /**< node */
    struct remoteproc *rproc; /**< pointer to the remoteproc instance */
    struct virtio_dev; /**< virtio device */
    uint32_t notify_id; /**< virtio device notification ID */
    void *vdev_rsc; /**< pointer to the vdev space in the resource table */
    struct metal_io_region *vdev_io; /**< pointer to the vdev space I/O region */
    int vrings_num; /**< number of vrings */
    struct rproc_vrings[1]; /**< vrings array */
};

struct remoteproc_vring {
    struct remoteproc_vdev *rpvdev; /**< pointer to the remoteproc vdev */
    uint32_t notify_id; /**< vring notify ID */
    size_t len; /**< vring length */
    uint32_t alignment; /**< vring alignment */
    void *va; /**< vring start virtual address */
    struct metal_io_region *io; /**< pointer to the vring I/O region */
};
```

Virtio Data struct
===========================
```
struct virtio_dev {
    int index; /**< unique position on the virtio bus */
    struct virtio_device_id id; /**< the device type identification
                                     (used to match it with a driver) */
    struct metal_device *dev; /**< do we need this in virtio device? */
    metal_spinlock lock; /**< spin lock */
    uint64_t features; /**< the features supported by both ends */
    unsigned int role; /**< whether it is virtio backend or frontend */
    void (*rst_cb)(struct virtio_dev *vdev); /**< user registered virtio device reset callback */
    void *priv; /**< pointer to virtio_dev private data */
    int vrings_num; /**< number of vrings */
    struct virtqueue vqs[1]; /**< array of virtqueues */
};

struct virtqueue {
    char vq_name[VIRTQUEUE_MAX_NAME_SZ]; /**< virtqueue name */
    struct virtio_device *vdev; /**< pointer to virtio device */
    uint16_t vq_queue_index;
    uint16_t vq_nentries;
    uint32_t vq_flags;
    int vq_alignment;
    int vq_ring_size;
    boolean vq_inuse;
    void *vq_ring_mem;
    void (*callback)(struct virtqueue *vq); /**< virtqueue callback */
    void (*notify)(struct virtqueue *vq); /**< virtqueue notify-remote function */
    int vq_max_indirect_size;
    int vq_indirect_mem_size;
    struct vring vq_ring;
    uint16_t vq_free_cnt;
    uint16_t vq_queued_cnt;
    struct metal_io_region *buffers_io; /**< buffers shared memory */

    /*
     * Head of the free chain in the descriptor table. If
     * there are no free descriptors, this will be set to
     * VQ_RING_DESC_CHAIN_END.
     */
    uint16_t vq_desc_head_idx;

    /*
     * Last consumed descriptor in the used table,
     * trails vq_ring.used->idx.
     */
    uint16_t vq_used_cons_idx;

    /*
     * Last consumed descriptor in the available table -
     * used by the consumer side.
     */
    uint16_t vq_available_idx;

    uint8_t padd;

    /*
     * Used by the host side during callback. Cookie
     * holds the address of buffer received from other side.
     * Other fields in this structure are not used currently.
     * Do we need it??
     */
    struct vq_desc_extra {
        void *cookie;
        struct vring_desc *indirect;
        uint32_t indirect_paddr;
        uint16_t ndescs;
    } vq_descx[0];
};

struct vring {
    unsigned int num; /**< number of buffers of the vring */
    struct vring_desc *desc;
    struct vring_avail *avail;
    struct vring_used *used;
};
```

RPMsg Data struct
===========================
```
struct rpmsg_virtio_device {
    struct virtio_dev *vdev; /**< pointer to the virtio device */
    struct virtqueue *rvq; /**< pointer to receive virtqueue */
    struct virtqueue *svq; /**< pointer to send virtqueue */
    int buffers_number; /**< number of shared buffers */
    struct metal_io_region *shbuf_io; /**< pointer to the shared buffer I/O region */
    void *shbuf;
    int (*new_endpoint_cb)(const char *name, uint32_t addr);
                          /**< user-defined name service announcement callback,
                               called when a name service announcement arrives
                               and no local endpoint is waiting to bind to it */
    struct metal_list endpoints; /**< list of endpoints */
};

struct rpmsg_endpoint {
    char name[SERVICE_NAME_SIZE];
    struct rpmsg_virtio_dev *rvdev; /**< pointer to the RPMsg virtio device */
    uint32_t addr; /**< endpoint local address */
    uint32_t dest_addr; /**< endpoint default target address */
    int (*cb)(struct rpmsg_endpoint *ept, void *data,
              struct metal_io_region *io, size_t len,
              uint32_t addr); /**< endpoint callback */
    void (*destroy)(struct rpmsg_endpoint *ept); /**< user registered endpoint destroy callback */
    /* Do we need another callback to ack the name service announcement? */
};
```
@@ -0,0 +1,97 @@
// RPMsg dynamic endpoint creation

digraph G {
    rankdir="LR";

    subgraph roles {
        node [style="filled", fillcolor="lightblue"];
        master [label="Master"];
        slave [label="Slave"];
    }

    subgraph m_comment_nodes {
        node [group=m_comment, shape="note", style="filled", fillcolor="yellow"];
        rank="same";
        m_remoteproc_init_comment [label="this is the initialize rproc call"];
        m_remoteproc_boot_comment [label="it will set up the vdev before booting the remote"];
        m_rpmsg_vdev_init_comment [label="\l* It will initialize vrings with the shared memory\l* If vdev supports name service, it will create the name service endpoint;\l* it sets the vdev status to DRIVER_READY and will notify the remote.\l"];
        m_rpmsg_create_ep_comment [label="\lIf vdev supports name service,\lit will send out a name service announcement.\l"];
        m_rpmsg_send_comment [label="\lIf the endpoint hasn't been bound, it fails:\lit returns failure to indicate the ep hasn't been bound.\l"];
    }

    subgraph m_flow_nodes {
        node [shape="box"];
        rank="same";
        m_remoteproc_init [label="rproc = remoteproc_init(&remoteproc_ops, &arg);"]
        m_remoteproc_load [label="calls remoteproc_load() to load application"];
        m_remoteproc_boot [shape="box", label="ret=remoteproc_boot(&rproc)"];
        m_remoteproc_get_vdev [label="vdev=remoteproc_create_virtio(rproc, rpmsg_vdev_id, MASTER, NULL);"];
        m_rpmsg_shmpool_init[label="rpmsg_virtio_init_shm_pool(shpool, shbuf, shbuf_pool_size);"];
        m_rpmsg_vdev_init [label="rpdev=rpmsg_init_vdev(rpvdev, vdev, ns_bind_cb, &shm_io, shpool);"];
        m_rpmsg_ns_cb [label="\lrpmsg_ns_callback() will see if there is a local ep registered.\lIf yes, bind the ep; otherwise, call ns_bind_cb.\l"];
        m_rpmsg_create_ep [label="\lept=rpmsg_create_ept(ept, rdev, ept_name, ept_addr, dest_addr, \lendpoint_cb, ns_unbind_cb);\l"];
        m_rpmsg_send [label="rpmsg_send(ept,data)"];
        m_rpmsg_rx_cb [label="rpmsg_rx_callback()"];
        m_ep_cb [label="endpoint_cb(ept, data, size, src_addr)"];
        m_rpmsg_destroy_ep [label="rpmsg_destroy_endpoint(ept)"];

        m_remoteproc_init -> m_remoteproc_load -> m_remoteproc_boot -> m_remoteproc_get_vdev ->
        m_rpmsg_shmpool_init -> m_rpmsg_vdev_init -> m_rpmsg_create_ep -> m_rpmsg_ns_cb -> m_rpmsg_send;
        m_rpmsg_send -> m_rpmsg_rx_cb -> m_ep_cb ->
        m_rpmsg_destroy_ep [dir="none", style="dashed"];
    }

    subgraph s_flow_nodes {
        rank="same";
        node [shape="box"];
        s_remoteproc_init [label="rproc = remoteproc_init(&remoteproc_ops, &arg);"];

        s_remoteproc_parse_rsc [label="ret = remoteproc_set_rsc_table(rproc, &rsc_table, rsc_size)"];
        s_remoteproc_get_vdev [label="vdev=remoteproc_create_virtio(rproc, rpmsg_vdev_id, SLAVE, rst_cb);"];
        s_rpmsg_vdev_init [label="rpdev=rpmsg_init_vdev(rpvdev, vdev, ns_bind_cb, &shm_io, NULL);"];
        s_rpmsg_ns_cb [label="\lrpmsg_ns_callback() will see if there is a local ep registered.\lIf yes, bind the ep; otherwise, it will call ns_bind_cb()."];
        s_rpmsg_ns_bind_cb [label="s_rpmsg_ns_bind_cb(ept_name, remote_addr)"];
        s_rpmsg_create_ep [label="\lept=rpmsg_create_endpoint(ept, rdev, ept_name, ept_addr, remote_addr,\lendpoint_cb, ns_unbind_cb);\l"];
        s_rpmsg_send [label="rpmsg_send(ept,data)"];
        s_rpmsg_rx_cb [label="rpmsg_rx_callback()"];
        s_ep_cb [label="endpoint_cb(ept, data, size, src_addr)"];
        s_rpmsg_ns_unbind_cb [label="\lrpmsg_ns_callback() will call the previously\lregistered endpoint unbind callback\l"];

        s_remoteproc_init -> s_remoteproc_parse_rsc -> s_remoteproc_get_vdev ->
        s_rpmsg_vdev_init -> s_rpmsg_ns_cb -> s_rpmsg_ns_bind_cb ->
        s_rpmsg_create_ep;
        s_rpmsg_create_ep-> s_rpmsg_rx_cb -> s_ep_cb -> s_rpmsg_send ->
        s_rpmsg_ns_unbind_cb [dir="none", style="dashed"];
    }

    subgraph s_comment_nodes {
        node [group=s_comment, shape="note", style="filled", fillcolor="yellow"];
        rank="same";
        s_rpmsg_vdev_init_comment [label="\l* If vdev supports name service, it will create the name service endpoint;\l* It will not return until the master sets the status to DRIVER_READY\l"];
        s_rpmsg_rx_cb_comment [label="\l* It will look for the endpoint which matches the destination address.\lIf the two endpoints haven't been bound yet,\lit will set the local endpoint's destination address to the source address in the message\l"];
    }

    master -> m_remoteproc_init [dir="none"];
    slave -> s_remoteproc_init [dir="none"];
    s_rpmsg_create_ep -> m_rpmsg_ns_cb [label="NS announcement"];
    m_rpmsg_create_ep -> s_rpmsg_ns_cb [label="NS announcement"];
    m_rpmsg_send -> s_rpmsg_rx_cb [label="RPMsg data"];
    s_rpmsg_send -> m_rpmsg_rx_cb [label="RPMsg data"];
    m_rpmsg_destroy_ep -> s_rpmsg_ns_unbind_cb [label="Endpoint destroy NS"];

    m_remoteproc_init_comment -> m_remoteproc_init [dir="none"];
    m_remoteproc_boot_comment -> m_remoteproc_boot [dir="none"];
    m_rpmsg_vdev_init_comment -> m_rpmsg_vdev_init [dir="none"];
    m_rpmsg_create_ep_comment -> m_rpmsg_create_ep [dir="none"];
    m_rpmsg_send_comment -> m_rpmsg_send [dir="none"];

    s_rpmsg_vdev_init -> s_rpmsg_vdev_init_comment [dir="none"];
    s_rpmsg_rx_cb -> s_rpmsg_rx_cb_comment [dir="none"];

    {rank=same; master; m_remoteproc_init}
    {rank=same; slave; s_remoteproc_init}

}
source/OpenAMP/open-amp/docs/img-src/coprocessor-rpmsg-ns.gv (new file)
@@ -0,0 +1,95 @@
// RPMsg dynamic endpoints binding

digraph G {
    rankdir="LR";

    subgraph roles {
        node [style="filled", fillcolor="lightblue"];
        master [label="Master"];
        slave [label="Slave"];
    }

    subgraph m_comment_nodes {
        node [group=m_comment, shape="note", style="filled", fillcolor="yellow"];
        rank="same";
        m_remoteproc_init_comment [label="this is the initialize rproc call"];
        m_remoteproc_boot_comment [label="it will set up the vdev before booting the remote"];
        m_rpmsg_vdev_init_comment [label="\l* It will initialize vrings with the shared memory\l* As vdev doesn't support name service, it will not create the name service endpoint;\l* it sets the vdev status to DRIVER_READY and will notify the remote.\l"];
        m_rpmsg_create_ep_comment [label="\lIf vdev supports name service,\lit will send out a name service announcement.\l"];
        m_rpmsg_send_comment [label="\lIf the endpoint hasn't been bound, it fails:\lit returns failure to indicate the ep hasn't been bound.\l"];
    }

    subgraph m_flow_nodes {
        node [shape="box"];
        rank="same";
        m_remoteproc_init [label="rproc = remoteproc_init(&remoteproc_ops, &arg);"]
        m_remoteproc_load [label="calls remoteproc_load() to load application"];
        m_remoteproc_boot [shape="box", label="ret=remoteproc_boot(&rproc)"];
        m_remoteproc_get_vdev [label="vdev=remoteproc_create_virtio(rproc, rpmsg_vdev_id, MASTER, NULL);"];
        m_rpmsg_shmpool_init[label="rpmsg_virtio_init_shm_pool(shpool, shbuf, shbuf_pool_size);"];
        m_rpmsg_vdev_init [label="rpdev=rpmsg_init_vdev(rpvdev, vdev, ns_bind_cb, &shm_io, shpool);"];
        m_rpmsg_ns_cb [label="\lrpmsg_ns_callback() will see if there is a local ep registered.\lIf yes, bind the ep; otherwise, call ns_bind_cb.\l"];
        m_rpmsg_create_ep [label="\lept=rpmsg_create_ept(ept, rdev, ept_name, ept_addr, dest_addr, \lendpoint_cb, ns_unbind_cb);\l"];
        m_rpmsg_send [label="rpmsg_send(ept,data)"];
        m_rpmsg_rx_cb [label="rpmsg_rx_callback()"];
        m_ep_cb [label="endpoint_cb(ept, data, size, src_addr)"];
        m_rpmsg_destroy_ep [label="rpmsg_destroy_endpoint(ept)"];

        m_remoteproc_init -> m_remoteproc_load -> m_remoteproc_boot -> m_remoteproc_get_vdev ->
        m_rpmsg_shmpool_init -> m_rpmsg_vdev_init -> m_rpmsg_ns_cb -> m_rpmsg_create_ep -> m_rpmsg_send;
        m_rpmsg_send -> m_rpmsg_rx_cb -> m_ep_cb ->
        m_rpmsg_destroy_ep [dir="none", style="dashed"];
    }

    subgraph s_flow_nodes {
        rank="same";
        node [shape="box"];
        s_remoteproc_init [label="rproc = remoteproc_init(&remoteproc_ops, &arg);"];

        s_remoteproc_parse_rsc [label="ret = remoteproc_set_rsc_table(rproc, &rsc_table, rsc_size)"];
        s_remoteproc_get_vdev [label="vdev=remoteproc_create_virtio(rproc, rpmsg_vdev_id, SLAVE, rst_cb);"];
        s_rpmsg_vdev_init [label="rpdev=rpmsg_init_vdev(rpvdev, vdev, ns_bind_cb, &shm_io, NULL);"];
        s_rpmsg_create_ep [label="\lept=rpmsg_create_ept(ept, rdev, ept_name, ept_addr, dest_addr, \lendpoint_cb, ns_unbind_cb);\l"];
        s_rpmsg_ns_cb [label="\lrpmsg_ns_callback() will see if there is a local ep registered.\lIf yes, bind the ep; otherwise, call ns_bind_cb.\l"];
        s_rpmsg_send [label="rpmsg_send(ept,data)"];
        s_rpmsg_rx_cb [label="rpmsg_rx_callback()"];
        s_ep_cb [label="endpoint_cb(ept, data, size, src_addr)"];
        s_rpmsg_ns_unbind_cb [label="\lrpmsg_ns_callback() will call the previously\lregistered endpoint unbind callback\l"];

        s_remoteproc_init -> s_remoteproc_parse_rsc -> s_remoteproc_get_vdev ->
        s_rpmsg_vdev_init -> s_rpmsg_create_ep;
        s_rpmsg_create_ep -> s_rpmsg_ns_cb -> s_rpmsg_rx_cb ->
        s_ep_cb -> s_rpmsg_send -> s_rpmsg_ns_unbind_cb [dir="none", style="dashed"];
    }

    subgraph s_comment_nodes {
        node [group=s_comment, shape="note", style="filled", fillcolor="yellow"];
        rank="same";
        s_rpmsg_vdev_init_comment [label="\l* If vdev supports name service, it will create the name service endpoint;\l* It will not return until the master sets the status to DRIVER_READY\l"];
        s_rpmsg_rx_cb_comment [label="\l* It will look for the endpoint which matches the destination address.\lIf the two endpoints haven't been bound yet,\lit will set the local endpoint's destination address to the source address in the message\l"];
    }

    master -> m_remoteproc_init [dir="none"];
    slave -> s_remoteproc_init [dir="none"];
    s_rpmsg_create_ep -> m_rpmsg_ns_cb [label="NS announcement"];
    m_rpmsg_create_ep -> s_rpmsg_ns_cb [label="NS announcement"];
    m_rpmsg_send -> s_rpmsg_rx_cb [label="RPMsg data"];
    s_rpmsg_send -> m_rpmsg_rx_cb [label="RPMsg data"];
    m_rpmsg_destroy_ep -> s_rpmsg_ns_unbind_cb [label="Endpoint destroy NS"];

    m_remoteproc_init_comment -> m_remoteproc_init [dir="none"];
    m_remoteproc_boot_comment -> m_remoteproc_boot [dir="none"];
    m_rpmsg_vdev_init_comment -> m_rpmsg_vdev_init [dir="none"];
    m_rpmsg_create_ep_comment -> m_rpmsg_create_ep [dir="none"];
    m_rpmsg_send_comment -> m_rpmsg_send [dir="none"];

    s_rpmsg_vdev_init -> s_rpmsg_vdev_init_comment [dir="none"];
    s_rpmsg_rx_cb -> s_rpmsg_rx_cb_comment [dir="none"];

    {rank=same; master; m_remoteproc_init}
    {rank=same; slave; s_remoteproc_init}

}
@@ -0,0 +1,87 @@
// RPMsg static endpoints

digraph G {
    rankdir="LR";

    subgraph roles {
        node [style="filled", fillcolor="lightblue"];
        master [label="Master"];
        slave [label="Slave"];
    }

    subgraph m_comment_nodes {
        node [group=m_comment, shape="note", style="filled", fillcolor="yellow"];
        rank="same";
        m_remoteproc_init_comment [label="this is the initialize rproc call"];
        m_remoteproc_boot_comment [label="it will set up the vdev before booting the remote"];
        m_rpmsg_vdev_init_comment [label="\l* It will initialize vrings with the shared memory\l* As vdev doesn't support name service, it will not create the name service endpoint;\l* it sets the vdev status to DRIVER_READY and will notify the remote.\l"];
        m_rpmsg_create_ep_comment [label="\lAs vdev doesn't support name service,\lit will not send out a name service announcement.\l"];
    }

    subgraph m_flow_nodes {
        node [shape="box"];
        rank="same";
        m_remoteproc_init [label="rproc = remoteproc_init(&remoteproc_ops, &arg);"];
        m_remoteproc_load [label="calls remoteproc_load() to load application"];
        m_remoteproc_boot [shape="box", label="ret=remoteproc_boot(&rproc)"];
        m_remoteproc_get_vdev [label="vdev=remoteproc_create_virtio(rproc, rpmsg_vdev_id, MASTER, NULL);"];
        m_rpmsg_shmpool_init[label="rpmsg_virtio_init_shm_pool(shpool, shbuf, shbuf_pool_size);"];
        m_rpmsg_vdev_init [label="rpdev=rpmsg_init_vdev(rpvdev, vdev, ns_bind_cb, &shm_io, shpool);"];
        m_rpmsg_create_ep [label="\lept=rpmsg_create_ept(ept, rdev, ept_name, ept_addr, dest_addr, \lendpoint_cb, ns_unbind_cb);\l"];
        m_rpmsg_send [label="rpmsg_send(ept,data)"];
        m_rpmsg_rx_cb [label="rpmsg_rx_callback()"];
        m_ep_cb [label="endpoint_cb(ept, data, size, src_addr)"];
        m_rpmsg_destroy_ep [label="rpmsg_destroy_endpoint(ept)"];

        m_remoteproc_init -> m_remoteproc_load -> m_remoteproc_boot -> m_remoteproc_get_vdev ->
        m_rpmsg_shmpool_init -> m_rpmsg_vdev_init -> m_rpmsg_create_ep -> m_rpmsg_send;
        m_rpmsg_send -> m_rpmsg_rx_cb -> m_ep_cb ->
        m_rpmsg_destroy_ep [dir="none", style="dashed"];
    }

    subgraph s_flow_nodes {
        rank="same";
        node [shape="box"];
        s_remoteproc_init [label="rproc = remoteproc_init(&remoteproc_ops, &arg);"];

        s_remoteproc_parse_rsc [label="ret = remoteproc_set_rsc_table(rproc, &rsc_table, rsc_size)"];
        s_remoteproc_get_vdev [label="vdev=remoteproc_create_virtio(rproc, rpmsg_vdev_id, SLAVE, rst_cb);"];
        s_rpmsg_vdev_init [label="rpdev=rpmsg_init_vdev(rpvdev, vdev, ns_bind_cb, &shm_io, NULL);"];
        s_rpmsg_create_ep [label="\lept=rpmsg_create_ept(ept, rdev, ept_name, ept_addr, dest_addr, \lendpoint_cb, ns_unbind_cb);\l"];
        s_rpmsg_send [label="rpmsg_send(ept,data)"];
        s_rpmsg_rx_cb [label="rpmsg_rx_callback()"];
        s_ep_cb [label="endpoint_cb(ept, data, size, src_addr)"];
        s_rpmsg_destroy_ep [label="rpmsg_destroy_endpoint(ept)"];

        s_remoteproc_init -> s_remoteproc_parse_rsc -> s_remoteproc_get_vdev ->
        s_rpmsg_vdev_init -> s_rpmsg_create_ep;
        s_rpmsg_create_ep -> s_rpmsg_rx_cb ->
        s_ep_cb -> s_rpmsg_send -> s_rpmsg_destroy_ep [dir="none", style="dashed"];
    }

    subgraph s_comment_nodes {
        node [group=s_comment, shape="note", style="filled", fillcolor="yellow"];
        rank="same";
        s_rpmsg_vdev_init_comment [label="\l* As vdev doesn't support name service, it will not create the name service endpoint;\l* It will not return until the master sets the status to DRIVER_READY\l"];
        s_rpmsg_rx_cb_comment [label="\l* It will look for the endpoint which matches the destination address.\lIf no endpoint is found, it will drop the message.\l"];
    }

    master -> m_remoteproc_init [dir="none"];
    slave -> s_remoteproc_init [dir="none"];
    m_rpmsg_send -> s_rpmsg_rx_cb [label="RPMsg data"];
    s_rpmsg_send -> m_rpmsg_rx_cb [label="RPMsg data"];

    m_remoteproc_init_comment -> m_remoteproc_init [dir="none"];
    m_remoteproc_boot_comment -> m_remoteproc_boot [dir="none"];
    m_rpmsg_vdev_init_comment -> m_rpmsg_vdev_init [dir="none"];
    m_rpmsg_create_ep_comment -> m_rpmsg_create_ep [dir="none"];

    s_rpmsg_vdev_init -> s_rpmsg_vdev_init_comment [dir="none"];
    s_rpmsg_rx_cb -> s_rpmsg_rx_cb_comment [dir="none"];

    {rank=same; master; m_remoteproc_init}
    {rank=same; slave; s_remoteproc_init}

}
source/OpenAMP/open-amp/docs/img-src/gen-graph.py (new file)
@@ -0,0 +1,56 @@
import argparse
import os
import pydot
import sys
import warnings


def gen_graph_from_gv(ifile, odir, oformat="png"):
    (graph,) = pydot.graph_from_dot_file(ifile)
    gen_graph_func = getattr(graph, "write_" + oformat)
    filename = os.path.basename(ifile)
    ofile = odir + "/" + os.path.splitext(filename)[0] + "." + oformat
    gen_graph_func(ofile)


parser = argparse.ArgumentParser(
    description='Generate images from the graphviz (.gv) sources.')
parser.add_argument('-i', "--infile", action="append",
                    help="graphviz file path")
parser.add_argument('-o', '--outdir',
                    help='output directory (default: ../img relative to this script)')
parser.add_argument('-f', '--outformat', default="png",
                    help='output image format (default: png)')

args = parser.parse_args()

# Image source directory
img_src_dir = os.path.dirname(os.path.realpath(sys.argv[0]))

img_files = []
if args.infile:
    for f in args.infile:
        # Allow paths relative to the image source directory.
        if not os.path.isfile(f):
            f = img_src_dir + "/" + f
        if not os.path.isfile(f):
            warnings.warn("Input file: " + f + " doesn't exist.")
        else:
            img_files.append(f)
else:
    for f in os.listdir(img_src_dir):
        if f.endswith(".gv"):
            img_files.append(img_src_dir + "/" + f)

if not img_files:
    sys.exit("ERROR: no image files found.")

oformat = args.outformat

if args.outdir:
    odir = args.outdir
    if not os.path.isdir(odir):
        sys.exit("--outdir " + odir + " doesn't exist")
else:
    odir = os.path.dirname(img_src_dir) + "/img"

for f in img_files:
    print("Generating " + oformat + " for " + f + " ...")
    gen_graph_from_gv(f, odir, oformat)
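# Example usage (assumes the pydot package and the Graphviz binaries are installed):
#   python gen-graph.py -i coprocessor-rpmsg-ns.gv -o ../img -f png
# With no -i argument, every .gv file next to this script is converted.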
@@ -0,0 +1,19 @@
// Remoteproc Life Cycle Management State Machine

digraph G {
    rankdir="LR"
    st_offline [label="Offline"]
    st_configured [label="Configured"]
    st_ready [label="Ready"]
    st_running [label="Running"]
    st_stopped [label="Stopped"]

    st_offline -> st_configured
    st_configured -> st_ready
    st_ready -> st_running
    st_ready -> st_stopped
    st_stopped -> st_offline
    st_running -> st_stopped

    {rank=same; st_configured; st_ready; st_running}
}
Binary files added (contents not shown in this diff):
* (image file, name not captured; 194 KiB)
* source/OpenAMP/open-amp/docs/img/coprocessor-rpmsg-ns.png (171 KiB)
* source/OpenAMP/open-amp/docs/img/coprocessor-rpmsg-static-ep.png (154 KiB)
* source/OpenAMP/open-amp/docs/img/rproc-lcm-state-machine.png (25 KiB)
* source/OpenAMP/open-amp/docs/openamp_ref.pdf
source/OpenAMP/open-amp/docs/remoteproc-design.md (new file)
@@ -0,0 +1,103 @@
# Remoteproc Design Document

Remoteproc provides an abstraction to manage the life cycle of a remote
application. For now, it only provides APIs for bringing up and
tearing down the remote application and for parsing the resource table.
It will be extended to crash detection, suspend, and resume.

## Remoteproc LCM States

| State | State Description |
|:------|:------------------|
| Offline | Initial state of a remoteproc instance. The remote presented by the remoteproc instance and its resources are powered off. |
| Configured | The remote presented by the remoteproc instance has been configured and is ready to load an application. |
| Ready | The remote presented by the remoteproc instance has an application loaded and is ready to run. |
| Running | The remote presented by the remoteproc instance is running the loaded application. |
| Stopped | The remote presented by the remoteproc instance has stopped running, but the remote is still powered on and its resources haven't been released. |

![remoteproc-lcm-state-machine](img/rproc-lcm-state-machine.png)

### State Transition

| State Transition | Transition Trigger |
|:-----------------|:-------------------|
| Offline -> Configured | Configure the remote to make it able to load an application;<br>`remoteproc_config(&rproc, &config_data)`|
| Configured -> Ready | load firmware;<br>`remoteproc_load(&rproc, &path, &image_store, &image_store_ops, &image_info)` |
| Ready -> Running | start the processor; <br>`remoteproc_start(&rproc)` |
| Ready -> Stopped | stop the processor; <br>`remoteproc_stop(&rproc)`; <br>`remoteproc_shutdown(&rproc)` (Stopped is the intermediate state of the shutdown operation) |
| Running -> Stopped | stop the processor; <br>`remoteproc_stop(&rproc)`; <br>`remoteproc_shutdown(&rproc)` |
| Stopped -> Offline | shut down the processor; <br>`remoteproc_shutdown(&rproc)` |

## Remoteproc User APIs

* Initialize the remoteproc instance:
```
struct remoteproc *remoteproc_init(struct remoteproc *rproc,
                                   struct remoteproc_ops *ops, void *priv)
```
* Release the remoteproc instance:
```
int remoteproc_remove(struct remoteproc *rproc)
```
* Add memory to the remoteproc:
```
void remoteproc_add_mem(struct remoteproc *rproc, struct remoteproc_mem *mem)
```
* Get the libmetal I/O region of a remoteproc memory by name:
```
struct metal_io_region *remoteproc_get_io_with_name(struct remoteproc *rproc, const char *name)
```
* Get the libmetal I/O region of a remoteproc memory by physical address:
```
struct metal_io_region *remoteproc_get_io_with_pa(struct remoteproc *rproc, metal_phys_addr_t pa);
```
* Get the libmetal I/O region of a remoteproc memory by virtual address:
```
struct metal_io_region *remoteproc_get_io_with_va(struct remoteproc *rproc, void *va);
```
* Map memory and add the memory to the remoteproc instance:
```
void *remoteproc_mmap(struct remoteproc *rproc,
                      metal_phys_addr_t *pa, metal_phys_addr_t *da,
                      size_t size, unsigned int attribute,
                      struct metal_io_region **io);
```
* Set the resource table of the remoteproc:
```
int remoteproc_set_rsc_table(struct remoteproc *rproc,
                             struct resource_table *rsc_table,
                             size_t rsc_size)
```
* Configure the remote presented by the remoteproc instance to make it able
  to load an application:
```
int remoteproc_config(struct remoteproc *rproc, void *data)
```
* Load an application to the remote presented by the remoteproc instance to make
  it ready to run:
```
int remoteproc_load(struct remoteproc *rproc, const char *path,
                    void *store, struct image_store_ops *store_ops,
                    void **img_info)
```
* Run the application on the remote presented by the remoteproc instance:
```
int remoteproc_start(struct remoteproc *rproc)
```
* Stop the application on the remote presented by the remoteproc instance:
```
int remoteproc_stop(struct remoteproc *rproc)
```
* Shut down the remote presented by the remoteproc instance:
```
int remoteproc_shutdown(struct remoteproc *rproc)
```
* Create a virtio device from the resource table vdev resource, and add it to the
  remoteproc instance:
```
struct virtio_device *remoteproc_create_virtio(struct remoteproc *rproc,
                                               int vdev_id, unsigned int role,
                                               void (*rst_cb)(struct virtio_device *vdev))
```
* Remove a virtio device from the remoteproc instance:
```
void remoteproc_remove_virtio(struct remoteproc *rproc,
                              struct virtio_device *vdev)
```
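
Putting the APIs above together, a minimal, hedged sketch of a master-side life cycle is shown below; `app_remoteproc_ops`, `app_image_store`, `app_image_store_ops`, and the firmware path are placeholders that a real application would supply:

```
#include <openamp/remoteproc.h>

/* Placeholders the application is expected to provide. */
extern struct remoteproc_ops app_remoteproc_ops;
extern struct image_store_ops app_image_store_ops;
extern void *app_image_store;

int run_remote_firmware(struct remoteproc *rproc, void *config_data)
{
    void *img_info = NULL;
    int ret = -1;

    /* Offline: create the instance around platform-specific ops. */
    if (!remoteproc_init(rproc, &app_remoteproc_ops, NULL))
        return ret;

    /* Offline -> Configured */
    ret = remoteproc_config(rproc, config_data);
    if (ret)
        goto out;

    /* Configured -> Ready: load the remote application image. */
    ret = remoteproc_load(rproc, "remote_firmware.elf", app_image_store,
                          &app_image_store_ops, &img_info);
    if (ret)
        goto out;

    /* Ready -> Running */
    ret = remoteproc_start(rproc);
    if (ret)
        goto out;

    /* ... communicate with the remote, e.g. over RPMsg ... */

    /* Running -> Stopped -> Offline */
    remoteproc_stop(rproc);
    remoteproc_shutdown(rproc);
out:
    remoteproc_remove(rproc);
    return ret;
}
```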
source/OpenAMP/open-amp/docs/rpmsg-design.md (new file)
@@ -0,0 +1,109 @@
# RPMsg Design Document

RPMsg is a framework that allows communication between two processors.
The RPMsg implementation in the OpenAMP library is based on virtio. It complies
with the RPMsg Linux kernel implementation. It defines the handshaking for
setting up and tearing down the communication between applications
running on two processors.

## RPMsg User API Flow Charts
### RPMsg Static Endpoint
![rpmsg-static-ep](img/coprocessor-rpmsg-static-ep.png)
### Binding Endpoint Dynamically with Name Service
![rpmsg-ns-dynamic](img/coprocessor-rpmsg-ns-dynamic.png)
### Creating Endpoint Dynamically with Name Service
![rpmsg-ns](img/coprocessor-rpmsg-ns.png)

## RPMsg User APIs
* The RPMsg virtio master initializes the shared buffers pool (the RPMsg virtio slave
  doesn't need to use this API):
```
void rpmsg_virtio_init_shm_pool(struct rpmsg_virtio_shm_pool *shpool,
                                void *shbuf, size_t size)
```
* Initialize the RPMsg virtio device:
```
int rpmsg_init_vdev(struct rpmsg_virtio_device *rvdev,
                    struct virtio_device *vdev,
                    rpmsg_ns_bind_cb ns_bind_cb,
                    struct metal_io_region *shm_io,
                    struct rpmsg_virtio_shm_pool *shpool)
```
* Deinitialize the RPMsg virtio device:
```
void rpmsg_deinit_vdev(struct rpmsg_virtio_device *rvdev)
```
* Get the RPMsg device from the RPMsg virtio device:
```
struct rpmsg_device *rpmsg_virtio_get_rpmsg_device(struct rpmsg_virtio_device *rvdev)
```
* Create an RPMsg endpoint:
```
int rpmsg_create_ept(struct rpmsg_endpoint *ept,
                     struct rpmsg_device *rdev,
                     const char *name, uint32_t src, uint32_t dest,
                     rpmsg_ept_cb cb, rpmsg_ns_unbind_cb ns_unbind_cb)
```
* Destroy an RPMsg endpoint:
```
void rpmsg_destroy_ept(struct rpmsg_endpoint *ept)
```
* Check whether the local RPMsg endpoint is bound to the remote and ready to send
  messages:
```
int is_rpmsg_ept_ready(struct rpmsg_endpoint *ept)
```
* Send a message with the RPMsg endpoint default binding:
```
int rpmsg_send(struct rpmsg_endpoint *ept, const void *data, int len)
```
* Send a message with the RPMsg endpoint, specifying the destination address:
```
int rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len,
                 uint32_t dst)
```
* Send a message with the RPMsg endpoint using explicit source and destination
  addresses:
```
int rpmsg_send_offchannel(struct rpmsg_endpoint *ept,
                          uint32_t src, uint32_t dst,
                          const void *data, int len)
```
* Try to send a message with the RPMsg endpoint default binding; if no buffer is
  available, it returns:
```
int rpmsg_trysend(struct rpmsg_endpoint *ept, const void *data,
                  int len)
```
* Try to send a message with the RPMsg endpoint, specifying the destination address;
  if no buffer is available, it returns:
```
int rpmsg_trysendto(struct rpmsg_endpoint *ept, void *data, int len,
                    uint32_t dst)
```
* Try to send a message with the RPMsg endpoint using explicit source and destination
  addresses; if no buffer is available, it returns:
```
int rpmsg_trysend_offchannel(struct rpmsg_endpoint *ept,
                             uint32_t src, uint32_t dst,
                             const void *data, int len)
```
## RPMsg User Defined Callbacks
* RPMsg endpoint message received callback:
```
int (*rpmsg_ept_cb)(struct rpmsg_endpoint *ept, void *data,
                    size_t len, uint32_t src, void *priv)
```
* RPMsg name service binding callback. If the user defines such a callback, it is
  called when a name service announcement arrives and no registered endpoint is
  found to bind to that name service. If this callback is not defined, the name
  service announcement is dropped:
```
void (*rpmsg_ns_bind_cb)(struct rpmsg_device *rdev,
                         const char *name, uint32_t dest)
```
* RPMsg endpoint name service unbind callback. If the user defines such a callback,
  it is called when a name service destroy message arrives, to notify the user
  application that the remote has destroyed the service:
```
void (*rpmsg_ns_unbind_cb)(struct rpmsg_endpoint *ept)
```
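
A minimal, hedged sketch of how these APIs and callbacks fit together once the application has a `struct rpmsg_device`; the endpoint name and the payload are placeholders:

```
#include <openamp/rpmsg.h>

static struct rpmsg_endpoint demo_ept;

/* Endpoint receive callback; a real application would consume 'data'. */
static int demo_ept_cb(struct rpmsg_endpoint *ept, void *data,
                       size_t len, uint32_t src, void *priv)
{
    (void)ept; (void)data; (void)len; (void)src; (void)priv;
    return 0;
}

/* Name service unbind callback: the remote has destroyed the service. */
static void demo_ept_unbind(struct rpmsg_endpoint *ept)
{
    rpmsg_destroy_ept(ept);
}

int demo_send_hello(struct rpmsg_device *rdev)
{
    const char msg[] = "hello";
    int ret;

    /* RPMSG_ADDR_ANY lets the framework pick the local address and
     * resolve the destination through the name service. */
    ret = rpmsg_create_ept(&demo_ept, rdev, "rpmsg-demo-channel",
                           RPMSG_ADDR_ANY, RPMSG_ADDR_ANY,
                           demo_ept_cb, demo_ept_unbind);
    if (ret)
        return ret;

    /* Only send once the endpoint has been bound to the remote side. */
    if (is_rpmsg_ept_ready(&demo_ept))
        ret = rpmsg_send(&demo_ept, msg, sizeof(msg));

    return ret;
}
```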