One of our goals for 2021 is to bring the Netgrif Application Engine into the cloud and deliver to our customers the best and fastest way of creating process-driven applications. This was not easy with the original communication model, in which our backend and frontend sent data directly to each other.
What do we want to achieve? In Netgrif Cloud, multiple applications will be implemented and deployed in a microservice architecture under given namespaces, and they should be able to communicate with each other. A single process-driven application in this cloud space will not hold information about other applications; it will do only what it was implemented for: executing processes. However, we want to provide the possibility of calling events from one application in other applications, searching for objects in other applications, and getting and setting data in other applications. And all of this at runtime.
An example: there are two applications (A and B) running in the cloud. Application A has an action that assigns a task to a specific user in application B. But application A has no information about application B's address or URL, as it can be anywhere in the world.
What does the original communication look like? Communication in NAE was originally solved with a REST API. If we wanted to solve the problem above that way, we would have to implement a REST endpoint in application B and provide its properties (URL, port, request body, response structure, etc.) to application A so that it could call that specific endpoint. But what if there are many more applications in the cloud and we want all of them to be able to call actions on any other? We would have to implement all the endpoints for all the applications with specific URLs, and somehow register all those endpoints in the existing applications. This would require a lot of work; it is not impossible to do dynamically, but even then REST would be slow when retrieving large amounts of data, and it is arguably too heavyweight for very simple events.
What does our solution look like? We decided to use a combination of fast, modern protocols: a message broker for simple events and the gRPC protocol for searching and for setting/getting data. We are using RabbitMQ's integration with Spring Cloud to handle task and case events such as assign, cancel, finish, and create. These events do not require complex request and response bodies, so RabbitMQ's fast message broker will do the job. RabbitMQ works on the publisher/subscriber principle: application A sends a message to a queue to which application B is subscribed, and when a message is available in the queue, application B processes it. If a new application is deployed to Netgrif Cloud, it can register with the message broker, as it is a standalone application in the microservice architecture of Netgrif Cloud.
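The publisher/subscriber flow above can be sketched in a few lines. This is a minimal in-memory illustration, not NAE or RabbitMQ code: the `EventBroker` class, the queue name, and the message shape are all assumptions made for the example.

```python
from collections import defaultdict
from typing import Callable

class EventBroker:
    """Illustrative in-memory stand-in for a message broker such as RabbitMQ."""

    def __init__(self) -> None:
        # queue name -> list of subscribed handlers
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, queue: str, handler: Callable[[dict], None]) -> None:
        """Register a handler (application B) for messages on a queue."""
        self._subscribers[queue].append(handler)

    def publish(self, queue: str, message: dict) -> None:
        """Deliver a message (from application A) to every subscriber of the queue."""
        for handler in self._subscribers[queue]:
            handler(message)

broker = EventBroker()
processed = []

# Application B subscribes to task events without exposing its address.
broker.subscribe("task-events", lambda msg: processed.append(msg))

# Application A publishes an assign event, knowing only the queue name.
broker.publish("task-events", {"event": "assign", "taskId": "t1", "userId": "u42"})

print(processed)  # [{'event': 'assign', 'taskId': 't1', 'userId': 'u42'}]
```

The key property is that the publisher addresses a queue, never another application, which is what lets newly deployed applications join simply by subscribing.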
For more complicated requests, such as searching tasks and cases or setting and getting data, where the request or result can be a large object or a list of objects, we chose the gRPC protocol. It is essentially a remote procedure call: the client calls a function on the server and then processes its result. However, this still requires a URL address (the very problem of the original REST-like solution), so we decided to implement a new application called the Process Resolver, which will also be part of the microservice architecture. When a process is imported, this application registers some basic information about it. And when an action is called through the gRPC system, the Process Resolver returns the address of the required process.
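The Process Resolver's lookup role can be sketched as a simple registry mapping process identifiers to addresses. The class, method names, and the example address below are assumptions for illustration; the real component would register this information on process import and serve lookups over the network.

```python
class ProcessResolver:
    """Illustrative registry that resolves a process to its host address."""

    def __init__(self) -> None:
        self._registry: dict[str, str] = {}

    def register(self, process_id: str, address: str) -> None:
        """Record basic process information when a process is imported."""
        self._registry[process_id] = address

    def resolve(self, process_id: str) -> str:
        """Return the address of the application hosting the given process."""
        try:
            return self._registry[process_id]
        except KeyError:
            raise LookupError(f"unknown process: {process_id}")

resolver = ProcessResolver()

# On import, application B registers its process (hypothetical address).
resolver.register("order_process", "app-b.netgrif-cloud.internal:9090")

# Later, application A resolves the address before making its gRPC call.
print(resolver.resolve("order_process"))  # app-b.netgrif-cloud.internal:9090
```

This indirection is what keeps individual applications free of any knowledge about each other: only the resolver knows where each process lives.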