The Netgrif Application Engine (NAE), a part of the Netgrif platform, is built to interpret programs written in the Petriflow language. Petriflow is a high-level programming language for process-driven application development and follows the programming paradigm called process-driven programming (PDP). To understand the meaning of the PDP paradigm, it is helpful to compare Petriflow's concepts with other well-known programming paradigms. In this article, we discuss how process-driven programming extends and combines the advantages of object-oriented programming (OOP), business process management (BPM), event-driven programming (EDP) and relational databases (RDB).
OOP, BPM and EDP
Object-oriented programming languages brought a distinguishing feature: encapsulation. This was a key step for a programming world that needed new languages to cope with the growing complexity of programmes. OOP is a concept that addressed the problems developers had with procedural and imperative programming. The concept of classes that contain both data and methods strongly supports the modularity of applications.
While binding methods to data in classes was one of the main features of OOP that helped to create more modular programmes, PDP adds processes to classes to describe the life cycle of the objects of a class. By adding processes that define when the methods of a class can be called and who can call them, applications can be programmed more easily, faster and therefore more cheaply.
The main building blocks of object-oriented programmes are classes and their objects. In comparison, the main building blocks of process-driven programmes in the Petriflow language are processes and their instances, called cases. A class is a blueprint of an object, and a Petriflow process is a blueprint of a Petriflow process instance. Simply put, a Petriflow process is a class enriched by a workflow process that defines the life cycle of the objects of that class. More precisely, a Petriflow process consists of data, tasks and actions, roles, and a workflow process.
Just as attributes of classes in OOP, data variables in Petriflow processes represent all attributes of a Petriflow process instance. A change of the value of a data variable can be triggered by a so-called set-event; reading the value of a data variable can be triggered by a so-called get-event.
Tasks are the active parts of Petriflow processes. Data variables can be associated with workflow tasks to define data fields and create task forms. A data field, i.e. an association of a data variable with a task, is given as a rich relation that states:
- whether a get-event and/or a set-event can be triggered on the data variable, i.e. whether its value is readable and/or editable,
- whether a value of the data variable is required,
- what the valid values of the data variable are within the data field.
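The rich relation above can be sketched in plain Python (this is an illustrative model, not Petriflow code; the class and attribute names are hypothetical):

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class DataField:
    """Illustrative sketch of a data field: the rich relation binding a
    data variable to a task (names are hypothetical, not Petriflow API)."""
    variable: str                       # id of the referenced data variable
    readable: bool = True               # may a get-event read the value?
    editable: bool = False              # may a set-event change the value?
    required: bool = False              # must a valid value be present to finish?
    validator: Optional[Callable[[Any], bool]] = None  # valid values

    def is_valid(self, value: Any) -> bool:
        if value is None:
            return not self.required    # a required field must have a value
        return self.validator is None or self.validator(value)

# Example: an editable, required "amount" field accepting positive numbers.
amount = DataField("amount", editable=True, required=True,
                   validator=lambda v: isinstance(v, (int, float)) and v > 0)
```

Under this sketch, `amount.is_valid(5)` holds, while `amount.is_valid(-1)` and `amount.is_valid(None)` do not.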
Tasks have a simple life cycle: a task can be enabled, disabled or executed. A change of the state of a task can be triggered as follows:
- if a task is enabled, its change to the state executed can be triggered by a so-called assign-event,
- if a task is enabled, its readable data fields are accessible for reading by get-events,
- if a task is executed, its readable data fields are accessible for reading by get-events,
- if a task is executed, its editable data fields can be changed to valid values by set-events,
- if a task is executed and all its required data fields have valid values, its change to the state enabled or disabled can be triggered by a so-called finish-event.
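The task life cycle above can be summarized as a small state machine. A minimal Python sketch, assuming a task with one required field (illustrative only, not Petriflow code):

```python
class Task:
    """Illustrative task life cycle: enabled -> (assign) -> executed
    -> (finish, once required fields are filled) -> enabled again."""

    def __init__(self, required_fields):
        self.state = "enabled"                 # enabled | disabled | executed
        self.required_fields = set(required_fields)
        self.values = {}                       # data field values

    def assign(self):
        if self.state != "enabled":
            raise RuntimeError("assign-event allowed only on an enabled task")
        self.state = "executed"

    def set(self, field, value):
        if self.state != "executed":
            raise RuntimeError("set-events allowed only on an executed task")
        self.values[field] = value

    def finish(self):
        missing = self.required_fields - self.values.keys()
        if self.state != "executed" or missing:
            raise RuntimeError(f"cannot finish, missing fields: {missing}")
        self.state = "enabled"  # or "disabled", depending on the workflow

task = Task(required_fields={"amount"})
task.assign()              # enabled -> executed
task.set("amount", 100)    # fill the required data field
task.finish()              # executed -> enabled (in this simplified sketch)
```

In the real engine, the Petri net underlying the workflow decides whether the task ends up enabled or disabled after the finish-event; here that choice is hard-coded for brevity.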
Using the principles of event-driven programming, each data variable and each task has an associated event listener: whenever an event triggers a change of the value of a data variable or a change of the state of a task, a reaction can be defined in the event listener by pieces of code called actions. Whenever an event occurs, the actions in its event listener are executed. Within actions, events for tasks and for data variables can be emitted. In this way, events and their reactions can form chains.
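A minimal sketch of such chaining, assuming a generic event bus (illustrative Python, not Petriflow actions; the names are hypothetical):

```python
class EventBus:
    """Illustrative event listeners: actions registered for an event may
    themselves emit further events, creating reaction chains."""

    def __init__(self):
        self.listeners = {}   # event name -> list of actions
        self.log = []         # order in which events occurred

    def on(self, event, action):
        self.listeners.setdefault(event, []).append(action)

    def emit(self, event, payload=None):
        self.log.append(event)
        for action in self.listeners.get(event, []):
            action(self, payload)      # an action may emit events itself

bus = EventBus()
# Reaction chain: a set-event on "price" recomputes "total",
# which in turn triggers its own set-event listener.
bus.on("set:price", lambda b, v: b.emit("set:total", v * 1.2))
bus.on("set:total", lambda b, v: None)
bus.emit("set:price", 100)   # causes "set:total" to be emitted as well
```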
Roles or lists of users can be associated with the events of tasks, defining for each task which users are authorized to emit events on that task. Similarly to data fields, an association of users with events is a rich relation. For example, when a user authorized to emit the assign-event of a task emits it, this user has to choose one of the users authorized to emit the finish-event of that task; only the chosen user is then authorized to emit set-events on the editable data fields of the task and to emit its finish-event. In other words, by emitting an assign-event, the authorized user assigns the task to a user (possibly themselves) who is authorized to perform it, i.e. to fill its editable data fields and finish it.
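The assignment rule can be sketched as follows (illustrative Python; user names and the permission model are hypothetical, not the engine's actual authorization API):

```python
class AuthorizedTask:
    """Sketch of task authorization: an assign-authorized user picks the
    assignee, and only the assignee may then set fields and finish."""

    def __init__(self, can_assign, can_finish):
        self.can_assign = set(can_assign)   # users allowed to assign the task
        self.can_finish = set(can_finish)   # users allowed to perform the task
        self.assignee = None

    def assign(self, user, assignee):
        if user not in self.can_assign:
            raise PermissionError(f"{user} may not assign this task")
        if assignee not in self.can_finish:
            raise PermissionError(f"{assignee} may not perform this task")
        self.assignee = assignee            # possibly the assigning user

    def finish(self, user):
        if user != self.assignee:
            raise PermissionError(f"only {self.assignee} may finish this task")

task = AuthorizedTask(can_assign={"manager"}, can_finish={"alice", "bob"})
task.assign("manager", "alice")   # manager delegates the task to alice
```

After this assignment, only `"alice"` can emit the finish-event; `"bob"`, although generally authorized to perform the task, was not chosen.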
As its workflow process, the Petriflow language uses place/transition Petri nets enriched by reset arcs, inhibitor arcs and read arcs to define the life cycle of a Petriflow process. Places of the Petri net represent control variables; transitions represent the tasks of the workflow process. A task is enabled whenever the corresponding transition in the underlying Petri net is enabled. An assign-event on the task consumes tokens from the input places of the corresponding transition and moves the task to the state executed. A finish-event on an executed task produces tokens in the output places of the corresponding transition. In this way, the workflow process defines when a task is enabled, executed or disabled. The life cycle of a Petriflow process is thus given as a flow of assign/get/set/finish events on tasks and data variables, respecting the restrictions imposed on events by the underlying Petri net.
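The token semantics described above can be sketched for plain place/transition nets (a simplified Python model without reset, inhibitor or read arcs; names are illustrative):

```python
class PetriNet:
    """Sketch of the assign/finish semantics on a place/transition net:
    assign consumes tokens from a transition's input places, finish
    produces tokens in its output places."""

    def __init__(self, marking):
        self.marking = dict(marking)    # place -> number of tokens

    def enabled(self, inputs):
        """A transition is enabled if every input place holds enough tokens."""
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def assign(self, inputs):
        if not self.enabled(inputs):
            raise RuntimeError("task is not enabled")
        for p, w in inputs.items():
            self.marking[p] -= w        # consume input tokens

    def finish(self, outputs):
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w  # produce tokens

# A trivial workflow: place p1 --(task t)--> place p2, one token in p1.
net = PetriNet({"p1": 1, "p2": 0})
net.assign({"p1": 1})   # assign-event: the task becomes executed, p1 empties
net.finish({"p2": 1})   # finish-event: p2 receives a token
```

After the finish-event the marking is `{"p1": 0, "p2": 1}`, so the task is no longer enabled: the net has recorded that the work moved forward.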
When compared with relational databases, Petriflow processes correspond to tables, while instances (cases) of Petriflow processes correspond to single records (rows) of a table. Similarly to foreign keys in RDB and to object attributes holding references to other objects in OOP, data variables of Petriflow processes can store references to instances of Petriflow processes, as well as references to tasks and lists of tasks of Petriflow process instances. In this way, one can easily share the form associated with one task as a subform within other tasks and implement a single-source-of-truth architecture.
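The foreign-key analogy can be sketched in a few lines (illustrative Python; the process names, field names and e-mail addresses are made up for the example):

```python
class Case:
    """Sketch of a process instance: the process is the 'table',
    the case's data variables are the 'row'."""

    def __init__(self, process, data):
        self.process = process   # name of the Petriflow process
        self.data = data         # data variables of this instance

customer = Case("Customer", {"name": "ACME", "email": "info@acme.example"})
# The order stores a reference to the customer case, like a foreign key.
order = Case("Order", {"amount": 100, "customerRef": customer})

# Any task of the order can render the customer's fields as a subform;
# a change in the customer case is visible everywhere it is referenced,
# which is the single-source-of-truth property.
customer.data["email"] = "sales@acme.example"
```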
Process-driven programming in the Petriflow language employs principles of object-oriented programming, relational databases and event-driven programming, combining them with user authorization and the concept of an object life cycle based on workflow processes borrowed from business process management. All this should bring a higher level of programming, with as little code as possible and as much code as necessary, with the aim of making the development of complex applications more structured, closer to the business user and faster, without the necessity to deal with the implementation details of the middleware. Netgrif Application Builder (NAB) is a platform where you can develop process-driven programs by drag and drop, without coding when possible and with coding when needed. NAB produces processes in Petriflow code, a combination of XML and Groovy. This code can be interpreted in NAE, a Petriflow interpreter written in Java using the Spring Boot framework and storing the data of Petriflow process instances in MongoDB. Similarly to SQL built over relational databases, the Petriflow language also provides a powerful query language that enables the creation of filters over process instances and their tasks.