The Policy Cloud aims to support policy makers, public authorities and other stakeholders in preparing effective policy models by evaluating and analysing different datasets gathered from multiple reliable sources.
To guarantee that the complexity of the Policy Cloud is properly handled and analysed, the State of the Art analysis and the Requirement Analysis will be performed throughout the entire duration of the project, from 2020 to 2023.
In particular, the State of the Art will investigate the technologies used by the platform, while the Requirement Analysis will make sure that measurable and well-defined specifications are established. Together, they will allow the user, business and system requirements of the Policy Cloud architecture to be properly designed.
For the Requirement Analysis, both use cases and technical needs will be considered.
The use case requirements reflect the stakeholders' necessities and are formalized into an output called Stakeholder Requirements Specification (StRS). The technical requirements transform those needs into a product and are formalized into the System Requirements Specification (SyRS) and the Software Requirements Specification (SRS).
The use case requirement scenarios cover four different topics.
Different actors are involved in the development, deployment and usage of the solutions offered by the Policy Cloud platform: Data Owners, Data Engineers, Policy Makers and Data Scientists. They make sure the software meets specific requirements that make it reliable; in particular, the cloud should interoperate properly with other software and guarantee high performance.
State of the Art
In addition, since the aim of the project is to develop measurable and well-defined specifications, the main state-of-the-art technologies will be linked to the context of the Policy Cloud project, showing how the platform can benefit from their use.
Evidence-Based Policy Making (EBPM) will be used to ensure that policy choices are made objectively, following a scientific approach rather than techniques based on intuition, ideology or theory alone.
Visualization charts, both static and dynamic, will then be used to broaden knowledge in a more direct and structured way, making the tool more user-friendly and understandable.
It is also crucial to validate efficient data fusion for Policy Cloud scenarios. Two aspects will be considered: the scalability of massive data from multiple sources, and the capacity to apply analytics so that only the valuable data is stored rather than the entire bulk. For the first aspect, the attention will be on fixing problematic data and improving data reliability and completeness. For the second aspect, different tools will be taken into account, such as Apache Spark Streaming, which helps classify data during live stream processing. Others, like KSQL or Confluent KSQL, supply a SQL interface that supports streaming operations such as data filtering, transformation and aggregation.
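The idea of applying analytics during ingestion, so that only valuable records are stored instead of the entire bulk, can be sketched in plain Python. This is a minimal illustration, not the actual platform: the record fields, the incoming stream and the "valuable" rule (`is_valuable`) are all illustrative assumptions.

```python
# Minimal sketch: filter, transform and aggregate a simulated live stream
# so that only valuable records are kept, instead of storing the entire bulk.
# The record fields and the "valuable" rule are illustrative assumptions.

def is_valuable(record):
    """Keep only complete records with a minimum relevance score."""
    return record.get("source") is not None and record.get("score", 0) >= 0.5

def process_stream(stream):
    """Filter, transform and aggregate records as they arrive."""
    kept = []
    totals = {}
    for record in stream:
        if not is_valuable(record):
            continue  # discard incomplete or low-value data early
        record = {**record, "score": round(record["score"], 2)}  # transform
        kept.append(record)  # store only the filtered subset
        totals[record["source"]] = totals.get(record["source"], 0) + 1  # aggregate per source
    return kept, totals

# Simulated incoming stream of records from multiple sources
stream = [
    {"source": "sensor-a", "score": 0.91},
    {"source": None, "score": 0.99},        # incomplete: dropped
    {"source": "sensor-b", "score": 0.12},  # low value: dropped
    {"source": "sensor-a", "score": 0.75},
]

kept, totals = process_stream(stream)
print(len(kept), totals)  # 2 {'sensor-a': 2}
```

In a real deployment the same filter/transform/aggregate steps would be expressed as Spark Streaming operations or as KSQL statements running continuously over the live stream, rather than a Python loop.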
The Policy Cloud will be designed as an Infrastructure as a Service (IaaS), because this model lets customers access the software and hardware needed to run computing operations (e.g. storage, data processing).
If you would like to learn more, download the full report.