The Snap4City platform can be installed in several different configurations exploiting the same modules. The configurations range from small cities up to large cities with many users and huge volumes of data. The modules can be provided as VM (Virtual Machine) appliances or as Docker containers, so that two kinds of installation are possible, as well as a mix of them. They are presented in the Tutorial Course 2020, Part 6: https://www.snap4city.org/download/video/course2020/sys/ The different Snap4City configurations, and each component mentioned in the following, are described in detail together with their installation instructions at: https://www.snap4city.org/471 Please note that each VM is an appliance with a number of components which can be updated directly from the DISIT GitHub repositories: https://github.com/disit
According to the MDA requirements, we propose to adopt the configuration called DataCity-Large, which is reported in the following figure and described in similar terms in Tutorial Part 6: https://www.snap4city.org/577
DCL: the DataCity-Large configuration includes the installation of:
- Snap4CityMAIN VM/Docker compose with: Dashboard Builder, Wizard, library of Widgets, WebSocket servers, Data Inspector, User Statistics, External Service Manager, Resource Manager, MicroApplications Manager, LDAP/KeyCloak for authorization/authentication, Kafka (which has to be scaled up if the traffic on synoptics is high), MyKPI, MyPOI, Synoptics and Custom Widgets/PINs, and IoT Directory;
- KBSSM VM: ServiceMap, Smart City API, Km4City Knowledge Base (LOG, Flint, Virtuoso); it can be duplicated in federated configurations to increase robustness and traffic capacity, or run in FT (fault tolerance);
- Heatmap Server VM: with a heatmap manager to generate the GeoTIFF files, which are loaded into a GIS such as ArcGIS or GeoServer for distribution via the WMS protocol; it can be started in FT;
- Living Lab Support VM: Drupal 7 with Snap4City extensions; it can be started in FT;
- MCLSCount VM: a cluster of VMs managing Docker containers with Marathon, Mesos and DISCES-EM. The containers host IoT Apps, Data Analytics and Web Scraping processes. The cluster has 3 VMs for the masters and a number of VMs for the Docker containers. A true cluster makes sense when there are more than about 200 containers; otherwise a static allocation on a few VMs in FT may be enough. The Snap4City solution has been tested with thousands of Docker containers, and the largest VM cluster presently consists of 16 VMs for containers, elastically managed [Scalable2018], [DataFlow2019].
- Typically, there is an average of 70 containers per VM, depending on the container image size and the VM size. Data Analytics containers consume about 4 times more CPU and memory than those for IoT Apps, but they use resources more sporadically and their timing is more relaxed.
- IOTDSES VM: a cluster of federated NiFi instances on Dockers with their load balancer, also allocated on VMs;
- ESSTORE VM: a cluster of Elasticsearch and Kibana VMs. The number of VMs depends on the size of the global storage, which can grow elastically with need during operation. Alternatively, the usage of a shared common SAN and Docker could be viable if it is fast enough;
- IOTOBSF VM: IoT Orion Brokers (V1/V2), also on Dockers or VMs, internal or external, etc.;
- RStudio Server VM: for the development of R processes; its size depends on the number of developers working on it at the same time;
- Python Server VM: for the development of Python processes; its size depends on the number of developers working on it at the same time;
- CKAN DataGate VM: for the connection to the CKAN open data network;
- OpenMAINT VM: including OpenMAINT and BIM, for the development of Workflows, BPM and BIM.
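The Heatmap Server mentioned above interpolates scattered sensor readings into a raster that is then exported as GeoTIFF. A minimal sketch of such an interpolation, using inverse-distance weighting (IDW) onto a regular grid, is shown below; the function name, parameters and numbers are illustrative assumptions, not Snap4City's actual heatmap manager implementation.

```python
# Illustrative sketch (hypothetical, not the Snap4City heatmap manager):
# inverse-distance-weighted interpolation of sensor points onto a grid,
# the kind of raster later written out as a GeoTIFF.

def idw_grid(points, nx, ny, bbox, power=2.0):
    """points: list of (x, y, value); bbox: (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bbox
    grid = []
    for j in range(ny):
        y = ymin + (ymax - ymin) * j / (ny - 1)
        row = []
        for i in range(nx):
            x = xmin + (xmax - xmin) * i / (nx - 1)
            num = den = 0.0
            for px, py, v in points:
                d2 = (x - px) ** 2 + (y - py) ** 2
                if d2 == 0:            # grid node coincides with a sensor
                    num, den = v, 1.0
                    break
                w = 1.0 / d2 ** (power / 2)  # weight falls off with distance
                num += w * v
                den += w
            row.append(num / den)
        grid.append(row)
    return grid

# Two sensors at opposite corners of a unit square, 3x3 grid:
# the centre cell averages the two readings.
cells = idw_grid([(0, 0, 10.0), (1, 1, 30.0)], 3, 3, (0, 0, 1, 1))
```

A real deployment would write `cells` to a georeferenced raster (e.g. with GDAL) and publish it through the GIS, as described above.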
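The MCLSCount figures above (an average of about 70 containers per VM, with Data Analytics containers costing roughly 4 times more CPU and memory than IoT App containers) allow a back-of-the-envelope VM-count estimate. The sketch below is only an illustration: the weighted-equivalent model and the function name are assumptions, while the 70 and 4x figures come from the text.

```python
import math

def vms_needed(iot_apps, data_analytics, per_vm=70, da_weight=4):
    """Rough VM count: weight each Data Analytics container as
    da_weight IoT-App-equivalents, then divide by the per-VM capacity.
    per_vm=70 and da_weight=4 reflect the averages reported above;
    the equivalence model itself is a simplifying assumption."""
    equivalents = iot_apps + da_weight * data_analytics
    return math.ceil(equivalents / per_vm)

# e.g. 400 IoT Apps plus 50 Data Analytics containers:
# 400 + 4*50 = 600 equivalents -> ceil(600 / 70) = 9 VMs
print(vms_needed(400, 50))
```

Such an estimate can also help decide whether a full Marathon/Mesos cluster is warranted (more than about 200 containers, as noted above) or a static allocation on a few VMs in FT suffices.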
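For the ESSTORE cluster, whose VM count depends on the size of the global storage, a simple sizing sketch can make the dependency concrete. Everything in the example below (ingest rate, retention, node disk size, headroom factor) is a hypothetical assumption for illustration, not a Snap4City requirement.

```python
import math

def es_data_nodes(daily_gb, retention_days, replicas=1,
                  node_disk_gb=2000, headroom=0.7):
    """Rough Elasticsearch data-node count: raw ingest times retention,
    multiplied by (1 + replicas) for replica copies, divided by the
    usable disk per node (capped at a headroom fraction of capacity to
    leave room for segment merges and shard relocation). All figures
    are illustrative assumptions."""
    total_gb = daily_gb * retention_days * (1 + replicas)
    usable_per_node = node_disk_gb * headroom
    return math.ceil(total_gb / usable_per_node)

# e.g. 20 GB/day kept for one year with one replica on 2 TB nodes:
# 20 * 365 * 2 = 14600 GB -> ceil(14600 / 1400) = 11 data nodes
print(es_data_nodes(20, 365))
```

Since the storage grows elastically during operation, such an estimate would be revisited periodically rather than fixed at installation time.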