Open Access

ARTICLE

Spatio-Temporal Earthquake Analysis via Data Warehousing for Big Data-Driven Decision Systems

Georgia Garani1,*, George Pramantiotis2, Francisco Javier Moreno Arboleda3

1 Department of Digital Systems, School of Technology, Gaiopolis, University of Thessaly, Larisa, 41500, Greece
2 School of Science and Technology, Hellenic Open University, Patras, 26335, Greece
3 Departamento de Ciencias de la Computación y de la Decisión, Universidad Nacional de Colombia Sede Medellín, Medellín, 050021, Colombia

* Corresponding Author: Georgia Garani. Email: email

(This article belongs to the Special Issue: Big Data-Driven Intelligent Decision Systems)

Computers, Materials & Continua 2026, 86(3), 85 https://doi.org/10.32604/cmc.2025.071509

Abstract

Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.

Keywords

Data warehouse; data analysis; big data; decision systems; seismology; data visualization

1  Introduction

Natural phenomena such as cyclones, earthquakes, floods, hurricanes, and volcanic eruptions unfortunately occur relatively often on our planet and can be disastrous. These phenomena can appear with or without warning and can cause economic and material damage, environmental disasters, injuries, and loss of human life. Natural phenomena and the disasters they cause are aggravated by human activities and choices that influence their evolution, such as climate change, illegal mining, and reckless construction. As science and technology advance, the study, prediction, and treatment of these phenomena become more achievable, with the ultimate goal of taking measures before their appearance to minimize their effects [1].

Earthquakes are characterized by seismic waves or subsoil vibrations, caused by disturbances in the mechanical balance of the Earth’s rocky interior. The seismic activity of a geographic area depends on the frequency, magnitude, and type of earthquakes occurring in a certain period. Millions of people around the world are seriously affected by both earthquakes and the threat of earthquake occurrence [2]. Hence, many countries, in their effort to deal decisively with this problem, have instituted anti-earthquake regulations and laws for the construction of buildings, and established evacuation protocols in the event of an earthquake. Reporting and systematic monitoring of earthquakes can help scientists predict them more accurately and make better-informed decisions about them.

The study of earthquakes can be supported and assisted significantly by data warehousing. Data warehousing is the technology of creating, maintaining, and using data warehouses (DWs). This process gathers data from heterogeneous sources, cleans and transforms them into a standardized format, loads them into DWs or data marts, which constitute subdivisions of DWs, and then analyzes them for data mining and decision-making support.

Since data warehousing is used for storing and processing huge amounts of data, it constitutes an appropriate mechanism for assisting geoscientists in the study and treatment of earthquakes. Seismic data are recorded by seismographs, which are specialized devices for measuring vibrations of the ground due to seismic waves. Geoscientists also employ other means beyond seismic analysis for data collection, such as satellite imagery and geological inspections.

In this paper, a seismological DW is developed for the management and extraction of earthquake data through spatio-temporal queries, combined with thematic maps and data visualization tools. Statistical insights are derived, supporting both analysis and decision-making. Our approach adopts the snowflake schema while introducing a novel method to address many-to-many relationships between facts and dimensions without additional bridge tables. This work proposes a spatio-temporal DW for European seismic data that uniquely (i) integrates comprehensive spatial and temporal dimensions, (ii) introduces a novel method to handle many-to-many relationships without bridge tables, and (iii) supports scalable querying and visualization for decision-making. Furthermore, the proposed design is empirically evaluated against a conventional bridge-table schema, with results demonstrating performance gains for fact-centric queries and motivating a hybrid schema that balances efficiency across different queries.

The rest of the paper is organized as follows. Section 2 presents a survey of similar DW developments for natural disasters. Spatio-temporal DWs are presented in Section 3. Section 4 presents the experimental study in detail; specifically, the programming tools, the dataset, and the ETL process. Seismological data management, including queries and visualization of results, is presented in Section 5, followed by the evaluation of schema efficiency in Section 6. Section 7 provides a discussion of the findings, including their implications, limitations, and potential extensions. The paper is concluded in Section 8.

2  Literature Review

Natural disasters generate huge amounts of data with every incident worldwide, creating the need to store this information for analysis, data mining, decision making, and forecasting, among other purposes. Data warehousing has provided a solution to this necessity.

In [3], a DW for landslides is presented, using landslide displacement as the subject of analysis. The authors use a snowflake schema with a fact table for landslides, a geological dimension that refers to faults, a lithology dimension that contains sub-dimensions referring to rocks and formations, a slope dimension, and a time dimension. However, they do not present experimental results to evaluate their model. In [4], an extensive and in-depth literature survey on disaster detection, management, and prediction is presented, and a phased framework for a Hadoop-based disaster management database for India is proposed. The three most frequently studied disasters were earthquakes, floods, and storms, while prediction was the most common topic in the literature.

Recent studies have continued to demonstrate how data-driven and warehouse-based approaches can support natural-disaster analysis. An urban-flood data warehouse integrating heterogeneous hydrological and meteorological datasets has been developed to predict flood depth under different rainfall return periods, combining deep-learning models with OLAP-style data organization [5]. Complementary research has reviewed deep-learning techniques for flood mapping, highlighting advances in large-scale data management, model training pipelines, and the integration of multisource geospatial information [6]. For landslide hazards, a multi-source data-integration framework has been proposed that employs distributed stream processing and load-balanced ETL to populate a central analytic warehouse for real-time monitoring and decision support [7]. Together, these studies illustrate recent progress toward scalable, intelligent data-management infrastructures for disaster forecasting and risk-mitigation systems.

Even though earthquakes and DWs are two widely researched subjects, DW development for seismological data is not common in the literature.

In [8], a multi-dimensional DW is developed for earthquake prediction along the Eurasian-Australian continental plates. In this approach, further denormalization of the date and time dimensions is shown. The date dimension includes year, month, and day, while the time dimension includes hour, minute, and second, transforming the DW schema into a snowflake one. Apart from the date and time dimensions, the earthquake fact table is linked with the dimensions of coordinates, damaged radius, depth, felt radius, location, and Richter scale. This study revealed hidden patterns, allowing the extraction of meaningful insights that can assist earthquake prediction more effectively.

A DW for seismological data is presented in [9], with the architecture of a spatial DW focusing on data mining, efficient data search, and OLAP analysis. In this work, a star schema is proposed with the dimensions depth, geography, geology, intensity, magnitude, and time, and a fact table with measures on seismological data. The dimensions depth, intensity, and magnitude include ranges (from–to), which allow the categorization of earthquakes. The schema allows spatio-temporal analysis; typical queries include retrieving the epicenters in a certain region with magnitude above a certain value that occurred in the past four months, or finding the strongest earthquakes that occurred during the 20th century within a specific distance from cities with a certain number of inhabitants. This approach demonstrates that a star schema supports complex queries for analysis. A DW development for earthquake signal precursors in Italy can be found in [10], where a DW platform has been designed for earthquake precursor analysis by combining diverse data types such as INGV (Istituto Nazionale di Geofisica e Vulcanologia) data, MODIS (Moderate-Resolution Imaging Spectroradiometer) data, and VLF (Very Low Frequency) signal variations, along with geo-located data about weather and climatic parameters.

In [11], a DW is presented for the analysis of worldwide earthquakes from 1900 to 2011. The DW includes the dimensions Location, Time, and Seismic. The Seismic dimension includes depth and magnitude attributes. The fact, called Seismic Impact, includes the following measures: the number of deaths, the number of injured persons, and the monetary losses. The authors apply different data mining techniques to the DW and conclude that most of the earthquakes concentrate (i) between the latitudes 63.22° ± 30° and between the longitudes 179.46° and 179.8°, (ii) between 4.45 and 6.4 (considering the Richter scale), (iii) between the 15th and 18th day of each month and before 4:00 in the morning, and (iv) around a depth of 63.71 km. They also conclude that (i) the higher the magnitude of an earthquake, the greater the human and monetary losses, (ii) it is possible to determine the depth of an earthquake based on its magnitude, and (iii) the Pacific Coast is an area with a high occurrence of earthquakes.

A more complete earthquake disaster mitigation system is presented in [12], containing a spatial DW to allow spatial data analysis. The system’s architecture includes four levels of data flow, i.e., data application, data collection, data presentation, and data processing. The spatial DW lies in the data processing level and is filled with geographic data, metadata, remote sensing images, and seismic data, among others, and serves the data application level for data mining, decision support, disaster assessment, and earthquake prediction, among other topics.

Risk assessment in healthcare buildings [13] is another case of DW development, where a constellation schema is proposed, including fact tables for facility negative parameter evaluation and risk assessment, and dimensions for address, city, date, facility, and negative parameters, among others.

In Table 1, the main features of the approaches discussed above are summarized in chronological order.


While the above studies demonstrate the applicability of DWs to natural disasters, existing seismological DWs remain limited in several respects. Some focus primarily on prediction [8] or mining [9] without addressing multidimensional schema challenges. Others emphasize specific precursors or metadata integration [10], or cover historical global datasets [11] without providing mechanisms for scalable spatio-temporal querying and visualization. Moreover, prior approaches typically rely on additional bridge tables to manage many-to-many relationships, increasing schema complexity and reducing query efficiency.

More recent studies have shifted toward complementary directions: Yamagishi et al. [14] applied spatio-temporal clustering to detect earthquake patterns, Susanta et al. [15] developed geovisual analytics for interactive exploration, and Zhu et al. [16] together with Bloemheuvel et al. [17] proposed scalable machine learning workflows for seismic data processing. Decision-support tools are also advancing, as illustrated by CAESAR II for seismic risk assessment in Italy [18] and operational earthquake forecasting frameworks [19]. In parallel, D’Amico et al. [20] introduced a scoring and ranking methodology for probabilistic seismic hazard models using macroseismic intensity data, underscoring the growing importance of model evaluation.

However, these contributions either focus on domain-specific analytics, forecasting, or visualization, and do not resolve key challenges in multidimensional schema design. In particular, existing approaches seldom address the complexity of many-to-many relationships between facts and dimensions, nor do they offer a unified spatio-temporal DW architecture optimized for both querying efficiency and decision-support visualization. This gap motivates the present work, which introduces a spatio-temporal DW for seismic data that integrates spatial and temporal dimensions, manages many-to-many relationships without bridge tables, and supports scalable querying and visualization for applied use in seismology and disaster management.

Our work advances the field by delivering a formal and conceptual spatio-temporal DW tailored to seismic data, distinguished by three key contributions: comprehensive integration of spatial and temporal dimensions, a new strategy for handling many-to-many relationships without bridge tables, and the incorporation of interactive visualization tools that enhance decision support. This combination of features highlights the distinctive contribution of our work in comparison with the existing literature.

3  Spatio-Temporal DW

3.1 Basic Concepts

A spatio-temporal DW (STDW) is a DW where both space and time are supported, thus it describes events that take place in a specific location and time period [21]. Most often, a STDW refers to moving objects, i.e., objects that change their position over time, such as tourists, animals, and vehicles. These DWs are called trajectory DWs (TDWs) since the moving object traces a trajectory over time [22]. However, there are other cases, apart from trajectories, where data about physical phenomena need to be stored in a DW, which are also distinguished by their spatial and temporal characteristics. A representative example is a seismological DW, since earthquakes are characterized by the geometry of the place and the time when they occurred. Temporal and spatial dimensions are included in the STDW where date, time, and several geometric features describing objects or incidents need to be stored.

Next, the definitions for our STDW are presented.

Definition 1 (STDW star schema): The star schema of a STDW with F the fact table, Dim1, Dim2, …, Dimk dimension tables, SDim1, SDim2, …, SDimm spatial dimension tables, and TDim1, TDim2, …, TDimn temporal dimension tables, where k, m, n > 0, is defined as follows:

STDW_star_schema(F, Dim1, Dim2, …, Dimk, SDim1, SDim2, …, SDimm, TDim1, TDim2, …, TDimn).

By normalizing one or more dimension tables, the star schema is transformed into a snowflake schema where the levels of each dimension are organized into hierarchies. For example, in a geographical dimension the levels are City, Region, and Country, where City ⊏ Region and Region ⊏ Country. In general, each dimension, regardless of its type (spatial, temporal, etc.), has its own hierarchy with n hierarchy levels, where n ≥ 0.

The corresponding definition of the STDW snowflake schema of Definition 1 is given next.

Definition 2 (STDW snowflake schema): The snowflake schema of a STDW with F the fact table, Dim1, Dim2, …, Dimk dimension tables, SDim1, SDim2, …, SDimm spatial dimension tables, and TDim1, TDim2, …, TDimn temporal dimension tables, where k, m, n > 0, and DimHi (for i = 1 to k), SDimHj (for j = 1 to m), and TDimHp (for p = 1 to n) are the dimension hierarchies of Dimi, SDimj, and TDimp, respectively, is defined as follows:

STDW_snowflake_schema(F, (Dim1, DimH1), (Dim2, DimH2), …, (Dimk, DimHk), (SDim1, SDimH1), (SDim2, SDimH2), …, (SDimm, SDimHm), (TDim1, TDimH1), (TDim2, TDimH2), …, (TDimn, TDimHn)).

3.2 Seismological STDW Architecture

A seismological STDW is presented in Fig. 1. The fact table (Earthquake) depicts seismic activity. It is distinguished by its spatio-temporal characteristics, such as date and time (temporal dimensions), location (a spatial dimension), and epicenter (a spatial measure, i.e., a metric related to spatial properties that can be quantified and measured). The schema is of snowflake type, where dimensions are normalized (3NF, BCNF).


Figure 1: STDW seismological snowflake schema (array-based approach)

Applying Definition 2, the snowflake schema is:

STDW_seismological_snowflake_schema(Earthquake, (MagnitudeType, DimHMagnitudeType = ∅), (Subduction, SDimHSubduction = ∅), (Fault, SDimHFault = ∅), (Location, SDimHLocation = {City ⊏ Region ⊏ Country}), (Time, TDimHTime = ∅), (Date, TDimHDate = ∅))

The attributes of the dimension tables Fault and Subduction are explained in Tables 2 and 3, respectively. We assume that the attributes of all other tables are self-explanatory.


3.3 Handling Many-to-Many Relationship between a Dimension and a Fact Table

In DW dimensional analysis, the fact table is usually associated with dimension tables through one-to-many relationships, which keeps the model simple. However, there are cases where a many-to-many relationship must be supported. So far, several approaches have been proposed, with denormalization being the dominant one [23]. Although adding one or more bridge tables between the fact and the dimension table is a common solution to this problem [24], it increases the model complexity and data redundancy, and can slow down query performance.

In this work, the many-to-many relationship between the fact and the dimension table is addressed in a completely different way. Our proposal is to treat the multivalued attribute of the fact table corresponding to different data values of the dimension table (primary key values) as an array; e.g., a value in the FaultIDs attribute (of array datatype) of the fact table looks like this: {‘FRCF000’, ‘FRCF002’, ‘FRCF003’}, where ‘FRCF000’ is a value of the primary key attribute FaultId of the Fault dimension table. The approach is implemented with the assistance of functions and triggers, as explained next.

Initially, when an INSERT or UPDATE is performed on the Epicenter attribute, which is a geometry field (a point) of the fact table, a trigger BEFORE INSERT OR UPDATE executes the function fn_earthquake_fault_ids. This function selects all primary key values from the Fault dimension table of those polygons (FaultGeometry attribute) which contain the (new) epicenter point, using the st_contains() function (we consider full spatial containment). Then, the array_agg() function aggregates all these values into an array, which is returned and stored in the FaultIDs attribute (of array datatype) of the fact table. If there are no fault polygons containing the epicenter of the earthquake, the FaultIDs attribute remains empty (NULL).

The function fn_earthquake_fault_ids() finds all the faults associated with the earthquake’s epicenter:

CREATE OR REPLACE FUNCTION fn_earthquake_fault_ids()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$$
BEGIN
 SELECT array_agg(FaultID)
 INTO new.FaultIDs
 FROM Fault
 WHERE st_contains(FaultGeometry, new.Epicenter);
 RETURN new;
END;
$$;

Code 1: Creation of function fn_earthquake_fault_ids() in PLpgSQL

Subsequently, the trigger trg_earthquake_fault_ids executes the function fn_earthquake_fault_ids().

CREATE OR REPLACE TRIGGER trg_earthquake_fault_ids
 BEFORE INSERT OR UPDATE OF epicenter
 ON earthquakeFact
 FOR EACH ROW
EXECUTE FUNCTION fn_earthquake_fault_ids();

Code 2: Creation of trigger trg_earthquake_fault_ids in PLpgSQL
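To illustrate how the trigger behaves, the following hypothetical statements update the epicenter of an existing fact row and then inspect the recomputed array; the earthquake identifier, its value, and the coordinates are assumed for the example only.

-- Hypothetical example: changing an epicenter fires the BEFORE trigger,
-- which recomputes the FaultIDs array from the Fault dimension table.
UPDATE earthquakeFact
SET Epicenter = ST_SetSRID(ST_MakePoint(21.73, 38.25), 4326) -- a point near Patras
WHERE EarthquakeID = 'EQ0001'; -- assumed key value

SELECT EarthquakeID, FaultIDs -- e.g., {FRCF000,FRCF002}, or NULL if no fault contains the point
FROM earthquakeFact
WHERE EarthquakeID = 'EQ0001';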

4  Experimental Study

This section presents the development of our seismological STDW and all the steps followed, from data collection and STDW design to the functions and associations deemed necessary to ensure data integrity.

4.1 Programming Tools

For the STDW development as well as the processing and loading of the data, the following tools were used: Linux Ubuntu 23.04, PostgreSQL version 15.3, PgAdmin 4 version 7.3, PostGIS version 3.3, Python version 3.11.2, and QGIS version 3.28.7.

PostgreSQL with the PostGIS extension is open-source and one of the most popular DBMSs for hosting and analyzing geospatial data. First released in 2001, PostGIS has many years of continuous updating and support of geospatial data, making it suitable for our STDW. PgAdmin is the official PostgreSQL database editor. The Python programming language is one of the most popular high-level languages and provides many convenient functionalities for the ETL process. QGIS is the most popular open-source GIS software, with many features and capabilities for analyzing and processing geospatial data.

4.2 Description of the Dataset

The data used in this STDW are cities, countries, earthquakes from 2004 to 2023, regions, seismic faults, and subduction zones. They concern earthquakes in Europe, so a polygon with latitude coordinates from 30 to 80 degrees and longitude from −20 to 60 degrees was created, covering the entire European area.

The earthquake data were collected from the website of the European-Mediterranean Seismological Centre (https://emsc-csem.org). All earthquakes from 01 October 2004 to 31 May 2023, which fall inside the European polygon, were selected. The data were downloaded in csv files. The geometric data types used in the STDW are shown in Table 4.


All geometric data must be in the same reference system so that queries can be executed correctly. The reference system should cover all of Europe, so WGS 84 (EPSG:4326) was chosen. A point consists of two coordinates (longitude and latitude), and a polygon represents a closed area bounded by a ring of coordinates. A multi-polygon can contain more than one closed polygon and can also contain holes; it is suitable for countries that include islands or lakes as part of their territory.
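As a side note, not part of the original loading scripts: if a source layer arrived in a different European reference system, it could be reprojected to WGS 84 before loading. The sketch below assumes a hypothetical staging table and column; ST_Transform converts between reference systems, whereas ST_SetSRID only relabels the SRID.

-- Reproject a layer assumed to be in ETRS89-LAEA (EPSG:3035) to WGS 84 (EPSG:4326).
-- Table and column names are hypothetical.
SELECT ST_Transform(ST_SetSRID(geom, 3035), 4326) AS geom_wgs84
FROM staging_layer;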

Seismic fault data (Fig. 2) are part of the EFSM20 (European Fault-Source Model 2020) and were retrieved from the European Databases of Seismic Faults (https://seismofaults.eu), in GEOJSON format with descriptive information [25].


Figure 2: Seismic faults of the EFSM20 data (https://seismofaults.eu/efsm20data, accessed on 01 January 2025)

After selecting the data that are inside the European polygon, a total of 1248 seismic faults were imported into the STDW in the Fault table.

The subduction zone data are also part of the EFSM20 project and are in GEOJSON format as well. They refer to parts of the tectonic plates which sink under their adjacent ones. This phenomenon is observed in the Mediterranean Sea, specifically west of Gibraltar, east of Sicily, and in the zone that starts south of the Peloponnese, continues south of Crete, and reaches south of Cyprus.

The data of the administrative boundaries of the countries were retrieved from the website of the European Statistical Service (Eurostat, https://ec.europa.eu/eurostat/data/database, accessed on 01 January 2025) and refer to the data package ‘Countries 2020’. Using the software QGIS 3.28.7, only the countries that were intersected by the European polygon were selected, a total of 73 countries. The descriptive information used from the ‘Countries 2020’ contains only the country code and the country name.

The data of the regions of the European countries were taken from the Eurostat website and refer to the NUTS 2021 level 2 data package (https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/administrative-units-statistical-units/nuts, accessed on 01 January 2025). This classification (NUTS 2021) has been valid since 01 January 2021, and contains 92 regions at NUTS 1 level, 242 regions at NUTS 2 level, and 1166 regions at NUTS 3 level. Specifically: (i) NUTS level 1: main socio-economic areas, (ii) NUTS level 2: main regions, and (iii) NUTS level 3: smaller regions.

The data selected were from the NUTS 2 level, because it refers to basic regions. In contrast, NUTS 3 refers to smaller areas, with too high a level of detail for earthquake analysis at the European level. NUTS 1, on the other hand, refers to large geographical areas, thus losing a lot of information when earthquakes are analyzed at the country level. The descriptive information selected from this data set is the country code (which correlates with the country code of the ‘Countries 2020’ package), the name of the region, and the region code.

City data (Fig. 3) were retrieved from ESRI (World Cities, https://datacore-gn.unepgrid.ch/geonetwork, accessed on 01 January 2025) and include the locations of the world’s major cities, namely landmark cities, major population centers, national capitals, and provincial capitals. The layer is in GEOJSON format; to select only the cities of Europe, the software QGIS 3.28.7 was used, and only the cities that intersect with the polygons of the regions were kept, a total of 507 cities (Fig. 3).


Figure 3: ESRI Cities Data (https://hub.arcgis.com/datasets/schools-BE::world-cities/explore, accessed on 01 January 2025)

The descriptive information selected from these data was only the city name. Data about types of earthquake magnitudes were collected according to the formulas given by the United States Geological Survey (USGS, https://www.usgs.gov), and [26,27].

Table 5 presents the cardinalities of the tables.


4.3 Dimension Tables Population

The Date dimension should include dates that cover all the timestamps of the earthquake data, thus the table was populated using Python code with dates from 01 January 2000 to 31 December 2023. Similarly, the Time dimension was also populated using Python code. For the City, Country, Fault, Region, and Subduction dimensions, where GEOJSON files had to be imported into the respective tables, in addition to the Python code, the PostGIS functions ST_GeomFromGeoJSON() and ST_SetSRID() were used to convert GEOJSON coordinates to geometry and set the geometry to the EPSG 4326 reference system.

ST_SetSRID(ST_GeomFromGeoJSON(%s), 4326)

Code 3: Convert GEOJSON coordinates to geometry in Python
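Although the Date dimension was populated with a Python script, an equivalent result could be obtained directly in SQL with generate_series; the sketch below is only an alternative illustration, and the attribute names besides the YYYYMMDD key (FullDate, Year, Month, Day) are assumed.

-- Populate the Date dimension from 01 January 2000 to 31 December 2023.
-- Only the YYYYMMDD key format is documented; the other column names are assumed.
INSERT INTO Date (DateID, FullDate, Year, Month, Day)
SELECT to_char(d, 'YYYYMMDD'),
       d::date,
       EXTRACT(YEAR FROM d)::int,
       EXTRACT(MONTH FROM d)::int,
       EXTRACT(DAY FROM d)::int
FROM generate_series('2000-01-01'::date, '2023-12-31'::date, interval '1 day') AS d;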

4.4 ETL Process

The earthquake data were downloaded from https://emsc-csem.org in CSV files of 5000 records each, and their structure is shown in Table 6.


The Region Name and Last Update columns were removed from the above data. The Region Name is redundant, since it can be found more precisely in the City-Region-Country hierarchy, and many region names are simply descriptive (e.g., BULGARIA-GREECE-TURKEY BORDER RG). Last Update is the most recent update for the earthquake in the EMSC system, information that is not useful for the current earthquake data analysis. The Date column was transformed with the replace() function from the YYYY-MM-DD format to the YYYYMMDD format to match the primary key of the Date dimension table.

date_id = Date.replace("-", "")

Code 4: Convert date format in Python

Similarly, the Time column was transformed from HH:MM:SS to HHMMSS to match the primary key of the Time dimension table.

time_id = Time.replace(":", "")

Code 5: Convert time format in Python

The columns Latitude and Longitude refer to the latitude and longitude, respectively, of each point and were used to generate the geometry of the epicenter of the earthquake with the functions ST_SetSRID(), which sets the reference system (EPSG:4326) of the geometry, and ST_MakePoint(), which creates the point geometry in the Earthquake fact table.

ST_SetSRID(ST_MakePoint(Longitude, Latitude), 4326)

Code 6: Create geometry from coordinates in PLpgSQL

It was also necessary to clean data on the Magnitude Type column so that all of its values match the primary keys of the MagnitudeType dimension table. Specifically, data cleaning was performed with the following piece of code, which defines missing values as None and replaces the values ‘MD’, ‘M’, ‘M’, ‘MC’, ‘ml’ with the values ‘Md’, ‘M’, ‘M’, ‘Mc’ and ‘ML’, respectively.

if pd.isna(row[6]):
    magnitude_type = None
elif row[6] == "MD":
    magnitude_type = row[6].replace("MD", "Md")
elif row[6] == "M":
    magnitude_type = row[6].replace("M", "M")
elif row[6] == "M":
    magnitude_type = row[6].replace("M", "M")
elif row[6] == "MC":
    magnitude_type = row[6].replace("MC", "Mc")
elif row[6] == "ml":
    magnitude_type = row[6].replace("ml", "ML")
else:
    magnitude_type = row[6]

Code 7: Cleaning of magnitude_type values in Python

A number of functions and triggers were also developed to automate some operations during data loading, for computing data that do not exist in the sources and need to be calculated. This procedure ensures data quality by avoiding cases where the user enters wrong data. These functions are fn_nearest_city(), which finds the nearest city to an earthquake, calculates its distance from the epicenter of the earthquake, and establishes the relationship between earthquakes and cities; fn_cities_region_id(), which finds the correct region_id of each city and establishes the relationship between cities and regions; fn_earthquakes_subduction_id(), which finds the subduction ID of each earthquake and establishes the relationship between earthquakes and subductions; and finally, fn_earthquake_fault_ids(), which is covered in Section 3.3. These functions were defined using the PostGIS spatial functions ST_Within, ST_Expand, ST_DistanceSphere, and ST_Contains. The PLpgSQL code is omitted here for reasons of space.
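For completeness, a minimal sketch of how fn_nearest_city() might be written is given below. It is not the authors’ omitted implementation; the CityID, CityDistance, and CityGeometry attribute names are assumed, and the bounding-box optimization via ST_Expand mentioned above is left out for brevity.

CREATE OR REPLACE FUNCTION fn_nearest_city()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$$
BEGIN
 -- Pick the city whose geometry is closest to the new epicenter and store
 -- its key and its spherical distance (in meters) in the fact row.
 SELECT CityID, ST_DistanceSphere(CityGeometry, new.Epicenter)
 INTO new.CityID, new.CityDistance
 FROM City
 ORDER BY ST_DistanceSphere(CityGeometry, new.Epicenter)
 LIMIT 1;
 RETURN new;
END;
$$;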

5  Seismological Data Management

5.1 SQL Queries

The STDW was designed to support SQL queries across space and time, ranging from basic to more complex ones. To show the capabilities of our STDW, the following queries were selected:

Basic aggregate queries

1.   Percentage of earthquakes by country.

2.   Percentage of earthquakes on faults by country.

3.   Twenty most earthquake-prone cities.

Spatial queries

4.   Earthquakes located inside a fault.

5.   Earthquakes located within a subduction zone.

6.   Percentage of earthquakes within a fault and percentage of earthquakes within a subduction zone.

Temporal queries

7.   Total and largest earthquakes per year.

8.   Total and largest earthquakes per month.

9.   Total and largest earthquakes per season.

Spatio-temporal queries

10.   Ten largest earthquakes near the Greek city of Patras in the last decade.

11.   The epicenters of the ten largest earthquakes in 2023.

Note that for Queries 8 and 9, the earthquakes of the years 2004 and 2023 were not considered because their data do not cover the entire year. Results of Queries 1–5 and 10–11 will be presented on thematic maps, Query 6 in a table, and Queries 7–9 in charts. An illustrative SQL formulation of one of these queries is sketched below.
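As an illustration, one possible formulation of Query 11 is the following; the attribute names (Magnitude, Epicenter, DateID, Year) are assumed for the example and may differ from the implemented schema.

-- Ten largest earthquakes of 2023, with their epicenters exported as GeoJSON.
SELECT e.Magnitude, ST_AsGeoJSON(e.Epicenter) AS epicenter
FROM earthquakeFact e
JOIN Date d ON d.DateID = e.DateID
WHERE d.Year = 2023
ORDER BY e.Magnitude DESC
LIMIT 10;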

5.2 Data Visualization: Results of the Queries

The results of the SQL queries are presented in this section. The results of Queries 1–5 and 10–11 were exported as GeoJSON from the STDW and then converted into thematic maps using the QGIS software, as shown below.

Basic aggregate queries

The results of Query 1, see Fig. 4, show the percentage of earthquakes that occurred in each country, where the Google Satellite image map was used as a background map.


Figure 4: Results of Query 1—Percentage of earthquakes by country

The results of Query 2, see Fig. 5, show the percentage of earthquakes on faults by country, where the Google Satellite image map was used as a background map.


Figure 5: Results of Query 2—Percentage of earthquakes on faults by country

The results of Query 3, see Fig. 6, show the 20 most earthquake-prone cities in Europe, where the Google Satellite image map was used as a background map.


Figure 6: Results of Query 3—20 most earthquake-prone cities in Europe

Spatial queries

The results of Query 4, see Fig. 7, show the earthquakes in orange, which are located inside a fault. The Open Street Maps map was used as a background map.


Figure 7: Results of Query 4—Earthquakes on faults

The results of Query 5, see Fig. 8, show the earthquakes which are located within a subduction zone. The Google Satellite image map was used as a background map.


Figure 8: Results of Query 5—Earthquakes within subduction zones

The results of Query 6, see Table 7, show that 10.75% of all earthquakes are located inside a subduction zone, while 19.85% of earthquakes are located inside a seismic fault.


Temporal queries

The results of Query 7, see Fig. 9, show the total of earthquakes and the largest earthquake per year.


Figure 9: Result of Query 7—Earthquakes per year

The results of Query 8, see Fig. 10, show the total of earthquakes and the largest earthquake per month.


Figure 10: Result of Query 8—Earthquakes per month

The results of Query 9, see Fig. 11, show the total of earthquakes and the largest earthquake per season.


Figure 11: Results of Query 9—Earthquakes per season

Spatio-temporal queries

The results of Query 10, see Fig. 12, show, in green, the largest earthquakes that occurred near the city of Patras in Greece. The Open Street Maps map was used as a background map.


Figure 12: Results of Query 10—The 10 largest earthquakes of the last decade near the city of Patras in Greece

The results of Query 11, see Fig. 13, show the epicenters of the 10 largest earthquakes that have occurred in Europe in 2023, where the Google Satellite image map is used as a background map.


Figure 13: Results of Query 11—Epicenters of the ten largest earthquakes in 2023

6  Evaluation of Schema Efficiency

This section presents a comparative performance evaluation of two schema designs for managing many-to-many relationships between the Earthquake fact table and the Fault dimension. The first design adopts the denormalized array-based representation of foreign keys, as described in Section 3.3 and presented in Fig. 1, whereas the second employs a conventional intermediate bridge table (Fig. 14). The bridge table contains only two attributes, EarthquakeID and FaultID, where each row represents a unique earthquake-fault association. Consequently, if an earthquake is linked to three distinct faults, the bridge table will contain three corresponding rows.


Figure 14: STDW seismological snowflake schema (bridge-table approach)
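A possible definition of such a bridge table is sketched below; the table name earthquake_fault, the varchar key types, and the assumption that EarthquakeID is the primary key of the fact table are hypothetical.

-- Bridge table linking each earthquake to the faults whose polygons contain
-- its epicenter; one row per earthquake-fault association.
CREATE TABLE earthquake_fault (
    EarthquakeID varchar NOT NULL REFERENCES earthquakeFact (EarthquakeID),
    FaultID      varchar NOT NULL REFERENCES Fault (FaultID),
    PRIMARY KEY (EarthquakeID, FaultID)
);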

To assess relative efficiency, the two fault-related queries presented in Section 5 were executed on both schemas, namely Query 2, which examines the percentage of earthquakes on faults by country, and Query 4, which lists the earthquakes located inside a fault. Each query was executed fifty times on both schemas in PostgreSQL 15.3, and execution times were measured using the EXPLAIN ANALYZE command. Table 8 summarizes the average runtimes and the main operations for each design.


The results indicate that the array-based schema significantly outperforms the bridge-table design for fact-centric queries such as Query 4, where the presence of related dimension records can be verified directly within the fact table (an approximately twofold improvement). In contrast, for dimension-centric queries such as Query 2, the bridge-table schema delivers substantially better performance (nearly four times faster), as the array-based approach requires expansion and evaluation of multivalued attributes during joins. The comparative analysis underscores that no single schema design dominates across all queries. The array-based representation minimizes join operations and improves performance for fact-oriented aggregations, while the bridge-table schema remains more efficient for dimension-driven queries requiring extensive joins.
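To make the contrast concrete, the two designs answer a fact-centric question such as Query 4 in structurally different ways. The sketch below uses the table and attribute names assumed earlier and is not the exact benchmark query.

-- Array-based schema: the fact table alone shows whether related faults exist.
SELECT *
FROM earthquakeFact
WHERE FaultIDs IS NOT NULL AND cardinality(FaultIDs) > 0;

-- Bridge-table schema: a semi-join against the bridge table is required.
SELECT e.*
FROM earthquakeFact e
WHERE EXISTS (SELECT 1
              FROM earthquake_fault b
              WHERE b.EarthquakeID = e.EarthquakeID);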

To reconcile these trade-offs, a hybrid schema was adopted that leverages the advantages of both designs. In the hybrid approach, the fact table retains an array attribute (FaultIDs) for efficient fact-centric operations, while a conventional bridge table simultaneously preserves the normalized many-to-many relationship (Fig. 15). This dual representation enables the DW to optimize performance across diverse query patterns, effectively balancing computational efficiency with schema flexibility. The main limitation of this hybrid approach is data redundancy, as both the array attribute and the bridge table store equivalent data. This redundancy introduces a modest storage overhead; however, given the declining cost of storage and the increasing importance of computational efficiency in modern, cloud-based DW deployments, the trade-off is deemed acceptable. Future research may investigate adaptive strategies for selectively materializing one representation based on workload characteristics, thereby reducing redundancy while retaining performance benefits.


Figure 15: STDW seismological snowflake schema (hybrid approach)
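The paper does not specify how the two representations of the hybrid schema are kept consistent; one conceivable mechanism, sketched below under the same naming assumptions, mirrors the recomputed array into the bridge table whenever the epicenter changes.

CREATE OR REPLACE FUNCTION fn_sync_bridge_from_array()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$$
BEGIN
 -- Replace the earthquake's bridge rows with one row per fault id in the array.
 DELETE FROM earthquake_fault WHERE EarthquakeID = new.EarthquakeID;
 INSERT INTO earthquake_fault (EarthquakeID, FaultID)
 SELECT new.EarthquakeID, f
 FROM unnest(coalesce(new.FaultIDs, '{}')) AS f;
 RETURN new;
END;
$$;

CREATE OR REPLACE TRIGGER trg_sync_bridge_from_array
 AFTER INSERT OR UPDATE OF Epicenter
 ON earthquakeFact
 FOR EACH ROW
EXECUTE FUNCTION fn_sync_bridge_from_array();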

7  Discussion

Spatio-temporal features provide semantic enrichment to the descriptive attributes of objects and phenomena, reflecting not only their static characteristics but also their evolution across space and time. Within data warehousing, these features support the representation of moving object trajectories and spatio-temporal events that unfold within geographic regions over time, such as cyclones, earthquakes, and floods.

The developed STDW responds to seismological data analysis and management issues while guaranteeing the integrity of complex data in different use cases. The multidimensional approach to geospatial data allows the analysis of data from different perspectives and facilitates their visualization in thematic maps. Positioned as a proof-of-concept, the study demonstrates how such a framework can be implemented, queried, and visualized using real-world datasets, establishing its feasibility for large-scale seismological data management. The development of the STDW in PostgreSQL provides the ability to execute geospatial queries and allows scalability to include other continents in the same schema.

In Section 5, several different types of queries were executed, and their results were shown visually in charts, maps, or tables. The queries include simple aggregate functions, spatial and temporal data analysis, as well as spatio-temporal analysis of data.

The execution of Queries 1 and 2 reveals meaningful spatial patterns in seismic activity. In particular, a clear correlation emerges between their outputs for Greece, Italy, and Turkey, regions that consistently exhibit higher frequencies of earthquakes associated with fault structures. Specifically, higher rates of earthquakes are observed in countries that have seismic faults; e.g., the rates of seismic faults per country are: Turkey 32%, Greece 26%, Italy 19%, Albania 9%, Spain 5%, with similar rates of earthquakes in these countries: Turkey 46%, Greece 21%, Italy 10%, Albania 3%, Spain 6%. This alignment underscores the reliability of the DW in capturing established seismological characteristics of tectonically active zones. By enabling comparisons across multiple spatial dimensions, the system demonstrates its capacity to support both exploratory analysis and validation of known seismic trends, thereby reinforcing its potential as a decision-support tool for regional risk assessment.

The results of Query 6 show that the probability of an earthquake occurring in an area with a seismic fault is almost twice the corresponding probability in an area with a subduction zone.

From the results of Queries 7 and 8, it is concluded that the percentages of earthquakes per year and month are not related to the corresponding largest earthquake. In fact, the magnitude of the largest earthquake per year and month does not vary greatly over the years. Note that time and season do not affect the genesis of earthquakes, as shown by the results of Query 9. Finally, the results of Query 11 show that the epicenters of the ten largest earthquakes that occurred in Europe in 2023 are located in Turkey. It is worth emphasizing that the Mediterranean countries are the most seismic in Europe, having the highest percentage of earthquakes, seismic faults, and subduction zones, which was confirmed by the data.

Combining these seismological data with other related data, such as the number of buildings that were damaged or destroyed, and especially the number of people who were injured or lost their lives, would help experts and governments to take decisions and measures to protect the population.

Beyond demonstrating analytical capabilities, the framework also introduces methodological innovations in schema design. A key innovation of this work lies in the array-based strategy for managing many-to-many relationships between facts and dimensions, which reduces schema complexity and avoids the join overhead typical of bridge tables. The comparative performance study reinforces this contribution. Results showed that the array-based schema improves runtime efficiency for fact-centric queries by minimizing joins, whereas the conventional bridge-table schema remains advantageous for dimension-centric queries. To reconcile these contrasting strengths, a hybrid schema was introduced that retains both representations. This design balances efficiency, ensuring that the STDW can support diverse analytical tasks without compromising performance.

Taken together, these design choices underscore that the framework’s significance resides in providing the technical foundation upon which new seismological insights can be systematically derived. By enabling spatial, temporal, and spatio-temporal queries and visualizing outputs through thematic maps and analytical charts, the system demonstrates its potential as a decision-support tool. Moreover, the DW could be extended with additional datasets, such as structural vulnerability indicators or socioeconomic exposure metrics, facilitating applications in seismic engineering, disaster risk reduction, and emergency management.

Several limitations should be acknowledged. The dual representation in the hybrid schema introduces redundancy and modest storage overhead, though this trade-off is increasingly acceptable in modern cloud-based environments. Querying arrays can also be less intuitive than conventional joins, and ensuring referential integrity requires careful implementation of triggers and functions. In addition, the approach currently relies on PostgreSQL-specific features, which may limit portability to other platforms. Future research should address these issues by exploring strategies to improve portability, optimize performance for very large arrays, and reduce maintenance complexity. Hybrid mechanisms that combine the simplicity of arrays with stronger integrity constraints and cross-platform compatibility may offer a balanced path forward.

Finally, the framework can complement existing decision-support systems, such as CAESAR II [18] and operational forecasting platforms [19], by providing a scalable infrastructure capable of integrating heterogeneous seismic and geospatial data. Overall, the findings demonstrate both the feasibility and efficiency of the proposed STDW, positioning it as a foundational component for big data-driven intelligent decision systems in disaster resilience and risk mitigation.

8  Conclusion

The contribution of DWs to the science of seismology is clearly demonstrated by this work, since the ability to query seismological data for analysis can contribute to improving the living and working conditions of millions of people worldwide who live daily with the fear of seismic vibrations and the associated risk of losing their lives and property.

A STDW framework for seismic data was introduced, addressing limitations in existing earthquake-oriented DWs. The contribution of this work can be summarized as follows: it (i) integrates comprehensive spatial and temporal dimensions, (ii) introduces and evaluates an array-based method for handling many-to-many relationships without bridge tables, demonstrating clear efficiency gains for fact-centric queries, and (iii) supports advanced querying and visualization to inform data-driven decision-making. To further optimize efficiency, a hybrid schema was proposed that combines array-based and bridge-table designs, offering balanced performance across fact- and dimension-oriented queries. These contributions highlight the novelty and practical value of the proposed STDW as a foundation for big data-driven intelligent decision systems in disaster resilience and emergency management.

Beyond its technical contribution, the proposed STDW also has potential value for applied domains. In seismic engineering, integrating structural vulnerability data with the DW could support the assessment of building resilience against seismic hazards. For disaster risk reduction, the framework could enable multidimensional analyses that combine seismic events with population density, land use, or critical infrastructure exposure, thereby informing urban planning and mitigation strategies. In emergency management, the ability to execute spatio-temporal queries in near real-time could strengthen decision-making for evacuation planning, resource deployment, and rapid response. Although the present study is positioned as a proof-of-concept, these examples highlight the practical pathways through which the proposed framework could evolve into an applied decision-support system.

Future work could expand the scope of the proposed STDW in several directions. Beyond extending the system to global seismic datasets, further research could integrate multi-hazard information such as volcanic activity or tsunamis, as well as socioeconomic indicators of disaster impact. Incorporating real-time data streams and advanced analytics, including machine learning models for prediction, would further enhance the DW’s decision-support capabilities. Finally, exploring cloud-based implementations could improve scalability and facilitate collaborative use across scientific and governmental organizations.

Acknowledgement: Not applicable.

Funding Statement: The authors received no specific funding for this study.

Author Contributions: The authors confirm contribution to the paper as follows: conceptualization, Georgia Garani and George Pramantiotis; methodology, Georgia Garani and George Pramantiotis; software, George Pramantiotis; validation, Georgia Garani, George Pramantiotis and Francisco Javier Moreno Arboleda; formal analysis, Georgia Garani, George Pramantiotis and Francisco Javier Moreno Arboleda; investigation, George Pramantiotis; resources, George Pramantiotis; data curation, George Pramantiotis; writing—original draft preparation, Georgia Garani; writing—review and editing, George Pramantiotis and Francisco Javier Moreno Arboleda; visualization, George Pramantiotis; supervision, Georgia Garani and Francisco Javier Moreno Arboleda; project administration, Georgia Garani. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data that support the findings of this study are openly available in European-Mediterranean Seismological Centre at https://emsc-csem.org, European Databases of Seismic Faults at https://seismofaults.eu, European Statistical Service at https://ec.europa.eu/eurostat/data/database (accessed on 01 January 2025), Eurostat NUTS 2021 Level 2 at https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/administrative-units-statistical-units/nuts (accessed on 01 January 2025), ESRI World Cities at https://datacore-gn.unepgrid.ch/geonetwork (accessed on 01 January 2025), United States Geological Survey at https://www.usgs.gov.

Ethics Approval: Not applicable.

Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.

References

1. Hu Q, Xiong F, Zhang B, Su P, Lu Y. Developing a novel hybrid model for seismic loss prediction of regional-scale buildings. Bull Earthq Eng. 2022;20(11):5849–75. doi:10.1007/s10518-022-01415-x. [Google Scholar] [CrossRef]

2. Rossi L, Holtschoppen B, Butenweg C. Official data on the economic consequences of the 2012 Emilia-Romagna earthquake: a first analysis of database SFINGE. Bull Earthq Eng. 2019;17(9):4855–84. doi:10.1007/s10518-019-00655-8. [Google Scholar] [CrossRef]

3. Ji M, Jin F, Zhao X, Ai B, Li T. Mine geological hazard multi-dimensional spatial data warehouse construction research. In: Proceedings of the 2010 18th International Conference on Geoinformatics; 2010 Jun 18–20; Beijing, China. doi:10.1109/GEOINFORMATICS.2010.5567673. [Google Scholar] [CrossRef]

4. Goswami S, Chakraborty S, Ghosh S, Chakrabarti A, Chakraborty B. A review on application of data mining techniques to combat natural disasters. Ain Shams Eng J. 2018;9(3):365–78. doi:10.1016/j.asej.2016.01.012. [Google Scholar] [CrossRef]

5. Wu Z, Zhou Y, Wang H, Jiang Z. Depth prediction of urban flood under different rainfall return periods based on deep learning and data warehouse. Sci Total Environ. 2020;716(2):137077. doi:10.1016/j.scitotenv.2020.137077. [Google Scholar] [PubMed] [CrossRef]

6. Bentivoglio R, Isufi E, Jonkman SN, Taormina R. Deep learning methods for flood mapping: a review of existing applications and future research directions. Hydrol Earth Syst Sci. 2022;26(16):4345–78. doi:10.5194/hess-26-4345-2022. [Google Scholar] [CrossRef]

7. Wang Z, Liang H, Yang H, Li M, Cai Y. Integration of multi-source landslide disaster data based on flink framework and APSO load balancing task scheduling. ISPRS Int J Geo Inf. 2025;14(1):12. doi:10.3390/ijgi14010012. [Google Scholar] [CrossRef]

8. Nimmagadda SL, Dreher H. Ontology based data warehouse modeling and mining of earthquake data: prediction analysis along Eurasian-Australian continental plates. In: Proceedings of the 2007 5th IEEE International Conference on Industrial Informatics; 2007 Jun 23–27; Vienna, Austria. doi:10.1109/INDIN.2007.4384825. [Google Scholar] [CrossRef]

9. Marketos G, Theodoridis Y, Kalogeras IS. Seismological data warehousing and mining: a survey. Int J Data Warehous Min. 2008;4(1):1–16. doi:10.4018/jdwm.2008010101. [Google Scholar] [CrossRef]

10. Biagi PF, Guaragnella C, Guerriero A, Pasquale CC, Ragni F. A data warehouse for earthquakes signal precursors analysis. In: Proceedings of the 2009 IEEE Workshop on Environmental, Energy, and Structural Monitoring Systems; 2009 Sep 25; Crema, Italy. doi:10.1109/EESMS.2009.5341316. [Google Scholar] [CrossRef]

11. Somodevilla MJ, Priego AB, Castillo E, Pineda IH, Vilariño D, Nava A. Decision support system for seismic risks. J Comput Sci Technol. 2012;12(2):71–7. [Google Scholar]

12. Chi HY, Liu X, Xu XD. A framework for earthquake disaster mitigation system. In: Proceedings of International Conference on Information Systems for Crisis Response and Management (ISCRAM); 2011 Nov 25–27; Harbin, China. doi:10.1109/ISCRAM.2011.6184045. [Google Scholar] [CrossRef]

13. Özcan M, Peker S. Designing a data warehouse for earthquake risk assessment of buildings: a case study for healthcare facilities. Sakarya Univ J Comput Inf Sci. 2021;4(1):156–65. doi:10.35377/saucis.04.01.872729. [Google Scholar] [CrossRef]

14. Yamagishi Y, Saito K, Hirahara K, Ueda N. Spatio-temporal clustering of earthquakes based on distribution of magnitudes. Appl Netw Sci. 2021;6(1):71. doi:10.1007/s41109-021-00413-3. [Google Scholar] [CrossRef]

15. Susanta FF, Pratama C, Aditya T, Khomaini AF, Abdillah HWK. Geovisual analytics of spatio-temporal earthquake data in Indonesia. J Geospat Inf Sci Eng. 2019;2(2):185–94. doi:10.22146/jgise.51131. [Google Scholar] [CrossRef]

16. Zhu W, Hou AB, Yang R, Datta A, Mousavi SM, Ellsworth WL, et al. QuakeFlow: a scalable machine-learning-based earthquake monitoring workflow with cloud computing. arXiv:2208.14564. 2022. [Google Scholar]

17. Bloemheuvel S, van den Hoogen J, Jozinović D, Michelini A, Atzmueller M. Graph neural networks for multivariate time series regression with application to seismic data. arXiv:2201.00818. 2022. [Google Scholar]

18. Zuccaro G, Perelli FL, De Gregorio D, Masi D. Caesar II: an Italian decision support tool for seismic risk. In: Proceedings of the COMPDYN 2021, 8th International Conference on Computational Methods in Structural Dynamics and Earthquake Engineering; 2021 Jun 28–30; Athens, Greece. p. 2659–77. doi:10.7712/120121.8665.19152. [Google Scholar] [CrossRef]

19. Huang C, Bolin H, Refsum V, Meslem A. Using operational earthquake forecasting tool for decision making: a synthetic case study. In: Proceedings of the EGU General Assembly 2022; 2022 May 23–27; Vienna, Austria. doi:10.5194/egusphere-egu22-3194. [Google Scholar] [CrossRef]

20. D’Amico V, Visini F, Rovida A, Marzocchi W, Meletti C. Scoring and ranking probabilistic seismic hazard models: an application based on macroseismic intensity data. Nat Hazards Earth Syst Sci. 2024;24(4):1401–13. doi:10.5194/nhess-24-1401-2024. [Google Scholar] [CrossRef]

21. Garani G, Cassavia N, Savvas IK. An application of an intelligent data warehouse for modelling spatiotemporal objects. Int J Big Data Intell Appl. 2020;1(1):36–57. doi:10.4018/ijbdia.2020010103. [Google Scholar] [CrossRef]

22. Garani G, Tolis D, Savvas IK. A trajectory data warehouse solution for workforce management decision-making. Data Sci Manag. 2023;6(2):88–97. doi:10.1016/j.dsm.2023.03.002. [Google Scholar] [CrossRef]

23. Kimball R, Ross M. The data warehouse toolkit: the definitive guide to dimensional modeling. 3rd ed. Hoboken, NJ, USA: John Wiley & Sons; 2013. [Google Scholar]

24. Song IY, Rowen W, Medsker C, Ewen E. An analysis of many-to-many relationships between fact and dimension tables in dimensional modeling. In: Proceedings of the International Workshop on Design and Management of Data Warehouses (DMDW 2001); 2001 Jun 4; Interlaken, Switzerland. p. 1–13. [Google Scholar]

25. Basili R, Danciu L, Beauval C, Sesetyan K, Vilanova S, Adamia S, et al. European fault-source model 2020 (EFSM20): online data on fault geometry and activity parameters [Dataset]. Roma, Italy: Istituto Nazionale di Geofisica e Vulcanologia; 2022 [cited 2025 Jan 1]. Available from: https://doi.org/10.13127/efsm20. [Google Scholar] [CrossRef]

26. Bormann P. Earthquake, magnitude. In: Encyclopedia of solid earth geophysics. Dordrecht, The Netherlands: Springer Netherlands; 2011. p. 207–18. doi:10.1007/978-90-481-8702-7_3. [Google Scholar] [CrossRef]

27. Bormann P, Wendt S, DiGiacomo D. Seismic sources and source parameters. In: New manual of seismological observatory practice 2 (NMSOP2). Potsdam, Germany: Deutsches GeoForschungsZentrum GFZ; 2013. p. 1–259. doi:10.2312/GFZ.NMSOP-2_ch3. [Google Scholar] [CrossRef]




Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.