Kernel Density Visualization (KDV) has been extensively used for many geospatial analysis tasks (heatmap generation). Representative examples include traffic accident hotspot detection, crime hotspot detection, and disease outbreak detection. Although many scientific software packages (e.g., Scipy, Statsmodels, and Scikit-learn), geographical software packages (e.g., QGIS and ArcGIS), and visualization software packages (e.g., Deck.gl and KDV-Explorer) can also support KDV, none of these tools, to the best of our knowledge, scales to high resolutions (e.g., 1280 x 960) and large-scale datasets (e.g., one million data points). This huge computational cost limits the applicability of off-the-shelf software tools for advanced (or more complex) geospatial analytics, e.g., bandwidth-tuning analysis and spatiotemporal analysis, which involve computing multiple KDVs in one batch.
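To make the cost concrete, the sketch below shows the naive approach that such tools essentially follow: evaluating a kernel for every (pixel, data point) pair, so the work grows with the product of the resolution and the dataset size. This is an illustration only (the kernel and details are assumptions), not LIBKDV's algorithm:

import numpy as np

def naive_kdv(points, x_grid, y_grid, bandwidth):
    """Naive KDV: O(rows x cols x n) kernel evaluations (illustration only)."""
    heat = np.zeros((len(y_grid), len(x_grid)))
    for i, y in enumerate(y_grid):
        for j, x in enumerate(x_grid):
            d2 = (points[:, 0] - x) ** 2 + (points[:, 1] - y) ** 2
            # Epanechnikov-style kernel; the exact kernel used by LIBKDV may differ.
            w = np.maximum(0.0, 1.0 - d2 / bandwidth ** 2)
            heat[i, j] = w.sum()
    return heat

# 1280 x 960 pixels and one million points => roughly 1.2 * 10^12 kernel evaluations.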
Macau COVID-19 HotSpot Map: https://degroup.cis.um.edu.mo/covid-19/#
Hong Kong COVID-19 HotSpot Map: https://covid19.comp.hkbu.edu.hk/
If you use this library or our working systems in your research, please cite articles [4] (bibtex) and [7] (bibtex).
To overcome the inefficiency of KDV, we developed the first versatile programming library (LIBKDV) [4] by combining our recent studies (SLAM [3] and SWS [5]), which reduces the worst-case time complexity of supporting different types of KDV-based geospatial analytics, including:
(1) Bandwidth-tuning analysis (cf. Figure 1): Domain experts can first set multiple bandwidths in a batch, and then generate multiple KDVs with respect to these bandwidths.
(2) Spatiotemporal analysis (cf. Figure 2): Domain experts can leverage a more complex spatiotemporal kernel density function to generate time-dependent hotspot maps that correspond to different timestamps.
To further enhance the efficiency for these two tasks, we fully parallelize our methods, SLAM and SWS.
Installation (for Win64, Linux, and MacOS):
conda create -n libkdv python=3.9
conda install -c conda-forge geopandas
conda install -c conda-forge keplergl==0.3.2
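After installation, a quick sanity check (assuming LIBKDV itself has been installed into the environment according to the project's installation instructions) is to activate the environment and confirm that the packages import cleanly:

conda activate libkdv
python -c "import libkdv, geopandas, keplergl"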
import libkdv
import pandas as pd
Example for computing a single KDV:
NewYork = pd.read_csv('./Datasets/New_York.csv')
traffic_kdv = libkdv.kdv(NewYork,KDV_type="KDV",bandwidth=1000)
traffic_kdv.compute()
print(traffic_kdv.result)
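The result attribute stores the computed heatmap as a table of grid points with density values, so it can be exported or post-processed with ordinary pandas calls. A minimal sketch, assuming result is a pandas DataFrame (the exact column names may vary by version):

traffic_kdv.result.to_csv('./new_york_kdv.csv', index=False)  # export the heatmap grid to CSV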
Example for supporting the bandwidth-tuning analysis task:
bandwidths_traffic_kdv = [500,700,900,1100,1300,1500,1700,1900,2100,2300] #Set the bandwidths
result_traffic_kdv = [] #Stores the final results
traffic_kdv = libkdv.kdv(NewYork,KDV_type="KDV")
for band in bandwidths_traffic_kdv:
    traffic_kdv.bandwidth = band  # update the bandwidth before recomputing
    result_traffic_kdv.append(traffic_kdv.compute())
Example for supporting the spatiotemporal analysis task:
NewYork = pd.read_csv('./Datasets/New_York.csv')
traffic_kdv = kdv(NewYork,KDV_type="STKDV",bandwidth=1000,bandwidth_t=10)
traffic_kdv.compute()
print(traffic_kdv.result)
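Each STKDV result covers multiple time slices, which can be inspected separately. A minimal sketch, assuming the result is a pandas DataFrame with a time column named t (this column name is an assumption; check the actual output columns):

first_t = traffic_kdv.result['t'].min()                               # assumed time column name
print(traffic_kdv.result[traffic_kdv.result['t'] == first_t].head())  # hotspots at the earliest time slice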
Description of all the parameters
libkdv_obj = libkdv.kdv(dataset, KDV_type='STKDV',
GPS=True,
bandwidth=1000, row_pixels=800, col_pixels=640,
bandwidth_t=6, t_pixels=32,
num_threads=8)
Required arguments
dataset: Pandas DataFrame, the input dataset (for preparation, please refer to the steps in data_processing.ipynb).
KDV_type: String, "KDV" - single KDV or "STKDV" - Spatio-Temporal KDV.
Optional arguments
GPS: Boolean, True - use the geographic coordinate system, or False - use simple (X, Y) coordinates (see evaluation.ipynb).
bandwidth: Float, the spatial bandwidth (in meters), default is 1000.
row_pixels: Integer, the number of grid cells along the x-axis, default is 800.
col_pixels: Integer, the number of grid cells along the y-axis, default is 640.
bandwidth_t: Float, the temporal bandwidth (in days), default is 6. REQUIRED if KDV_type="STKDV".
t_pixels: Integer, the number of grid cells along the t-axis, default is 32. REQUIRED if KDV_type="STKDV".
num_threads: Integer, the number of threads, default is 8.
To visualize the KDV (or STKDV) result, you can use the following code.
from keplergl import KeplerGl
map_traffic_kdv = KeplerGl(height=600, data={"data_1": traffic_kdv.result})
map_traffic_kdv
To support the bandwidth-tuning analysis task, you can use the following code.
from keplergl import KeplerGl
map_traffic_kdv_bands = KeplerGl(height=500)
for i in range(len(bandwidths_traffic_kdv)):
map_traffic_kdv_bands.add_data(data=result_traffic_kdv[i], name='data_%d'%(i+1))
map_traffic_kdv_bands
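If you prefer standalone files over inline notebook widgets, the Kepler.gl maps can also be exported to HTML; the file names below are placeholders:

map_traffic_kdv.save_to_html(file_name='single_kdv.html')            # single-KDV map
map_traffic_kdv_bands.save_to_html(file_name='kdv_bandwidths.html')  # bandwidth-tuning maps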
We offer five sample datasets for testing: (1) Atlanta crime dataset [a], (2) Seattle crime dataset [b], (3) New York traffic accident dataset [c], (4) Hong Kong COVID-19 dataset [d], and (5) China Hainan Sanya taxi dataset [e]. The Python code (data_processing.py) and the Jupyter notebook (data_processing.ipynb) for extracting these datasets are provided in this GitHub link.
[a] Atlanta Open Data. http://opendata.atlantapd.org/.
[b] Seattle Open Data. https://data.seattle.gov/Public-Safety/SPD-Crime-Data-2008-Present/tazs-3rd5.
[c] NYC Open Data. https://data.cityofnewyork.us/Public-Safety/Motor-Vehicle-Collisions-Crashes/h9gi-nx95.
[d] Hong Kong Open Data. https://geodata.gov.hk/gs/view-dataset?uuid=d4ccd9be-3bc0-449b-bd27-9eb9b615f2db&sidx=0.
[e] Hainan Sanya taxi Data. https://github.com/libkdv/libkdv/blob/main/hainan-sanya-taxi.csv.
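As a rough illustration of the preparation step (the authoritative steps are in data_processing.ipynb), a raw export can be reduced to the columns used in the examples above; the lat, lon, and t column names and the source column names are assumptions and may differ from what the library actually expects:

import pandas as pd

raw = pd.read_csv('./Datasets/raw_export.csv')  # hypothetical raw file
prepared = pd.DataFrame({
    'lat': raw['Latitude'],                     # assumed source column names
    'lon': raw['Longitude'],
    # assumed: time encoded as Unix seconds for STKDV
    't': (pd.to_datetime(raw['Date']) - pd.Timestamp('1970-01-01')) // pd.Timedelta('1s'),
}).dropna()
prepared.to_csv('./Datasets/my_dataset.csv', index=False)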
There are three main advantages of using LIBKDV.
Easy-to-use software package: Domain experts only need to write a few lines of Python code to use LIBKDV, which is as easy as using other Python packages, such as Scikit-learn and Scipy.
High efficiency: LIBKDV is the first library that reduces the worst-case time complexity of generating KDV, which cannot be achieved by other software tools. Here, we also conduct an experiment on the Seattle crime dataset to compare the efficiency of different Python packages for generating KDV. In this experiment, we fix the resolution to 1280 x 960 and sample the dataset at different percentages. Observe from Figure 3 that all the existing libraries, including Scipy, Scikit-learn, and Statsmodels, take at least 100 seconds to generate a single KDV even when we sample only 1% of the data points in this dataset. In contrast, LIBKDV takes less than 10 seconds and is therefore much more scalable (a rough timing sketch for reproducing such a comparison is shown after this list of advantages). Therefore, instead of calling the KDV function in other Python packages, domain experts can call our efficient KDV function in LIBKDV.
Figure 3: Response time of different Python libraries for generating KDV on the Seattle dataset, varying the dataset size.
High versatility: Due to the high efficiency of LIBKDV, our library can support more KDV-based geospatial analysis tasks, including bandwidth-tuning analysis (cf. Figure 4) and spatiotemporal analysis (cf. Figure 5), which cannot be natively and feasibly supported by other software tools.
Figure 4: Bandwidth-tuning analysis for the New York traffic accident dataset.
Figure 5: Spatiotemporal analysis for the Hong Kong COVID-19 dataset.
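For readers who want to reproduce the kind of comparison shown in Figure 3 on their own machine, the following is a rough timing sketch (not the exact script used for the figure); the file path and the lat/lon column names are assumptions, and the bandwidths of the two libraries are not calibrated to be equivalent:

import time
import numpy as np
import pandas as pd
import libkdv
from sklearn.neighbors import KernelDensity

data = pd.read_csv('./Datasets/Seattle.csv')     # hypothetical prepared file
sample = data.sample(frac=0.01, random_state=0)  # 1% sample, as in the experiment

# Baseline: Scikit-learn KDE evaluated on a 1280 x 960 grid.
pts = sample[['lat', 'lon']].to_numpy()          # assumed column names
xs = np.linspace(pts[:, 1].min(), pts[:, 1].max(), 1280)
ys = np.linspace(pts[:, 0].min(), pts[:, 0].max(), 960)
gx, gy = np.meshgrid(xs, ys)
grid = np.column_stack([gy.ravel(), gx.ravel()])

start = time.time()
kde = KernelDensity(kernel='epanechnikov', bandwidth=0.01).fit(pts)
kde.score_samples(grid)
print('Scikit-learn:', time.time() - start, 'seconds')

# LIBKDV on the same sample and resolution.
start = time.time()
seattle_kdv = libkdv.kdv(sample, KDV_type="KDV", bandwidth=1000,
                         row_pixels=1280, col_pixels=960)
seattle_kdv.compute()
print('LIBKDV:', time.time() - start, 'seconds')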
In this GitHub link, we also provide three Jupyter notebooks, namely Demo_single_KDV.ipynb, Demo_KDV_bandwidth.ipynb, and Demo_STKDV.ipynb, which demonstrate generating a single KDV, bandwidth-tuning analysis, and spatiotemporal analysis, respectively. Interested users can download these Jupyter notebooks to test our library. Please also refer to the demonstration video for more details.
Prof. (Edison) Tsz Nam Chan, Shenzhen University
Mr. Pak Lon Ip, University of Macau
Mr. Kaiyan Zhao, University of Macau
Prof. (Ryan) Leong Hou U, University of Macau
Prof. Byron Choi, Hong Kong Baptist University
Prof. Jianliang Xu, Hong Kong Baptist University
Prof. Reynold Cheng, The University of Hong Kong
Prof. (Ken) Man Lung Yiu, Hong Kong Polytechnic University
Dr. Zhe Li, Alibaba Cloud
Mr. Bojian Zhu, Xidian University (now in Hong Kong Baptist University)
Mr. Rui Zang, Hong Kong Baptist University
Mr. Ye Li, University of Macau
Mr. Weng Hou Tong, University of Macau
Mr. Shivansh Mittal, The University of Hong Kong