Classifying crime places by neighborhood visual appearance and police geonarratives: a machine learning approach

Md Amiruzzaman, Andrew Curtis, Ye Zhao, Suphanut Jamonnak, Xinyue Ye

Research output: Contribution to journal › Article › peer-review

19 Scopus citations

Abstract

The complex interrelationship between the built environment and social problems is often described, but studies frequently lack the data and analytical framework to explore the potential of such a relationship in different applications. We address this gap using a machine learning (ML) approach to study whether street-level built environment visuals can be used to classify locations with high-crime and lower-crime activities. For training the ML model, spatialized expert narratives are used to label different locations. Semantic categories (e.g., road, sky, greenery) are extracted from Google Street View (GSV) images of those locations through a deep learning image segmentation algorithm. From these, local visual representatives are generated and used to train the classification model. The model is applied to two cities in the U.S. to predict which locations are linked to high crime. Results show our model can predict high- and lower-crime areas with high accuracies (above 98% and 95% in the first and second test cities, respectively).
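The abstract outlines a pipeline: segment GSV images into semantic categories, aggregate per-class pixel shares into a visual feature vector for each location, and train a classifier on locations labeled from the police geonarratives. The sketch below illustrates one way such a pipeline could look; the segmentation backbone (torchvision's DeepLabV3, standing in for a model trained on street-scene classes such as road, sky, and greenery), the averaging of image-level proportions into a location feature, and the random forest classifier are assumptions for illustration, not the authors' exact implementation.

# Hypothetical sketch: per-location visual features from street-view images
# via semantic segmentation, then a binary high-/lower-crime classifier.
# Model and classifier choices are assumptions; the paper's exact pipeline
# is not specified in the abstract.
import numpy as np
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

NUM_CLASSES = 21  # class count of the chosen segmentation model

seg_model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def semantic_proportions(image_path: str) -> np.ndarray:
    """Fraction of pixels assigned to each semantic class in one GSV image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = seg_model(preprocess(img).unsqueeze(0))["out"]
    labels = logits.argmax(dim=1).squeeze(0).numpy()
    counts = np.bincount(labels.ravel(), minlength=NUM_CLASSES)
    return counts / counts.sum()

def location_features(image_paths: list[str]) -> np.ndarray:
    """Average per-image proportions into one local visual representative."""
    return np.mean([semantic_proportions(p) for p in image_paths], axis=0)

def train_crime_classifier(locations):
    """locations: list of (gsv_image_paths, label) pairs, where label 1 marks
    a location described as high-crime in the geonarratives, 0 lower-crime."""
    X = np.stack([location_features(paths) for paths, _ in locations])
    y = np.array([label for _, label in locations])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf

In this sketch, a trained classifier could then be applied to GSV-derived features from a second city to test whether the visual signal transfers, mirroring the cross-city evaluation described in the abstract.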

Original language: English (US)
Pages (from-to): 813-837
Number of pages: 25
Journal: Journal of Computational Social Science
Volume: 4
Issue number: 2
DOIs
State: Published - Nov 2021
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Transportation
  • Artificial Intelligence

Keywords

  • Geonarrative
  • Machine learning
  • Semantic segmentation
  • Street-view image analysis
  • Urban crime
