Thus far, we have only used Queen neighborhood matrices with our data. Let’s use this exercise to try out different variations. First of all, run the code below to compile the data that were also used in the lecture.

(Console output omitted: sf::st_read() reports 543 voting-district polygons of type MULTIPOLYGON in the projected CRS ETRS89 / UTM zone 32N; readr::read_csv2() reads 949 rows and 79 semicolon-delimited columns.)
# voting district polygons for Cologne, reprojected to EPSG:3035
voting_districts <-
  sf::st_read("./data/Stimmbezirk.shp") |> 
  dplyr::transmute(Stimmbezirk = as.numeric(nummer)) |> 
  sf::st_transform(3035)

# AfD vote shares per district from the City of Cologne's open data portal
afd_votes <-
  glue::glue(
    "https://www.stadt-koeln.de/wahlen/bundestagswahl/09-2021/praesentation/\\
    Open-Data-Bundestagswahl476.csv"
  ) |> 
  readr::read_csv2() |> 
  dplyr::transmute(Stimmbezirk = `gebiet-nr`, afd_share = (F1 / F) * 100)

# join the vote shares to the district geometries
election_results <-
  dplyr::left_join(
    voting_districts,
    afd_votes,
    by = "Stimmbezirk"
  )

# German Census 2011 100m grid: foreign citizens, cropped and masked to Cologne
immigrants_cologne <-
  z11::z11_get_100m_attribute(STAATSANGE_KURZ_2) |> 
  terra::crop(election_results) |> 
  terra::mask(election_results)

# German Census 2011 100m grid: inhabitants, cropped and masked to Cologne
inhabitants_cologne <-
  z11::z11_get_100m_attribute(Einwohner) |> 
  terra::crop(election_results) |> 
  terra::mask(election_results)

# share of foreign citizens per grid cell (in percent)
immigrant_share_cologne <-
  (immigrants_cologne / inhabitants_cologne) * 100

# aggregate the raster values to the voting districts (mean per polygon)
election_results <-
  election_results |> 
  dplyr::mutate(
    immigrant_share = 
      exactextractr::exact_extract(
        immigrant_share_cologne, election_results, 'mean', progress = FALSE
      ),
    inhabitants = 
      exactextractr::exact_extract(
        inhabitants_cologne, election_results, 'mean', progress = FALSE
      )
  )

1

As in the lecture, create neighborhood (weight) matrices, but this time do it for both Queen and Rook neighborhoods. Also apply row normalization.
You can either use the spdep package with its function spdep::poly2nb() or the more modern sfdep package with its function sfdep::st_contiguity(). In both cases, you have to set the option queen = FALSE for Rook neighborhoods.
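For illustration, a minimal sketch with spdep could look like this (the object names queen_nb, queen_weights, and so on are just placeholders; with sfdep you would instead call sfdep::st_contiguity() and sfdep::st_weights() inside dplyr::mutate()):

# Queen and Rook contiguity neighbors from the district polygons
queen_nb <- spdep::poly2nb(election_results, queen = TRUE)
rook_nb  <- spdep::poly2nb(election_results, queen = FALSE)

# row-normalized weights (style = "W")
queen_weights <- spdep::nb2listw(queen_nb, style = "W")
rook_weights  <- spdep::nb2listw(rook_nb, style = "W")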

2

We have not used them yet, but you can also create distance-based weight matrices. Use the package of your choice again and create weights for distances between 0 and 5000 meters. Again, apply row normalization.

For this exercise, you must also convert the polygon data to point coordinates. We propose using the centroids for this task:

election_results_centroids <- sf::st_centroid(election_results)

Use a map to check that this conversion was successful.
If you use spdep, use the function spdep::dnearneigh(); if you use sfdep, use the function sfdep::st_dist_band().
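For illustration, a minimal sketch with spdep, building on the centroids created above (the base R plot() calls are just one quick way to map the check; tmap or ggplot2 work just as well):

# quick visual check: do the centroids line up with the district polygons?
plot(sf::st_geometry(election_results))
plot(sf::st_geometry(election_results_centroids), add = TRUE, pch = 20)

# distance-band neighbors between 0 and 5000 meters with row-normalized weights
distance_nb <- spdep::dnearneigh(election_results_centroids, d1 = 0, d2 = 5000)
distance_weights <- spdep::nb2listw(distance_nb, style = "W")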

3

Now, let’s see how these different spatial weights perform in an analysis. Calculate Moran’s I and Geary’s C for each one of the weights and report their results for the variable afd_share.
It is essential to remember which path (spdep or sfdep) you have taken before, as it determines how you solve this exercise.
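For illustration, a sketch with spdep, assuming the weight objects from the previous exercises (queen_weights, rook_weights, distance_weights) exist; with sfdep you would use sfdep::global_moran_test() and sfdep::global_c_test() on the corresponding list columns instead:

# Moran's I and Geary's C for the AfD vote share using the Queen weights
spdep::moran.test(election_results$afd_share, listw = queen_weights)
spdep::geary.test(election_results$afd_share, listw = queen_weights)

# repeat with rook_weights and distance_weights and compare the statistics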