1

Download the 1 km² attribute for the German Census 2011 immigrant rate, which has the German name Auslaender_A. Then, download the data for the vacancy rate (German name: Leerstandsquote) at the same raster resolution. Make sure to define missing values properly.
The function for downloading is z11::z11_get_1km_attribute(), and the one for listing all attributes at this resolution is z11::z11_list_1km_attributes(). There, you can also see both layer names.
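Before downloading, you can print the list of available 1 km² attributes to look up the exact layer names (a quick sketch; the two names used below are taken from the exercise text):

```r
# list all attributes available at the 1 km² resolution;
# the two layers used here are called Auslaender_A and Leerstandsquote
z11::z11_list_1km_attributes()
```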
# download the immigrant rate raster
immigrant_rate_1km <- z11::z11_get_1km_attribute(Auslaender_A)

# recode missing values (coded as negative numbers) to NA
immigrant_rate_1km[immigrant_rate_1km <= -1] <- NA

# download the vacancy rate raster
vacany_rate_1km <- z11::z11_get_1km_attribute(Leerstandsquote)

# recode missing values to NA here as well
vacany_rate_1km[vacany_rate_1km <= -1] <- NA

immigrant_rate_1km
## class       : SpatRaster 
## dimensions  : 867, 641, 1  (nrow, ncol, nlyr)
## resolution  : 1000, 1000  (x, y)
## extent      : 4031500, 4672500, 2684500, 3551500  (xmin, xmax, ymin, ymax)
## coord. ref. : ETRS89-extended / LAEA Europe (EPSG:3035) 
## source(s)   : memory
## name        : Auslaender_A 
## min value   :            0 
## max value   :          100
vacany_rate_1km
## class       : SpatRaster 
## dimensions  : 867, 641, 1  (nrow, ncol, nlyr)
## resolution  : 1000, 1000  (x, y)
## extent      : 4031500, 4672500, 2684500, 3551500  (xmin, xmax, ymin, ymax)
## coord. ref. : ETRS89-extended / LAEA Europe (EPSG:3035) 
## source(s)   : memory
## name        : Leerstandsquote 
## min value   :               0 
## max value   :             100

2

Crop both datasets to the extent of Berlin (or any other big German city if you want to).
You can use the osmdata::getbb() function with the argument format_out = "sf_polygon" to receive a proper polygon for this purpose. Make sure to extract only the multipolygon and to convert it to EPSG:3035.
# query the boundary polygon for Berlin from OpenStreetMap
berlin <- osmdata::getbb("Berlin", format_out = "sf_polygon")

# keep only the multipolygon and reproject to EPSG:3035
berlin <- 
  berlin$multipolygon |>  
  sf::st_transform(3035)

# crop both rasters to Berlin's extent
immigrant_rate_1km_berlin <- terra::crop(immigrant_rate_1km, berlin)

vacancy_rate_1km_berlin <- terra::crop(vacany_rate_1km, berlin)
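Note that terra::crop() only trims the rasters to Berlin's rectangular bounding box. If you also want cells outside the city boundary set to NA, you can additionally mask with the polygon (a sketch; terra::mask() expects a SpatVector, hence the terra::vect() conversion):

```r
# crop to the bounding box, then mask out cells outside the city polygon
immigrant_rate_1km_berlin_masked <-
  terra::mask(
    terra::crop(immigrant_rate_1km, berlin),
    terra::vect(berlin)
  )
```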

3

This is an open exercise: How would you try to conduct a basic correlational analysis of both raster data attributes?
You can coerce the whole raster dataset into a matrix using terra::as.matrix(x).
# z-standardize both rasters to make their scales comparable
immigrant_rate_1km_berlin_z <- terra::scale(immigrant_rate_1km_berlin)

vacancy_rate_1km_berlin_z <- terra::scale(vacancy_rate_1km_berlin)

# correlate the cell values; cor.test() drops incomplete (NA) pairs
# on its own, so no na.rm argument is needed (it does not have one)
cor.test(
  terra::as.matrix(immigrant_rate_1km_berlin_z), 
  terra::as.matrix(vacancy_rate_1km_berlin_z)
) 
## 
##  Pearson's product-moment correlation
## 
## data:  terra::as.matrix(immigrant_rate_1km_berlin_z) and terra::as.matrix(vacancy_rate_1km_berlin_z)
## t = 4.1859, df = 1266, p-value = 3.037e-05
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.06218765 0.17079069
## sample estimates:
##       cor 
## 0.1168384
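An alternative that avoids coercing the rasters to matrices: stack both layers into one SpatRaster and pass it to terra::layerCor(). This yields the pairwise Pearson correlation between the layers, though unlike cor.test() it does not provide a p-value or confidence interval (a sketch; check your terra version for the exact shape of the return value):

```r
# combine both standardized layers into a single two-layer SpatRaster
both_rates <- c(immigrant_rate_1km_berlin_z, vacancy_rate_1km_berlin_z)

# pairwise Pearson correlation between the layers, ignoring NA cells
terra::layerCor(both_rates, "pearson", na.rm = TRUE)
```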
# cell-wise product of the z-standardized rasters: large positive values mark
# cells where both rates deviate from their means in the same direction
pseudo_heatmap <- immigrant_rate_1km_berlin_z * vacancy_rate_1km_berlin_z

library(tmap)

# map the two standardized rates and their product side by side
tmap_arrange(
  tm_shape(immigrant_rate_1km_berlin_z) +
    tm_raster(palette = "-viridis", title = "") +
    tm_layout(title = "Immigrants"),
  tm_shape(vacancy_rate_1km_berlin_z) +
    tm_raster(palette = "-viridis", title = "") +
    tm_layout(title = "Vacancies"),
  tm_shape(pseudo_heatmap) +
    tm_raster(palette = "-viridis", title = "") +
    tm_layout(title = "Immigrants X Vacancies")
)