Monday, May 15, 2017

Raster Modeling

Goals and Objectives:

The goals of this lab were to learn how to do raster analysis, build a sand mining suitability model, build a sand mining risk model, and overlay the results of these two models to find the best locations for sand mining with minimal environmental and community impact.

Suitability for mining:
1. Generate a spatial data layer to meet geologic criteria
2. Generate a spatial data layer to meet land use land cover criteria
3. Generate a spatial data layer to meet distance to railroads criteria
4. Generate a spatial data layer to meet the slope criteria
5. Generate a spatial data layer to meet the water-table depth criteria
6. Combine the five criteria into a suitability index model 
7. Exclude the non-suitable land cover types 

Risk for mining:
1. Generate a spatial data layer to measure impact to streams
2. Generate a spatial data layer to measure impact to prime farmland
3. Generate a spatial data layer to measure impact to residential or populated areas 
4. Generate a spatial data layer to measure impact to schools
5. Generate a spatial data layer to measure impact on local parks
6. Combine the factors into a risk model
7. Examine the results in proximity to prime recreational areas

Datasets and Sources:

Bureau of Transportation Statistics: Rail terminals feature class.

Trempealeau County Land Records: Trempealeau County Geodatabase

Wisconsin Geological and Natural History Survey: Bedrock Geology of Wisconsin, West-Central Sheet.

Methods:

For the suitability model, each variable followed a similar workflow. The geology feature class had to be converted into a raster before it could be reclassified. The Euclidean Distance tool was applied to the rail terminals feature to create distance rings around them, representing the distance from any given point on the map; the result was then reclassified. Slope was calculated from the DEM, and Block Statistics was applied to average and smooth the results before the slope raster was reclassified. Land Cover and Water Table Depth could be reclassified right away. The categories and reasoning behind the reclassifications are discussed below. The final step in creating the suitability model was to use the Raster Calculator to sum all of the reclassified rasters.
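Stripped of arcpy, the final summing step amounts to a cell-by-cell addition of the reclassified grids. A minimal plain-Python sketch (the 2x2 grid values here are invented for illustration; the lab used Raster Calculator on full county rasters):

```python
# Hypothetical reclassified rasters as small grids (rank 1-3 per cell).
geology   = [[3, 1], [3, 3]]
landcover = [[2, 1], [3, 2]]
rail_dist = [[3, 2], [1, 3]]
slope     = [[3, 3], [2, 1]]
water     = [[2, 2], [3, 1]]

def add_rasters(*grids):
    """Cell-by-cell sum of equally sized grids, as Raster Calculator would do."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[sum(g[r][c] for g in grids) for c in range(cols)]
            for r in range(rows)]

suitability = add_rasters(geology, landcover, rail_dist, slope, water)
print(suitability)  # each cell ranges from 5 (least suitable) to 15 (most)
```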

Figure 1: Model used to create the Mining Suitability raster.  All of the reclassified rasters were added together to produce this.

Normally, all of these processes can be chained together in ModelBuilder. For this project, however, ModelBuilder issues limited the tasks it could be used for, so everything was reclassified before being brought into ModelBuilder in order to deliver exactly what was needed for this portion of the lab.

Figure 2: Reclassification of Geology features.  These two were isolated because they are the only ones that are viable for sand mining.

Figure 3: The first table shows the suitability rank of each land cover type, where 1 means there is no chance of mining and 3 means it is a great location for mining. These ranks were based on how much material would need to be cleared before mining could begin at the location. The second table isolates the land cover types that have at least some chance of being mined.

Figure 4: This table shows the reclassification of distances to railroad depots, in miles. The breaks were set on an exponential scale, with hauling convenience in mind.

Figure 5: This table shows the slope of the land surface as a percentage. Lower slopes are much more suitable for mining than steeper ones.

Figure 6: The water table reclassification scheme was based on the depth of the water table. Mining companies prefer a water table closer to the surface so they don't have to drill as far.


The next portion of this lab examined the criteria for sand mining's impacts on the community and environment: rivers, farmland, residential areas, schools, and parks. Each criterion was broken into three ranks (3 = high risk, 2 = moderate risk, 1 = low risk). Euclidean distances were calculated for all of these features because distance is the main factor in the impact mines have on them.
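The distance-to-rank reclassification each criterion went through can be sketched as a simple threshold function; the break values below are placeholders, not the lab's actual class breaks:

```python
def risk_rank(distance_m, near=500, far=1500):
    """Rank a cell's risk by Euclidean distance to a sensitive feature.
    Thresholds (near/far, in meters) are illustrative only."""
    if distance_m <= near:
        return 3  # high risk: close to the feature
    if distance_m <= far:
        return 2  # moderate risk
    return 1      # low risk: far from the feature

print(risk_rank(200), risk_rank(1000), risk_rank(5000))  # 3 2 1
```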

Figure 7: This is the model used to calculate the Risk Index.  Similar to above, all of the reclassified rasters were added together to produce this.

Figure 8: The first table shows how the streams to include were chosen. They were first ranked by importance, then, when viewed on a map, narrowed down to only the primary streams that always have water flowing. The second table is the reclassification based on the Euclidean Distance tool; the areas closest to the rivers carry the highest risk from nearby mining.

Figure 9: This shows the reclassification scheme for farmland. The highest-risk classes are the best farmlands; the only class with no risk at all is land that is not viable for farming under any circumstance.

Figure 10: This shows the risk within the noise/dust buffer around residential areas. First, a 640-meter buffer was placed around all residential areas; then Euclidean Distance was run to create these zones of risk.

Figure 11: This shows the risk factor near schools. The closer a mine is to a school, the more dangerous it becomes for the children.

Figure 12: This shows the risk factor near parks. People will use parks less if there is heavy dust and noise pollution from nearby sand mines, so it is important to avoid them whenever possible.

Results:

Figure 13: Suitability Index map. This map shows where the land resources are most suitable for mining use.
The following maps were all added together to create this index:




Figure 14: Risk Index. This shows where the highest and lowest risk areas are in regard to the impact mining will have on the community and environment.  The following maps were added together to create this:




Figure 15: This map shows the output of the Viewshed tool, which identifies the areas visible from a given point. It can be used to assess the visual impact these mines will have on tourist attractions in an area.


Figure 16: This is the model used to execute the Viewshed tool.



Figure 17: This is the final map, which combines both the Suitability Model and the Risk Model. It was created by reclassifying both models onto a common scale so that suitable land and low risk align. The green areas show the most ideal locations for these mines: they are on suitable land and have minimal risk associated with them.
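Once risk has been inverted so that low risk scores high, the combination step reduces to adding the two reclassified grids. A toy sketch with made-up values on a common 1-3 scale:

```python
# Hypothetical reclassified grids: suit 3 = most suitable;
# risk_inv 3 = lowest risk (risk ranks inverted onto the same scale).
suit = [[3, 1], [2, 3]]
risk_inv = [[3, 2], [1, 3]]

final = [[s + r for s, r in zip(srow, rrow)]
         for srow, rrow in zip(suit, risk_inv)]
print(final)  # a 6 marks cells that are both highly suitable and low risk
```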

Figure 18: This is the model used to create the final map.  The two reclassified rasters were added together in Raster Calculator.

Discussion:

These results are important because sand mining is a growing industry in Wisconsin. All of these factors should be kept in mind when deciding where to put a new mine because of the many implications associated with it. Done right, sand mining can be beneficial; done wrong, it creates unnecessary problems and gives the entire industry a worse reputation. These results can be very helpful to companies looking for places to start new mines; however, the process would be much more effective if experts in multiple fields collaborated to create the most well-informed classifications. The classifications for this assignment were made with best judgment in mind, but there is always a chance of error.

Conclusions:

Throughout this lab, it became evident how complex the issue of sand mining really is. It is extremely difficult to keep every factor in mind, especially the ones that do not impact somebody directly. Overall, it is important to consider as many factors as possible when working with sensitive topics such as this one. With collaboration, these problems can be worked through, and using GIS to compile these aspects in one place is just one of many useful ways of handling these types of issues.











Thursday, April 13, 2017

Network Analysis

Goal and Objectives:

The primary goal of this assignment is to perform network analysis to find the closest routes from sand mines to rail stations for transporting the sand. One objective was to use Python to query out mines that are active and do not have a rail station on site. The mines also should not be within 1.5 km of a rail line, because closer mines likely already have rail spurs connecting them. Another goal is to gain experience with ModelBuilder by building a model around the Closest Facility solver to calculate closest-facility routes. The final goal is to create an equation to estimate a hypothetical cost of sand-truck travel on roads by county.

A White Paper from the National Center for Freight and Infrastructure Research and Education (2013) provides some important context for this study. It notes that a common concern among communities where sand mines are located is road damage, caused by large, heavy vehicles frequently traveling along local roads as part of the mine's operations. In a table presented in the article, it notes road damage as a significant impact in all types of sand mining operations. It describes a case study done in Chippewa County, WI, where sand mining impacts on roads are a major local concern, and explains how different types of roads (state, local, etc.) carry different regulations. In many cases, negotiations occur to determine appropriate reimbursement terms between the mine operators and the counties involved.

The mine data set used in this study is the official, most recent set of mines from the DNR.  The street map used as the source for the Network Dataset came from ESRI.  The rail terminal locations were provided in the geodatabase for this lab, but they likely came from the Department of Transportation. In this study, hypothetical numbers are used when examining the number of trips and cost for each county.


Methods:

The first portion of this lab involved using Python scripting to extract the mines whose trucking routes need to be examined: mines that are active, do not have a rail loading station on site, and are not within 1.5 km of a rail line. The purpose was to determine the starting points for route planning in the cases where trucks would be needed to transport sand. The script used to execute this process can be found in the Python Scripts blog post below.

To execute the rest of this project, ModelBuilder was used (see Figure 1). The workflow is explained below.


Figure 1: Model used to execute this project.
The first step in this network analysis is to create a Closest Facility layer, with the streets network as the input. The Add Locations tool is used to load the final mines extracted by the Python code as the incidents, then run again to add the rail terminals as the facilities. The Solve tool runs the network analysis and determines the best routes. To export the resulting routes as a feature class, the Select Data tool is used with Routes as the child data element, and the Copy Features tool then saves the selected routes into the geodatabase.
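Conceptually, the Closest Facility solver runs a least-cost search over the street network from each incident (mine) until it reaches the nearest facility (rail terminal). On a toy road graph, the same idea can be sketched with Dijkstra's algorithm (node names and mileages below are invented):

```python
import heapq

# Toy road network: node -> [(neighbor, miles)]; purely illustrative.
roads = {
    "mine_A": [("jct1", 4.0)],
    "jct1": [("mine_A", 4.0), ("terminal_X", 6.0), ("jct2", 2.0)],
    "jct2": [("jct1", 2.0), ("terminal_Y", 3.0)],
    "terminal_X": [("jct1", 6.0)],
    "terminal_Y": [("jct2", 3.0)],
}

def closest_facility(graph, start, facilities):
    """Dijkstra from the incident; stop at the first facility reached."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in facilities:
            return node, d          # nearest facility and its network distance
        if d > dist.get(node, float("inf")):
            continue                # stale heap entry
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None, float("inf")

print(closest_facility(roads, "mine_A", {"terminal_X", "terminal_Y"}))
```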

The next portion of this model calculates the total road length of the routes by county. First, to clean up the data frame a bit, the counties, rail terminals, and roads layers are all clipped to Wisconsin and Minnesota. Minnesota is included because some routes found it most efficient to take the sand to a terminal in Minnesota. The Identity tool is then used to attach county names to the route information. For proper analysis, the layers are projected; this minimizes distortion in the final map (see Figure 2) and allows proper measurement of route lengths. To find the total distance of routes in each county, the Summary Statistics tool is used. This provides a summary of the distances in feet; however, it is more relevant to see the distances in miles, so the Add Field and Calculate Field tools are used, dividing the distance value by 5,280. To establish the estimated cost for each county, another field is added to the table and the following equation is used to determine the hypothetical cost:
Cost ($) = (miles × 2.2 / 100) × trips. In this study, it is assumed that 50 trips are taken each year to the mines, plus 50 trips back. The resulting table can be seen below (see Figure 3).
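The feet-to-miles conversion and the cost equation can be captured in a small helper. The $2.20-per-100-miles rate and the 100 annual trips mirror the lab's hypothetical assumptions:

```python
FEET_PER_MILE = 5280

def route_cost(length_ft, trips=100, rate_per_100mi=2.2):
    """Hypothetical road-wear cost for a county: (miles * 2.2 / 100) * trips.
    trips=100 reflects the assumed 50 round trips per year."""
    miles = length_ft / FEET_PER_MILE
    return miles * rate_per_100mi / 100 * trips

print(round(route_cost(52800), 2))  # 10 miles of route -> 22.0
```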

Results and Discussion:

The following map shows the most efficient truck routes from the relevant mines to the nearest rail facility.

Figure 2: Map of routes
This shows that in many cases multiple mines route to the same rail terminal. This can have a huge impact on the roads along those routes due to the overlap, as well as the number of trips taken from each mine. It also shows that Minnesota is impacted as well as Wisconsin because, in some cases, the closest rail depot is actually in Minnesota. This creates more complicated issues: the bulk of the business occurs in Wisconsin, while Minnesota also bears road-repair costs.

Figure 3: Cost per county

This table lists the costs (in USD) imposed by the mining industry on each county the routes travel through. As one can see, there are many instances where a route passes through a county with no actual mining activity. In these cases, the counties are impacted but may not receive proper compensation for their roads.

Conclusions:

It is very evident that mining takes a toll on the communities surrounding its operations. One major concern is its impact on road conditions where sand must be transported to rail stations by heavy trucks. This study examined the most efficient routes to rail stations from mines that are not near a rail line. In many cases, one mine alone has a huge impact on multiple counties and even multiple states. Considered in the grand scheme of things, the sand mining industry has a much larger impact than is initially noticed at the local level.

Sources:

White Paper on Frac Sand Mining

ESRI

DNR

DOT

Friday, April 7, 2017

Geocoding

Goals and Objectives:

The goals of this assignment are to learn how to geocode address and PLSS locations and to compare the results to work done by others. In the context of this sand mining project, 19 mine locations were provided; the goal was to geocode them and compare them to the same mines geocoded by classmates, which provides an opportunity to analyze potential errors. Before geocoding could be done, the data had to be normalized.

Methods:

The first step in the process of geocoding was to ensure that the data was normalized.  This meant manipulating the given datasets in Microsoft Excel so that each record would be in the same format.  Essentially, this split the different parts of the addresses up in a way that the geocoding tools could use to locate the desired locations.
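For a record that is a plain street address, normalization amounts to splitting one field into the columns the geocoder expects. A minimal sketch with a made-up address (the real records also mixed in PLSS descriptions, which needed manual handling):

```python
# Hypothetical raw record in a single field, as in the original table.
raw = "1234 County Rd Q, Arcadia, WI 54612"

def normalize(record):
    """Split a single-field address into separate geocoder-ready columns."""
    street, city, state_zip = [part.strip() for part in record.split(",")]
    state, zipcode = state_zip.split()
    return {"Street": street, "City": city, "State": state, "Zip": zipcode}

print(normalize(raw))
```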

The geocoding itself required two different processes: one when actual addresses were provided, and another when only PLSS information was available and locations had to be found much more manually. When addresses were provided, the "Geocode Addresses" function could be used. This takes the information from the normalized table and generates a list of candidate locations for each address. I would then zoom to these candidates and select the one that appeared most accurate, using the imagery basemap as well as Google Maps satellite view as references.

When the physical address information was not available, the PLSS coordinates were used to locate the sand mines.  This was a much more manual process and required the use of the imagery base map, as well as layers displaying PLSS sections and townships in order to get a general idea of where these mines are.  When the mine was located, the "Geocode Addresses" function was used to mark the location for that point. In many cases, the mines that had addresses attached also included PLSS information.  When this was the case, the PLSS information was used to verify the accuracy of the address information.  When all 19 mines were geocoded, a shapefile was created to share with the rest of the class.

The next portion of the assignment was to compare my geocoded mines with the same mines geocoded by others in the class. To do this, all of the shapefiles were brought into ArcMap and combined into one layer with the Merge tool. Since this step required measuring distances, I first made sure that all of the data used the same projection. From the merged layer, I queried out the 19 mines that I had geocoded and selected a sample of 5 mines that at least 2 other people had also geocoded. I then used the "Point Distance" tool to measure the distance between the point where I placed each mine and the locations 2 of my classmates chose. A table was generated with these results. The same process was used to compare the same 5 mines with the true locations provided by the DNR.
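Once every layer shares one projected coordinate system, the Point Distance comparison reduces to planar Euclidean distance between coordinate pairs. A sketch with invented coordinates in feet:

```python
import math

# Hypothetical projected coordinates (feet): my point vs. a classmate's.
my_point = (610_000.0, 440_000.0)
classmate_point = (610_300.0, 440_400.0)

def point_distance(a, b):
    """Planar Euclidean distance between two projected points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(point_distance(my_point, classmate_point))  # 500.0 feet
```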

Results:

Table 1: The original location data, before normalization. It is a mixture of actual addresses and PLSS locations all in one field.
Table 2: Locational information after normalization. This involved splitting the given information up in a way that the geocoding function could decipher.
Figure 1: This is a map of the 19 sand mine locations that I geocoded.
Table 3: This table shows the distance in feet between the location where I placed each mine and the locations where 2 other people placed the same mine. The other people's locations are labeled "a/b" in the table to differentiate different people's placements of the same intended mine. The very large distances in some cases, and the high standard deviation, show there were a few instances where my chosen location and a classmate's were at different mines altogether. In other instances, the points were at the same mine but at different entrances, causing a discrepancy.
Table 4: This table shows the distance between my point and the truth point provided by the DNR. In many cases, I noticed my points were at the entrance to the mine whereas the DNR points were inside the mines themselves. Overall, the mean and standard deviation are relatively small distances, showing the discrepancies were not large.
Figure 2: This map shows an example of a difference between my data and the DNR's. The points clearly mark the same mine; however, the actual placement of the points differs. This was a common error that I noticed throughout my dataset.

Discussion:

While the geocoding process is generally reliable, it is impossible to be completely free of errors. This can be seen first of all through the differences in each person's points as noted in the tables and figures above.

Throughout this process there are both inherent and operational errors present.  Inherent errors occur due to the nature of how geographic data is represented.  This occurs when projecting the round earth onto a flat surface.  In this case, that could have an impact when trying to measure the distances between points.  Another way this could have an impact is when trying to match a point with the imagery base map because the base map could be outdated.  I noticed significant differences in the imagery when viewing it at different extents.  When the data was originally collected, there could be an inherent error depending on the equipment used for the collection purposes as well.

Operational errors arise from human actions. The differences in where points were placed on the imagery could be due to people interpreting the basemap image differently, or to people working at different scales when placing points. There could also have been an operational error made when collecting the data in the first place. It is nearly impossible to avoid these errors completely.

It is difficult to ultimately know which points are correct and which ones are not.  The best way to ensure accuracy is to actually go to the locations and verify the point.  However, this is not always possible.  In this case, the most feasible way to ensure data accuracy would be to compare as many different people's geocoded points as possible.  Even while doing that, however, it is impossible to ensure the data is completely correct without physically going to the locations and collecting the raw data.

Conclusions:

Overall, geocoding is a good way to create data points from given addresses. While it may not produce a perfect point every time, it certainly saves time compared with visiting each location to collect a point. It is not without limitations, however: there will always be some risk of error in this process. When examining the data for errors, it would be nice to check each point, but that is not always possible; in most cases a sample of the data is examined, as was done in this lab. Future studies may want to check more locations than just the sample, and compare the locations against a larger sample of other people's work.


Monday, March 13, 2017

Data Gathering

Goals and Objectives:

The goal of this lab is to learn how to find and download data from a variety of sources online, manipulate and join it using ArcGIS, project it into one coordinate system, and build and design a geodatabase to store the data.  It is also intended to provide more exposure to python coding in order to automate geoprocessing of data.

Methods:

The basic workflow for each data set in this lab began by downloading zip files into a temporary directory so the large files could easily be deleted later. They were then extracted into a working folder. The data sets were projected, clipped/extracted, and then loaded into a geodatabase.

The process began by finding and downloading several data sets.  The first one was the Polyline Railway Network file from the US Department of Transportation.  Next, the land cover for the state of Wisconsin (NLCD 2011 Land Cover) was downloaded via the USGS National Map Viewer.  The same website was used to download the elevation data sets (1/3 arc-second DEM).  This required downloading both n44w092_13 and n45w092_13 in order to get data for all of Trempealeau County. From the USDA Geospatial Data Gateway, the Wisconsin Cropland Land Cover was downloaded. From the Trempealeau County Land Records division website, the entire Trempealeau County file geodatabase was downloaded.  Finally, from the USDA NRCS Web Soil Survey website, the Trempealeau County soils data was downloaded.

The soils data was downloaded as a Microsoft Access geodatabase. To use this data properly, it was manipulated in Microsoft Access by importing the .txt files into the geodatabase schema, which connected the actual data to the geodatabase template file. In ArcCatalog, the soils shapefile was then imported into the TMP geodatabase, along with the component table. A relationship class was created to join the component information to the new soils feature class; both were added to ArcMap and joined based on the relationship class. The NTAD rail lines shapefile was added to the map, clipped to the Trempealeau County boundary, and loaded into the TMP geodatabase, which projected the data into the proper coordinate system. The DEMs were then added to the map and combined using the Mosaic to New Raster tool.

The three raster datasets that were downloaded, along with the TMP geodatabase, were moved to their own separate folder. PyScripter was then used to create a Python script that projected the rasters, extracted them to the Trempealeau County boundary, and loaded the .tifs into the geodatabase. The code can be seen in the Python Scripts post below, and the results of the script in Figure 1 below.

Figure 1: The resulting layers that were created from using Python to project and extract the 3 raster datasets.
Legends for Land Cover and Crop Cover were omitted intentionally due to lack of space available. Information about the significance of these values can be found at the source websites linked above. 

Data Accuracy:

Figure 2:  Analysis of the metadata for each dataset downloaded for this lab.



Conclusions:

In general, the data came from reputable sources. It will be important to keep scale in mind when conducting further analyses in order to preserve data integrity. Many of the datasets provided only a few of the data quality parameters, so the others had to be estimated from what was given; this is a potential concern, especially where scale was estimated from resolution values. While the sources seemed reputable, the often incomplete metadata leaves some uncertainty about data quality, particularly where accuracy levels were missing or had to be inferred from other information. The USDA soils dataset had the most complete metadata available. The TMP geodatabase was difficult to evaluate because there was very little metadata for the geodatabase as a whole; most of the available information is attached to individual feature classes, and those feature classes vary widely in their metadata. Overall, it would be much better if the metadata explicitly stated these data quality measures instead of relying on users to infer and estimate them; this would greatly improve confidence.

Sunday, March 12, 2017

Python Scripting

Python scripting is used to automate GIS processes so that large amounts of data can be processed more efficiently with less overhead. Once proficient in the scripting language, one can execute processes much faster than by running them all manually.

#-------------------------------------------------------------------------------
# Name:        Exercise 5: Data Gathering
#
# Author:      Kevin Trushenski
#
# Created:     08/03/2017
#
# Purpose: To learn how to write a python script to project, clip, and load data into a geodatabase.
#-------------------------------------------------------------------------------

#import python module and spatial analyst

import arcpy
from arcpy import env
from arcpy.sa import *

#check out the spatial analyst extension
arcpy.CheckOutExtension("spatial")

#set environment settings
arcpy.env.workspace = r"Q:\StudentCoursework\CHupy\GEOG.337.001.2175\TRUSHEKL\Ex5\Data"
arcpy.env.overwriteOutput = True
print "{}".format(env.workspace)

#get list of rasters from workspace
rs_list = arcpy.ListRasters()
for raster in rs_list:
    print(raster)


#loop through the rasters in the list
for raster in rs_list:
    #define the outputs
    rasterOut = "{}_Out.tif".format(raster)
    rasterExtract = "{}_Extract.tif".format(raster)

    #project the rasters (the county boundary feature class supplies the output coordinate system)
    arcpy.ProjectRaster_management(raster, rasterOut, r"Q:\StudentCoursework\CHupy\GEOG.337.001.2175\TRUSHEKL\Ex5\Data\TrempWebDATA.gdb\Boundaries\County_Boundary")

    #extract the raster by the county boundary and copy the result into the geodatabase
    outExtractByMask = ExtractByMask(rasterOut, r"Q:\StudentCoursework\CHupy\GEOG.337.001.2175\TRUSHEKL\Ex5\Data\TrempWebDATA.gdb\Boundaries\County_Boundary")
    outExtractByMask.save(rasterExtract)
    arcpy.RasterToGeodatabase_conversion(rasterExtract, r"Q:\StudentCoursework\CHupy\GEOG.337.001.2175\TRUSHEKL\Ex5\Data\TrempWebDATA.gdb")
    print "Raster to Geodatabase Conversion {} Successful".format(rasterExtract)


print "The script is complete"



#-------------------------------------------------------------------------------
# Name:      Ex 7 Network Analysis
# Purpose: To create a script that will select active mines that don't have a rail loading station on-site.  It also eliminates mines within 1.5 km of a rail because it is likely a rail spur has already been added.
#
# Author:      Kevin Trushenski
#
# Created:     10/04/2017
#-------------------------------------------------------------------------------

#import system modules
import arcpy
#set environments
from arcpy import env
env.workspace = r"Q:\StudentCoursework\CHupy\GEOG.337.001.2175\TRUSHEKL\ex7\ex7.gdb"
arcpy.env.overwriteOutput = True

#Set variables (all_fc avoids shadowing the built-in all())

all_fc = "all_mines"
active = "active_mines"
mines = "Status_mine"
norail = "mines_norail"
wi = "wi"
rail = "rails_wtm"
worail = "mines_norail_final"

#Set up the field delimiters for the SQL statements

field1 = arcpy.AddFieldDelimiters(all_fc, "Site_Statu")
field2 = arcpy.AddFieldDelimiters(all_fc, "Facility_T")

#SQL statement to select active mines

activeSQL = field1 + " = 'Active'"

#SQL statement for field Facility_T LIKE mine (note the spaces around LIKE)

mineSQL = field2 + " LIKE '%Mine%'"

#SQL statement for field Facility_T NOT LIKE rail

norailSQL = "NOT " + field2 + " LIKE '%Rail%'"

#Make a layer from the feature class with mine status = active

arcpy.MakeFeatureLayer_management(all_fc, active, activeSQL)

#Make a layer from the feature class with facility type = mine

arcpy.MakeFeatureLayer_management(active, mines, mineSQL)

#Make a layer from the feature class without rails

arcpy.MakeFeatureLayer_management(mines, norail, norailSQL)

#Select

arcpy.SelectLayerByLocation_management(norail, "INTERSECT", wi)
arcpy.SelectLayerByLocation_management(norail, "WITHIN_A_DISTANCE", rail, "1.5 KILOMETER", "REMOVE_FROM_SELECTION")

arcpy.CopyFeatures_management(norail, worail)


print "The script is complete."


#-------------------------------------------------------------------------------
# Name:  Exercise 8: Raster Analysis Python
# Purpose: To write a python script to generate a weighted index model in order to place more emphasis on a certain variable in the Risk Model from Exercise 8
#
# Author:      Kevin L Trushenski
#
# Created:     16/05/2017
#-------------------------------------------------------------------------------
#import system modules
import arcpy

#set environments
from arcpy import env
env.workspace = r"Q:\StudentCoursework\CHupy\GEOG.337.001.2175\TRUSHEKL\Ex5\Data\TrempWebDATA.gdb"
arcpy.env.overwriteOutput = True
arcpy.CheckOutExtension("Spatial")

#Create variables
Parks = arcpy.Raster("parksReclass")
Res = arcpy.Raster("res_reclass")
River = arcpy.Raster("river_clip_reclass")
Schools = arcpy.Raster("schoolReclass")
Farm = arcpy.Raster("farm_reclass")

#Weight the most important variable (residential areas)
outweight = Res * 1.5

#Set up the equation for the weighted index model
Weighted = outweight + Parks + River + Schools + Farm

#Save to geodatabase
Weighted.save("weighted_result")

print "The Script is Complete."


Friday, March 3, 2017

Sand Mining in Western Wisconsin Overview

Frac sand mining is an industry that has been present in Wisconsin for over 100 years. There is huge demand for the sand because it is used for hydraulic fracturing, or "fracking," as well as some manufacturing. The sand that is sought after is quartz and must have a very specific grain size and shape. According to the Wisconsin Geological and Natural History Survey, frac sand must be "nearly pure quartz, very well rounded, extremely hard, and of uniform size".

The sand can be found in certain sandstone formations in western and central Wisconsin (see Figure 1).

Figure 1: Locations of frac sand mining sites in Wisconsin, courtesy of the Wisconsin Geological and Natural History Survey.


Frac sand mining has been very controversial due to the implications associated with it. In western Wisconsin, there are concerns about what mining will do to the natural environment, as well as to people who live near the sites. The Wisconsin DNR has enacted several regulations to protect natural resources while still allowing the sand mining industry to flourish. According to the DNR, "Industrial sand mines and other related operations must follow the same state requirements to protect public health and the environment as other nonmetallic mining operations in Wisconsin". The DNR goes on to note that permits for sand mining carry regulations pertaining to storm water, air quality, wetlands, high-capacity wells, solid/hazardous waste, drinking water, and endangered/threatened species.

Whenever a new mine is planned, there is often a lot of controversy surrounding it, especially in the community the mine will be near. One recent example involves the controversial sand mining company Pattison Sand Co., which plans to expand its underground mining operation in Clayton County, Iowa. According to an article in Urban Milwaukee from May 2016, since 2005 this company has "racked up more workplace violations than any other industrial sand mine in the United States".

One tool that can assist in exploring the issues surrounding sand mining is a Geographic Information System (GIS). With GIS, one can analyze the spatial trends associated with fracking, which is helpful when examining regulations as well as geological information. GIS can give a visual perspective on these issues and potentially help find solutions.

Works Cited: