Tuesday, October 25, 2016

Field Activity 6: Distance Azimuth Survey

Introduction
                An accurate and sufficient survey of an area can sometimes be accomplished using a grid-based approach, but this method is not always ideal. Often the survey area is too large to effectively create an accurate grid for spatial sampling. An effective alternative is the use of a GPS receiver in conjunction with a total station, a surveying instrument that combines an electronic distance meter with an electronic theodolite. A total station can measure horizontal and vertical angles and slope distance with high accuracy, and it can calculate the coordinates of an unknown point from a known point. While this combination of technology is a very accurate and efficient way to survey large areas, it is expensive and not always reliable. Total station prices start around $3,500.00, and a GPS receiver is usually still needed to determine at least one known coordinate. Even someone fortunate enough to have access to such equipment should not rely on it completely; technology fails on occasion, and it is important to have an alternative survey method. The distance-azimuth method is one such alternative that can be performed with less complex equipment. This method requires at least one point with known coordinates, and all other data are collected in relation to that point; data recorded this way is known as implicit data. With the coordinates of one point known, a distance-azimuth survey can be completed using only a compass and a measuring tape, although accuracy and efficiency improve as the quality and availability of equipment increase.
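The geometry behind the method is simple: each surveyed point is fixed by its distance and azimuth (measured clockwise from north) from the known origin. A minimal sketch in Python, assuming a local planar coordinate system measured in meters (the function and variable names are illustrative):

    import math

    def point_from_azimuth(x0, y0, azimuth_deg, distance_m):
        """Locate a point distance_m from the origin (x0, y0) along
        the given azimuth, measured in degrees clockwise from north."""
        theta = math.radians(azimuth_deg)
        return (x0 + distance_m * math.sin(theta),
                y0 + distance_m * math.cos(theta))

    # Example: a tree 25 m from the origin at an azimuth of 120 degrees
    x, y = point_from_azimuth(0.0, 0.0, 120.0, 25.0)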

Methods
                This lab was performed in order to gain experience using the distance-azimuth survey method. The study area was a section of Putnam Park, on the UWEC campus (figure 1). A recreational-grade Bad Elf GPS was used to determine the coordinates of three known points within the study area. The implicit data gathered in this survey included azimuth and distance: from each given point, groups used a lensatic compass to determine the azimuth and a TruPulse laser rangefinder to find the distance in meters to various trees in relatively close proximity. Other attribute data included tree species and diameter at breast height (DBH). To collect data efficiently while also allowing each group member to utilize the equipment, the group delegated tasks and rotated between them: one member selected a tree and measured DBH, one found the azimuth, another measured distance, another recorded the data, and tree species identification was a group effort.
In order to compile the data sets from all three coordinate points, the class created a shared Excel spreadsheet in which all groups could enter their collected data. The appropriate fields within the spreadsheet were set to a numerical data format and the table was saved as a CSV file to facilitate the import into ArcMap. To begin mapping the survey in ArcMap, a new geodatabase was created and the table was imported into it. The table was then added to a map document as XY data and converted into a point feature class using the WGS 84 coordinate system, in order to coincide with the GPS coordinate values. When displayed over a world imagery basemap, two of the three points were not in their expected locations: point three was located near Interstate 94, and point two was about 30 yards north of its actual location. The points were corrected by using the basemap for reference and the Editor toolbar to move them to more accurate positions. To map all of the collected data, data management tools in ArcToolbox were used. First, the Bearing Distance to Line tool was used to create lines from each origin to the surveyed trees (figure 2), and then the Feature Vertices to Points tool was used to create points at the vertices of those lines. The resulting points were then symbolized based on tree species (figure 3). Finally, the created features were used to compile a map layout depicting all three study areas.
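Both geoprocessing steps can also be scripted with arcpy. The sketch below is a hedged example: the workspace path, table name, and field names are assumptions, and Feature Vertices to Points requires an Advanced license.

    import arcpy

    arcpy.env.workspace = r"C:\FieldActivity6\survey.gdb"  # hypothetical path

    # Draw a line from each origin out to its surveyed tree using the
    # recorded azimuth (degrees) and distance (meters).
    arcpy.BearingDistanceToLine_management(
        "tree_survey", "survey_lines",
        x_field="x", y_field="y",
        distance_field="distance_m", distance_units="METERS",
        bearing_field="azimuth", bearing_units="DEGREES",
        spatial_reference=arcpy.SpatialReference(4326))  # WGS 84

    # Place a point at the end vertex of each line, i.e., at each tree.
    arcpy.FeatureVerticesToPoints_management("survey_lines", "tree_points", "END")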

Figure 1. Location of study area
Figure 2. Result of Bearing Distance to Line command
Figure 3. Result of Feature Vertices to Points command



Results/Discussion
                A few issues were encountered during the initial import of data into ArcMap. After a point feature class of the origin points was created, a world imagery basemap was added and zoomed to the study area's location, but the points were not visible within the study area. Using zoom to layer on the point feature class revealed the points to be located off the west coast of Africa; a fellow classmate's further investigation indicated the X and Y values had been transposed. After this error was corrected, the remaining inaccuracies could most likely be explained by GPS error. The study area sat at the base of a large ridge to the south, which most likely interfered with the GPS signal. Previous groups participating in a similar lab used Google Earth to determine their origin points and, once their transposed X and Y coordinates had been corrected, encountered the same trivial errors found in this lab. That error was corrected with the same level of ease as in this lab, by simply editing the point locations before running the other data management tools.
The features created with the data management tools were compiled and displayed over a world topographic basemap to create a final map layout depicting the origin of each survey within the study area as well as the distance and species of each tree surveyed (figure 4). The data was also used to create a graduated symbols map based on the DBH of the trees (figure 5). The final results are quite accurate, but point three sits slightly south of its actual location and some of the trees appear to be in the middle of the road. The results could have been more accurate if the initial points had been edited using the world topographic or streets basemap instead of the world imagery basemap.

Figure 4. Layout depicting tree species

Figure 5. Layout depicting tree DBH




Conclusion

                In real-life surveying situations, the sample area is often too large for a grid-based survey. In these instances, surveyors often utilize expensive, high-tech equipment such as total stations. However, not all surveyors have access to this equipment, and even when it is available it may fail to function properly. The distance-azimuth survey proves to be an acceptable alternative when the study area is too large for a grid-based sampling method and resources are limited. As seen in technical reports from previous classes, the distance-azimuth method can be utilized even when a survey begins without known coordinates; this only requires that the surveyor locate the origin(s) in Google Earth or some other application from which the coordinates of the location can be extracted.

Tuesday, October 18, 2016

Field Activity 5: Sandbox Survey Part 2, Visualizing and Refining a Terrain Survey

Introduction
                Prior to this lab, each group was to create a terrain inside a sandbox of about one square meter (Figure 1) and was then tasked with devising a sampling scheme and technique for the creation of a digital elevation surface. Due to the small size of the study area, the grid scheme consisted of 6 cm x 6 cm squares, and systematic point sampling was used to gather elevation data at the X, Y intersections of each square. Where elevation changed sharply within a single square, two measurements were taken, and the X or Y coordinate of the second measurement was given a .5 decimal place in the corresponding location. Data was manually recorded on a hand-drawn grid matching the one created over the sandbox. This method organized the data and assisted in data normalization as it was entered into an Excel spreadsheet. Data normalization involves entering data into a database so that all related data is stored together, without any redundancy. The data fields in this Excel file were set to numeric (double) to accommodate the decimal values and keep all data in the same format. X, Y, and Z data for each grid intersection were entered beginning at the first X column, from bottom to top and so forth; this pattern eliminated redundancy and kept all data inside one Excel file. In total, 434 data points were collected in a relatively uniform pattern throughout the entire study area, with some clustering of points in areas of rapid elevation change. These data points can be imported into ArcGIS or ArcScene, where interpolation can mathematically create a visual of the entire terrain based on the known values.
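For illustration, a few hypothetical rows in the convention described above; the half-step X value marks the second measurement taken within a single square (the values themselves are made up):

    x,   y,  z
    3,   7,  -4
    3.5, 7,  -9
    4,   7,  -11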

Figure 1


Methods
                To begin this process, a geodatabase was created; the Excel file, with its fields set to a numerical data format, was imported into the geodatabase, brought into ArcMap as XY data, and then converted into a point feature class. Once a point feature class has been made, the data can be used to create a continuous surface using interpolation methods. Four raster interpolation methods and one performed in a vector environment were tested to determine the ideal representation of the terrain. The four raster methods are Inverse Distance Weighted (IDW), natural neighbor, kriging, and spline. The interpolation done in a vector environment is called TIN.
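In script form, the import might look like the following arcpy sketch (the geodatabase path, table name, and field names are assumptions):

    import arcpy

    arcpy.env.workspace = r"C:\SandboxSurvey\terrain.gdb"  # hypothetical path

    # Build an in-memory XY event layer from the imported table, reading
    # elevation from the z column, then persist it as a point feature class.
    arcpy.MakeXYEventLayer_management("elevation_table", "x", "y",
                                      "xy_events", in_z_field="z")
    arcpy.CopyFeatures_management("xy_events", "sample_points")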
IDW and spline are deterministic methods: they are based directly on the set of surrounding points or on a mathematical calculation to produce a smooth, continuous surface. IDW interpolation averages the values of known points in the area of the cell being calculated to assign it a value based on proximity to the known values; cells closer to known points are more influenced by those points than cells farther away. The influence of known points can be controlled by defining the power: a larger power places more emphasis on near points and results in a more detailed surface, while a lower power also weights distant points to produce a smoother surface. Produced values can also be varied by manipulating the search radius, which controls the distance of points from the cell being calculated. The IDW method would not be advantageous for random sampling methods, since areas with a higher density of samples would be better represented than areas with fewer points. Spline uses a mathematical function, and the resulting surface passes exactly through each sample point, making it an ideal method for large samples. The function creates a smooth transition from one point to the next and is ideal for interpolating gradually changing surfaces such as elevation, rainfall levels, and pollution concentration. Since this method forces the surface through each sample point, it is not ideal for sampling techniques that result in clustered points or in few sample points relative to the entire area, such as random and/or stratified sampling.
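The IDW weighting just described can be written compactly: the estimate at an unknown location $s_0$ is a weighted average of the $n$ known values $z(s_i)$, with each weight falling off with the distance $d_i$ raised to the power $p$:

    \hat{z}(s_0) = \frac{\sum_{i=1}^{n} z(s_i) / d_i^{\,p}}{\sum_{i=1}^{n} 1 / d_i^{\,p}}

A larger $p$ concentrates the weight on the nearest points; $p = 2$ is a common default.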
Kriging is a multi-step, geostatistical interpolation method that creates a prediction of a surface based on the sample points and also produces a measure of the accuracy of those predictions. It assumes a correlation between points based on the distance and direction between them. Kriging is similar to IDW in that it weights points within a certain vicinity, but it goes beyond IDW by taking the spatial arrangement of the sample points into account, in a process called variography. While kriging goes a step beyond IDW to create a more accurate portrayal of a surface, like IDW it is not an advantageous method for sampling techniques that produce clusters of sample points, such as random sampling.
Natural neighbor interpolation utilizes a subdivision of the known points closest to the unknown point and does not deduce patterns from the subdivision data; hence, if the entered sample does not capture the pattern of a ridge, valley, or similar feature, the interpolation will not create it. Natural neighbor interpolation is advantageous for samples collected using a stratified sampling method, due to the subdivisions within the study area. Random sampling, however, may represent a specific surface poorly, in which case natural neighbor would produce a poor representation of the surface as well.
TIN, or triangulated irregular network, connects the sample points into a series of the smallest possible, non-overlapping triangles, typically using a Delaunay triangulation, in which the circle circumscribing each triangle contains no other sample point. The triangle edges cause discontinuous slopes throughout the produced surface, which may result in a jagged appearance, and TIN is not an advisable interpolation method for areas far from sample points.
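The triangulation step itself is a standard operation outside of ArcGIS as well; for instance, a minimal sketch using SciPy's Delaunay implementation (the coordinates are illustrative):

    import numpy as np
    from scipy.spatial import Delaunay

    pts = np.array([[1, 1], [1, 19], [19, 1], [19, 19], [10, 10]], dtype=float)
    tri = Delaunay(pts)   # non-overlapping triangles whose circumcircles
                          # contain no other sample point
    print(tri.simplices)  # vertex indices of each triangle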
Given the comprehensive and uniform collection of sample points across the terrain, each of these methods should produce a relatively accurate portrayal of it. However, the more densely sampled areas may end up over-represented in every interpolation method except spline, because spline passes the surface through each data point and produces a smoothed surface in between the points. Further reading in the ArcGIS help indicates that spline is also ideal for large numbers of sample points and gradually changing surfaces, like elevation, suggesting it would be the most suitable method for this survey.
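All five surfaces can be generated from the point feature class with a few arcpy calls. The sketch below assumes the Spatial Analyst and 3D Analyst extensions are licensed and uses hypothetical dataset and field names:

    import arcpy
    from arcpy.sa import Idw, Kriging, KrigingModelOrdinary, NaturalNeighbor, Spline

    arcpy.CheckOutExtension("Spatial")
    arcpy.CheckOutExtension("3D")
    arcpy.env.workspace = r"C:\SandboxSurvey\terrain.gdb"  # hypothetical path

    pts, zfield, cell = "sample_points", "z", 0.25  # cell size in grid units

    Idw(pts, zfield, cell, power=2).save("idw_surface")
    Kriging(pts, zfield, KrigingModelOrdinary("SPHERICAL"), cell).save("krig_surface")
    NaturalNeighbor(pts, zfield, cell).save("natneigh_surface")
    Spline(pts, zfield, cell, "REGULARIZED").save("spline_surface")

    # The TIN is built in a vector environment with 3D Analyst.
    arcpy.CreateTin_3d("terrain_tin", "", "sample_points z Mass_Points <None>")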

Results/Discussion
                Prior to this activity, it was necessary to create a sandbox terrain and spatially sample it for elevation values. The entire terrain was only slightly larger than one square meter, so a systematic point sampling of the entire terrain was feasible. Systematic sampling within an X, Y plane consisting of 361 quadrats, 6 cm on a side, resulted in an Excel spreadsheet with 434 data points (link to Excel file). After the Excel file was imported into ArcGIS as XY data and converted into a point feature class (Figure 2A), it produced a collection of sample points that were uniform throughout most of the terrain, with the areas of sharp elevation change more densely sampled. Each of the five interpolation methods described above was applied to the feature class to create a different continuous elevation surface, and each was analyzed for best fit for the survey.
For each method, the elevation from surfaces option was set to floating on a custom surface, with the custom surface set to the corresponding interpolated surface, and each was set to shade areal features relative to the scene's light position. Furthermore, aside from the TIN method, each scene was set to a stretched symbology with the same color ramp; the TIN method only offers elevation symbology and did not offer the same color ramp. First, IDW resulted in elevation changes that were relatively smooth and accurate, but the image appeared “dimply,” similar to a zoomed-in view of a golf ball, across almost the entire image (Figure 3A). Kriging also represented elevation accurately, but it appeared very geometric; it would most likely resemble a fractal if each shape within the image were colored differently (Figure 3B). The natural neighbor method is smooth in most places, but edges are rough and smaller elevation changes are less pronounced (Figure 3C). TIN interpolation produced a very detailed image; however, it is very “blocky,” with angular curvature instead of a smooth surface appearance (Figure 3D). Finally, the spline method produced smooth elevation transitions with the most pronounced representation of the different elevations throughout the terrain. Based on these results, spline appeared to be the interpolation most appropriate for the survey taken (Figure 2B).
Before applying the interpolation methods, having researched only sampling techniques and a little about the First Law of Geography, it seemed the survey performed may have been slightly excessive. On the other hand, a larger survey is ideal, and the time taken to perform the survey was not all that long. After experimenting with and reading about each interpolation method, it can be concluded that the sampling technique and grid scheme were not too excessive. The results from each interpolation method captured most details of the original terrain, and it was clear what was being portrayed in each image.

Figure 2


Figure 3







Summary/Conclusions

                Field-based surveys collect the essential spatial data needed to estimate values for unknown areas in proximity to the collected data, in order to establish an acceptable representation of the entire study area. This activity illustrated the basics of the survey process on a much smaller scale than usual. As in this survey, surveyors need to decide which sampling technique and tools are most suitable for the task. However, the tools used in most field-based studies go beyond string and thumbtacks; they include GPS receivers, total stations, 3D scanners, and UAVs. In larger survey areas, surveyors must also consider temporal aspects. If a survey is a follow-up to a previous one, the surveyors need to consider whether it should be done under the same temporal conditions as the previous survey. Alternatively, if a new survey is to be conducted, the surveyors need to determine the best temporal conditions for it. The detail given to this small-scale survey is not always feasible, resulting in a different collection of sample points; this is where the other interpolation methods would yield a more accurate representation of the survey in question.

                Interpolation is a useful tool for visualizing many things beyond elevation. Other gradually changing phenomena, such as water table levels and rainfall, can be interpolated, as can demographic data, such as a survey of HIV distribution in Africa. Ecological surveys, such as forest type, could also be interpolated. Interpolation methods are crucial in many fields for extracting important information that would otherwise be extremely difficult or impossible to survey completely.

Tuesday, October 11, 2016

Field Activity 4: Sandbox Survey; Creation of a Digital Elevation Surface using Critical Thinking Skills and Improvised Survey Techniques

Introduction
Waldo Tobler's First Law of Geography states that “Everything is related to everything else, but near things are more related than distant things” (Esri GIS Dictionary). This statement is the basis of sampling: data collection from a representative population or sample area used to investigate a whole population. Well-chosen spatial samples can be used to create an acceptable description of the Earth's surface. Many factors must be taken into consideration in order to create a well-chosen spatial sample, including sample size (larger samples tend to be more representative of the whole), how to minimize bias when sampling, and which sampling technique is most appropriate for the area being sampled. A common scheme for spatial sampling is based on points within a grid framework, and there are three primary sampling techniques: random, systematic, and stratified. Each technique has benefits and disadvantages, as illustrated in the sketch below. Both random and systematic sampling can be sub-classified into point, line, and area sampling: point sampling involves data collection at X, Y intersections or at the center of the grid square correlating to an X, Y intersection; line sampling involves data collection at points along a line; and area sampling involves data collection within grid squares. Random sampling removes bias but can result in a poorly representative sample due to clustering of sample points. Systematic sampling involves evenly distributed points; however, bias can lead to over- or under-representation of a certain pattern. Some study areas have a known proportion of specific subdivisions; in these cases stratified sampling is the best technique. Stratified sampling distributes sample points in proportion to each subdivision, and the samples within each subdivision can be taken randomly or systematically. The goal of this activity is to create a terrain in an approximately one-square-meter area and then determine the most fitting sampling scheme and technique to obtain sufficient data for the creation of an accurate digital elevation surface of the terrain.
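The difference between the random and systematic point techniques is easy to see in a short sketch (the grid dimensions are illustrative):

    import random

    cols, rows = 19, 19

    # Systematic point sampling: one sample at every grid intersection.
    systematic = [(x, y) for x in range(1, cols + 1) for y in range(1, rows + 1)]

    # Random point sampling: the same number of samples, but the locations
    # are drawn at random and may cluster, leaving gaps elsewhere.
    random_pts = [(random.uniform(1, cols), random.uniform(1, rows))
                  for _ in range(len(systematic))]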

Methods
                It was decided that systematic point sampling, recording elevation (Z) at each X, Y intersection, was the most appropriate for this project, as the whole study area was only slightly larger than a square meter and could be sampled efficiently in a relatively short time frame. A similar alternative would have been to record Z at the center of each grid square; however, in some grid squares a sharp change in elevation near the center would make it difficult to determine which measurement to take. The stream box used was 114 cm x 114 cm; from these measurements it was determined that a grid of 6 cm x 6 cm squares would capture sufficient data points for mapping. These divisions result in 361 squares measuring 36 cm² each. Meter sticks were used as guides and thumbtacks were used to mark each 6 cm point on all four sides (figure 1), and string was wrapped around the tacks to form a physical grid (figure 2) to ensure accurate location of the Z points recorded. The top of the stream box was designated as sea level; since the string grid sat level with the top of the stream box, the grid could also serve as a guide for sea level during data collection. Where the terrain exceeded sea level, the string was pulled into the terrain, without damaging the structures, so that all string sat at sea level. After the grid system had been completed over the terrain, an origin was set at the southeast corner of the stream box. To facilitate data collection, a similar grid of smaller proportions was drawn on paper and the Z value for each X, Y coordinate was recorded in its corresponding box. In cases where the terrain within an individual grid square changed sharply, two measurements were taken, and, depending on which plane the change occurred along, the X or Y value was given a decimal value ending in .5 between the whole numbers. The collected data was then entered into an Excel spreadsheet, beginning at the origin and working up each Y column along the X axis.



figure 1
figure 2



Results/Discussion
                After the grid squares with sharp contrasts in elevation were split into two separate Z values, the final number of data points collected was 434. The minimum Z value was -13, the maximum was 10, the mean was approximately -2.151, and the standard deviation was approximately 4.146. The X and Y values both ranged from 1 to 19. The sampling method chosen at the beginning of this project, and ultimately used for data collection, appears to have been the best method; however, fewer points could have been collected. The primary issue turned out to be the grid squares with sharp changes in elevation; this was addressed by collecting two Z values within those squares and, depending on which plane the change occurred along, giving the X or Y value a decimal ending in .5. Another issue was the quantity of data points to be collected. The process was expedited through the delegation of tasks: one person held the measuring stick, another read the value aloud, and a third recorded the data.
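As a quick check, the same summary statistics can be reproduced from the spreadsheet with a few lines of pandas (the file and column names are assumptions):

    import pandas as pd

    df = pd.read_csv("sandbox_elevations.csv")  # hypothetical export of the Excel file
    print(df["z"].describe())  # count, mean, std, min, max, and quartiles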

Conclusions
                The sampling technique used in this project was slightly tedious, but a majority of the points taken each represent a 36 cm² section of the whole study area, and the even distribution of points gives an unbiased, more accurate representation of the entire study area. Sampling is an effective way to create a reasonably accurate portrayal of spatial features on the Earth's surface, because surfaces that are closer together tend to be related to one another, and data about one surface can be used to predict the nature of another surface close by. Sampling a surface created in a small area provides experience that will help with spatial sampling on a larger scale; it touches on the types of problems that can be encountered and the methods that can be used to solve them. After learning about the First Law of Geography, it seems the resulting dataset from this project was slightly excessive, and fewer data points could have been collected while still recreating the surface accurately. At the same time, the collection of data points was not costly and should result in a more accurate portrayal of the terrain that was created.

Tuesday, October 4, 2016

Hadleyville Cemetery Part 3

Introduction
The town of Pleasant Valley would like to preserve some of the town's history, as well as prevent the sale of an already occupied plot, through the development of a detailed and accurate map of the Hadleyville cemetery. This project comes with a few obstacles: all original records and maps have been lost, the monuments date back to 1865, making some difficult or impossible to read, and some burial plots are not marked at all. A simple map could be created to indicate the location of marked plots within the cemetery; however, this would not include any attribute information correlating to the individual plots. A GIS would include such attribute information as well as the spatial data of the cemetery, preserving information about the cemetery more comprehensively. A UAS capable of stable, high-resolution photography was programmed to take a collection of aerial images of the cemetery in order to produce an optimal basemap. The class used notebooks to record all observable attribute data pertaining to each individual burial plot and cellular phones to photograph the same data that was recorded manually, to ensure accuracy. This method of field data collection can be used to create a basemap in a GIS and then digitize plots with their associated attribute data included.

Study Area
                Pleasant Valley's Hadleyville cemetery is located in west-central Wisconsin, in Eau Claire County. The town of Pleasant Valley is in the southwest corner of the county (figure 1). Data was collected on a warm, sunny afternoon in early fall.
 
Figure 1
Methods
                A UAS from the Phantom series was used for the aerial imagery. According to www.dji.com, the entire Phantom series contains a gimbal with 3-axis stabilization capabilities, and all models are designed for high-level aerial photography. The class used notebooks to record data and cellular phones to photograph the same data that was recorded manually. The notebooks provide a hard-copy backup of the digitally collected data. The data recorded in notebooks is also gathered by the human eye, which provides a different perception of the data as it is collected and can be used together with the photographic images to ensure accurate interpretation. The use of a survey-grade GPS receiver was attempted; however, collecting a complete list of points would have been extremely time consuming, and no signal could be found under trees. This receiver was selected over a mapping-grade receiver because a mapping-grade GPS is generally accurate only to within a meter, and many plots are closer together than that. In order to consolidate all collected data and include the same attributes for every record, an Excel spreadsheet was created in a shared Google document. A cemetery grid pattern with lettered columns and numbered rows was created over a basemap to create point IDs for each plot and to ensure consistent attribute input into the Excel file from each contributing member. A primary attribute to be included was the name of the person occupying the plot; initially, the presence of joint tombstones caused the problem of having more than one name attached to one point ID. This was solved by entering each name under a separate point ID and including a notes field to denote such cases. When the Excel file was complete, it was saved as a CSV file to be imported into ArcGIS. The GPS accuracy of the UAS enabled it to obtain a clear, centered image of the entire cemetery, and a moderate pixel resolution allowed zoom levels close enough to verify the presence of a difficult-to-view monument and to optimally place two points on one monument. While the points were digitized, the point ID attribute was also entered; the completed CSV table of attributes was then brought into the map, joined to the point IDs of the newly digitized points, and exported as a new layer to make the join permanent (figure 2). From this result, a layout was created with symbology representing alphabetical groupings of the last names (figure 3), which should make locating a loved one easier for visitors.
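The join step can also be scripted; the sketch below uses hypothetical layer, table, and field names and makes the join permanent by exporting the joined layer:

    import arcpy

    arcpy.env.workspace = r"C:\Hadleyville\cemetery.gdb"  # hypothetical path

    # Join the CSV attribute table to the digitized points on the shared
    # point ID, then export so the joined fields become permanent.
    arcpy.MakeFeatureLayer_management("grave_points", "graves_lyr")
    arcpy.AddJoin_management("graves_lyr", "point_id", "plot_attributes", "point_id")
    arcpy.CopyFeatures_management("graves_lyr", "grave_points_joined")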

Results/ Discussion
                The consolidation of data into a communal Excel file containing normalized data fields greatly improved the efficiency of creating the GIS. This task would otherwise have been much more complicated, because data collection was completed by numerous groups and in different manners. Had the survey-grade GPS been used to obtain a complete list of coordinates for every plot in the cemetery instead of manual data collection based on a basic diagram, the project would have taken much longer still. There is still room for error given the lack of a GPR system to locate unmarked burial plots; however, visual observation confirmed that plots lacking a tombstone tended to have some sort of indication that they were occupied. Data collection could have been improved if a normalized data list and a basic diagram had been drawn up before collection began, with teams assigned to specific areas of the diagram for data collection. This would have reduced the initial confusion between groups and reduced the odds of missing data.
Figure 2


Figure 3



Conclusion
                Overall, the use of digital and observational field data collection secured a successful final project. Each method of data collection contributed to gathering all the data necessary to meet the goal of this project: to create a detailed and accurate map of the Hadleyville Cemetery. None of the data collection formats used could have singlehandedly collected everything necessary to create the GIS. There were a few broken monuments piled along the edge of the cemetery, missing information associated with a few plots, and a slim possibility that time has erased all evidence of an occupied plot. However, prior to this project the town of Pleasant Valley had no maps or records of the Hadleyville cemetery, and this project resulted in the identification of 158 occupants within this one-and-a-half-acre cemetery. The resulting project can be edited in ArcGIS to digitize future additions or update existing data; it can also be used to create an interactive web map for the public.