Frequently Asked Questions
Best Practices FAQs
How do I fix an edge-of-field bioreactors tool error? I believe “sheds_poly” corresponds to the polygon feature class of the drainage areas of each bioreactor site, which is an internal calculation of the tool. I tried saving the output Bioreactor shapefile to a new file geodatabase (.gdb) to see if perhaps there was not enough memory in the current file geodatabase, but the same error resulted. Only 1 field boundary was identified in this particular watershed as requiring tile drainage. Any thoughts?
Answer:
Each ‘sheds_poly’ polygon must be at least 10 acres. If no polygons remain in the feature class after the small ones are removed, the RepairGeometry method will likely fail. We should put a test in the code for this. Something like…
if int(arcpy.GetCount_management(sheds_poly).getOutput(0)) < 1: sys.exit(0)
You could try modifying line 161 to remove the in_memory assignment, then look at the sheds_poly in the default.gdb or scratch folder to see if there are any features there…
sheds_poly = arcpy.RasterToPolygon_conversion(sheds, "sheds_poly", "", "Value")
- Response: Your suggestion of modifying line 161 in the Bioreactor code worked. After running into additional errors with some of the other practice identification tools, I installed ArcGIS 10.4 and the tools I’d previously had problems with ran fine.
What is the unit of measurement for the NRCSWidth in the Riparian Function Tool Outputs?
Answer:
The NRCS Width is in meters. It is determined by converting the local runoff area from hectares to square meters, multiplying by 0.02 (2% of the contributing area to the buffer, as recommended by NRCS), and then dividing by the buffer length, 250 meters by default.
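That calculation can be sketched in plain Python (the function name and its default are illustrative, not taken from the ACPF source):

```python
def nrcs_width_m(contributing_area_ha, buffer_length_m=250.0):
    """Approximate NRCS buffer width in meters.

    Converts the contributing (local runoff) area from hectares to
    square meters, takes 2% of it per the NRCS recommendation, and
    spreads that area along the buffer length.
    """
    area_sq_m = contributing_area_ha * 10000.0  # 1 ha = 10,000 sq m
    return (area_sq_m * 0.02) / buffer_length_m

# A 50 ha contributing area with the default 250 m buffer length:
# 50 * 10000 * 0.02 / 250 = 40 m
print(nrcs_width_m(50))
```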
I am working to hydro-condition a watershed that has a heavily developed area. Sometimes water goes into storm ponds so I leave that impeded flow alone, but sometimes water flows into underground systems where I can’t find the output. In that situation, how should I handle incorrect impeded flow?
Answer:
Is the urban area at the mouth of the stream or in the middle of the watershed? If the urban area has no agricultural areas between itself and the mouth of the stream, no ag practices will be placed in those areas anyway, so it'd likely be best to just use an educated guess to make the stream network look plausible and concentrate on the ag areas. If the stream network flows through the urban area and then through ag areas afterwards, you might contact the city to see if they have any storm water infrastructure GIS files, particularly storm sewer mains. If so, you could simply burn the mains in as cutlines, since the inlets/outfalls will fall along the mains. Otherwise, does the SIGGIS extension help locate any of the outfalls?
Image from the user: The original DEM is in UTM15 and the ACPF geodatabase is in UTM14. I changed the projection of the DEM to UTM14 to match.



I've run into this when my flow network did not match up with the flow direction and accumulation rasters after running the manual cutter/dam builder. We encourage you to re-run the flow network tool after you finish making cuts.
- Response: We did re-run the flow network tool following the cuts. There is a HUC8-level breachline file we have to work from; we clip it down to the HUC12 level, project it to UTM14, and reproject the DEM to UTM14 to match the new ACPF geodatabases as well. I am thinking there is something off with how the data is being projected. We did all the same steps using the previous generation of geodatabases and everything ran just fine.
If you believe that projecting the DEM is causing the trouble, you could leave the DEM in its native projection and project the ACPF fgdb to match. There is a 'Project ACPF Workspace' tool in the Utilities tool drawer. This tool will project the complete contents of your ACPF workspace to a projection of your choice...UTM zone 15, I am guessing...to the named output folder. I would be interested in hearing if your results are any different. Using this tool would help answer any questions you have about projecting DEMs.
I am working on the Precision Conservation Practice Siting Tools and have run my files through the Depression Identification Tool. Now, I am trying to go through each polygon to see which are true or artificial depressions and having trouble. How do I know which should be deleted?
Answer:
Images from the user: The first image is of some depressions that appear to be the result of drainage ditches. The second image looks like it is a true depression. Do you think this is correct? Should the drainage-ditch ones be deleted?


I took a look at your images and I believe your interpretation is on track. That said, I have to wonder about your DEM. The very sharp angles and the interior polygons would suggest a very high-resolution DEM, or maybe a DEM of poor quality. While I would expect some of those interior polygons…I call them ‘knots’…I think your images show a lot of them. I have attached an image below that shows depressions from a 3m DEM that has been conditioned somewhat. There is a tool in the ACPF>Utilities drawer that does some DEM conditioning that might be helpful. See DEM: Pit Fill / Hole Punch.

Response: I did have a 1m DEM, but I resampled to a 2m DEM as suggested and then used raster calculator so that the z-units were in cm. Do you think these 'knots' are going to cause problems? Should I simplify the polygons?
- No, I don’t think they will cause problems…but it may be indicative of the quality of your elevation model. You may want to run the Pit Fill / Hole Punch tool on this and inspect the output. It may provide a better model…less bumpy. The result would likely be more regularly shaped depressions.
My DEM is currently in the NAD 83 UTM zone 16 projection which is the correct zone for our area and matches the projection of the data downloaded from the ACPF website. It says the vertical accuracy is 6.24 cm and the horizontal accuracy is 0.6 m. Should I resample my raster to have 7cm z-factor and 1-meter cell size?
Answer:
Before resampling, take a look at your settings. In ArcMap in the table of contents, right click on the DTM/DEM you are using and select properties. Click on the Source tab and email back with the following information.
- Cell Size
- Spatial Reference
- Linear Unit
Also, within the table of contents, can you share the high and low values of your DTM/DEM? This will help us determine whether the units are in meters or feet. If you are using UTM Zone 16, then your horizontal units should be in meters. The cell size will tell us whether it is a 1-meter cell, a 3-meter cell, or another cell size. Vertical accuracy relates to the amount of data collected when the lidar is flown, not to the cell size. Please share the above information and that will help us with your questions.

Also: you are correct on the projection information. However, the x,y units and z-units of your DEM are not the same as horizontal and vertical accuracy. If you are in a UTM projection, your x,y units are likely in meters. If the horizontal accuracy of your data is 0.6 m, you are “able” to resample your DEM to a 1-meter resolution. However, we often find that this is too detailed, and would suggest you resample to a 2-meter DEM. Once you do this, go into the properties of your raster; the cell size should read 2,2.

Now for the z-units. You need to know what units (feet, meters, or cm) your DEM z-values are currently in. This information is contained in the metadata from wherever you obtained the data, and is NOT the same as vertical accuracy. As previously mentioned, you can get at this information by looking at the range of values in your DEM. If your z-units are in meters, a single grid cell may have a value of 246.57, which means the elevation at that cell is 246.57 meters.

What I mean by converting the z-units to cm integer is to take the entire raster, multiply it by 100, and convert to integer. It will look like this in raster calculator: Int(inDEM * 100.00). In the output raster, the 246.57 value will now be 24657 cm. The above formula applies only if your z-units are in meters. If they are in feet, you would multiply by 30.48 rather than 100.
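The per-cell arithmetic behind that raster calculator expression can be sketched in plain Python (the function name is illustrative; the real conversion happens in raster calculator or arcpy, and note that rounding to the nearest integer is used here to avoid floating-point truncation artifacts, where raster calculator's Int() truncates):

```python
def to_cm_integer(z_value, z_unit="meters"):
    """Convert a single elevation value to integer centimeters.

    Mirrors the raster calculator expression Int(inDEM * 100.00)
    for meter z-units, or Int(inDEM * 30.48) for foot z-units.
    """
    factors = {"meters": 100.0, "feet": 30.48}
    return int(round(z_value * factors[z_unit]))

print(to_cm_integer(246.57))         # 246.57 m  -> 24657 cm
print(to_cm_integer(100.0, "feet"))  # 100 ft    -> 3048 cm
```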
Sometimes I've seen DTMs, or digital terrain models, represented within ArcGIS as TINs (triangular irregular networks) or terrains (a large version of a TIN introduced a while back to handle big elevation data sets like lidar). I've been collecting Wisconsin lidar data sets and have seen some with a DTM folder containing TINs in either Esri geodatabase or AutoCAD dwg CAD file formats. I think of DEMs, or digital elevation models, as regularly gridded sets of elevation values in some type of raster file format, like .img, .tif, or something else.

The ACPF requires a DEM in raster gridded format, not a TIN. Since each HUC12 database is contained in a file geodatabase, the DEM should be added to this fgdb, and any output files will also be saved back to the same fgdb. If this suggested format is followed, rasters will always be in a file geodatabase raster format. We have done limited testing of writing the outputs to a folder (as a .tif, for example). It seems to work, but we have not thoroughly tested it.

I’m not sure there’s much difference between a DTM and a DEM. A DEM is obtained from the “last return” points of the lidar collection; those point elevations are then converted into a gridded raster surface, which is the DEM. I believe we came across this before and decided that a DTM is just fine, as long as it is in raster format. You will still see highways and ramps in a DEM or DTM.
The z-factor you enter into the Pit Fill / Hole Punch process is the depth of depression you want to fill prior to main terrain processing. The value given in the documentation is tied to the overall error of the original lidar collection. In essence, it says: fill all the little depressions that are less than 18 cm deep, as they could be artifacts of the lidar interpolation process. If your DEM horizontal and vertical units are in meters, it would be correct to put in 0.18 as the maximum fill depth. If your vertical units are in feet or centimeters, you would need to adjust that number based on the conversion factor. Here is some additional info on DEM preparation and z-factor for use with the ACPF:
- ArcGIS does not provide any way to see the elevation (z) units. That information is stored in the metadata from wherever you got the data from.
- You should definitely resample your DEM so that your x,y units are in WHOLE meters. If you have a 2 meter DEM, your x,y units should read 2,2. Floating point values like .9144 will slow things down tremendously and may cause problems.
- Do you know what coordinate system your DEM is in? We store all the ACPF data in UTM NAD 83, and all input files should be in the same coordinate system. If your DEM is not in UTM, I would suggest first projecting your raster to UTM, then resampling so that your x,y units are in WHOLE meters. Next, figure out what your z-units currently are, then convert them to centimeter integer (we recommend cm integer z-units for faster and more reliable processing). You can do this in raster calculator (i.e., if your z-units are currently in feet, you would multiply the whole raster by 30.48 to convert to cm integer). You would then have x,y units in whole meters and z-units in cm. Your z-factor would then be 0.01.
Preparing your DEM and ensuring you understand what format it is in is perhaps the most critical part of the entire process. You want to make sure your data is in the correct format before you start running the tools. The z-factor forms the relationship between XY coordinate units and Z value units. If your Z units and XY units are both in meters, the z-factor = 1. If Z units are cm, the z-factor is 0.01. The fill depth determines the depth of small depressions that are filled.
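A minimal sketch of that relationship (the function and unit names here are illustrative, not an ACPF API):

```python
def z_factor(xy_unit, z_unit):
    """Ratio of one z-unit to one xy-unit: cm z-values on a meter
    grid give 0.01, matching the guidance above. Each supported
    unit is expressed as its length in meters."""
    meters_per = {"meters": 1.0, "centimeters": 0.01, "feet": 0.3048}
    return meters_per[z_unit] / meters_per[xy_unit]

print(z_factor("meters", "meters"))       # 1.0
print(z_factor("meters", "centimeters"))  # 0.01
```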
While it would be possible to use a multipart polygon as a field, it is not recommended. Many of the tools in the ACPF use the individual field boundaries to calculate statistics…like mean field slope. Calculating a mean field slope across multiple, disparate polygons would likely return faulty or inconsistent results. I would strongly encourage you to use a single-part polygon structure for field boundaries.

Regarding the assignment of new land use information, please look at the Update Edited Field Boundaries tool in the Utilities tool drawer. This tool will update the Crop History (CH) and Land Use (LU6) tables using an edited field boundary feature class and the existing land use data (wsCDL20xx) in the file geodatabase (fgdb). It is intended to allow users to refine the watershed’s field boundary feature class and update the relationship between the feature class and the Crop History and Land Use tables. This is necessary because the relationship between the feature class and the tables is based on the FBndID field, a unique field identifier. The FBndID field will be corrupted if the field boundaries are edited, and the relationship will no longer be valid.

You should also take a look at the Get NASS CDL data by Year tool. This tool allows the user to add individual years of NASS Cropland Data Layer data to the watershed’s fgdb. The currently available HUC12 database carries NASS CDL data for 2009-2014. You can use this tool to add new (2015) or historic (pre-2009) data to the fgdb. If you update the NASS CDL data holdings and then use the Update Edited Field Boundaries tool, the Crop History table will be expanded to hold the new data and the LU6 table will use the most recent 6 years of data. Together these tools enable the user to maintain the most current field boundaries and land use information. Let me know if you have any questions.
My boundary and buffer polygons keep getting deleted. In the hydro-conditioning process I am running the D8 Processing Tool, running the PD Flow Network, running the Depression Depth Tool, adding cutlines feature class, creating cutlines and then running the Manual Cutter Tool. Afterwards, when I run the PD Flow Network or the Depression Depth Tool after making edits my boundary and buffer lines get deleted. Why is this happening?
Answer:
I suspect that as part of your editing process you are inadvertently deleting features from the watershed boundary (bnd) and buffered boundary (buf) feature classes. If you set the feature class that you intend to edit (cut-lines or dam-lines) as the only selectable feature class, this should prevent you from including the bnd and buf features in your selection and then deleting them. You can set this condition individually for each feature class by right-clicking on the TOC entry and finding the Selection tab…or use the List By Selection tab at the top of the TOC to manage the selectable features.
Other FAQs
Topic: Database Prep
Do you have any fields that do not overlap with the DEM at all? The script is failing when attempting to determine the approximate number of 1-meter contours that can exist in each field. It does this by finding the total elevation range in each field, then dividing by 1 meter (and rounding to the closest whole number). Instead of returning a value, a “None” value is being returned (essentially NODATA). This can happen if a field has no elevation values to go by (i.e., it lies entirely beyond the DEM). Moving forward, I can try to account for this in the script. We recommend clipping your DEM to the extent of the buffered watershed boundary, and I have not seen this problem as of yet.
- Response: Turns out, there were field boundaries that were outside of the buffered (DEM) area. Once the field boundary feature class was updated to include only those that intersect the DEM, the tool worked properly.
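The contour estimate described in this exchange can be sketched as follows (the function and parameter names are illustrative; the None guard mirrors the failure mode for fields that lie entirely outside the DEM):

```python
def contour_count(z_min, z_max, interval_m=1.0):
    """Approximate number of 1-meter contours in a field.

    Takes the field's minimum and maximum elevations (meters),
    divides the range by the contour interval, and rounds to the
    nearest whole number. Returns None when the field has no
    elevation values, e.g. it falls entirely outside the DEM.
    """
    if z_min is None or z_max is None:
        return None  # field does not overlap the DEM
    return round((z_max - z_min) / interval_m)

print(contour_count(246.2, 251.9))  # 5.7 m range -> about 6 contours
print(contour_count(None, None))    # field outside the DEM -> None
```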
Topic: Documentation
I’ve never seen a flow path randomly jump out of a channel without some kind of impeded flow causing the behavior. I would suggest using aerial images to help show what is happening. If cuts are not working, try placing a dam along the edge of the channel. If the line is straight, I suspect a NoData error in the DEM (a bad mosaic of the original data).
Topic: TauDEM
Image from the user: This is the error message I am getting. I am wondering what the issue is.

TauDEM should run fine with ArcGIS 10.3.1. Have you restarted your computer since installing TauDEM? I’ve seen the path to TauDEM not get added until after a restart. If that doesn’t work, are there any files in the TDProcDir? There should be “TDFlowDir.tif” and “TDFill.tif”. If these exist, can you run the “D8 Contributing Area” tool from the TauDEM toolbox from within ArcMap, using “TDFlowDir.tif” as the input? Does this run successfully? You can also try running it in the foreground.
- Response: Solved. I had to reinstall the most up-to-date version of TauDEM on my machine. Simple fix. TauDEM 5.37 for 64-bit. It cleared everything up.