Whitebox Workflows for Python v1.2 User Manual
Written by John Lindsay, PhD
© 2022-2024 Whitebox Geospatial Inc. All rights reserved.
www.whiteboxgeo.com

Introduction

What is Whitebox Workflows for Python?

Whitebox Workflows (WbW) is a Python library for advanced geoprocessing, including more than 400 functions for GIS and remote sensing analysis operations and for manipulating common types of raster, vector, and LiDAR geospatial data. Learn more about WbW at www.whiteboxgeo.com.

WbW allows you to write Python scripts for automated data processing in areas such as geographical information systems (GIS), remote sensing (image processing and LiDAR point cloud processing), digital elevation model (DEM) analysis, spatial hydrology, stream network analysis, and many related spatial analysis fields. WbW is developed at Whitebox Geospatial Inc, a Canadian-based geomatics company with deep roots in the geospatial industry and academia (Whitebox started at the University of Guelph).

While WbW is free to use, any financial support that you can provide the project is greatly appreciated. You can also support the project through the purchase of a license for the professional-tier product, Whitebox Workflows for Python Professional (WbW-Pro), which includes many additional geoprocessing functions.

The WbW codebase is based on the open-source project WhiteboxTools Open Core (WbOC), also developed at Whitebox Geospatial Inc. Compared with WbOC, WbW has a number of advantages for Python-based geoprocessing. The largest difference between the two projects is that WbW is compiled as a Python native extension module, much like NumPy and SciPy, whereas WbOC is a command-line application. Because WbW is a Python extension module, it affords a much deeper level of interaction with geospatial data, allowing users to manipulate raster, vector, and LiDAR data objects directly. This considerably reduces the need to read and write files, significantly improving the performance of complex workflows with many intermediate steps and reducing the wear on system hardware.

Interacting directly with spatial objects also means that WbW has a more natural scripting style, providing a much richer geoprocessing environment. For example, you can use Python directly for raster map algebra, rather than a raster calculator or individual tools (e.g. add, divide, etc.):
result = (raster1 + raster2) / 2.0

WbW also has an improved memory model for raster data. With WbOC, raster data are always stored in memory as 64-bit floats, which can lead to large memory requirements. WbW, however, stores rasters in computer memory using their native data format; a raster storing 16-bit integers or 32-bit floating point values will require substantially less memory.
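
As a quick illustration, the sketch below reads a raster and reports the data type it is stored with in memory (the file name is a placeholder for any raster in your working directory):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
dem = wbe.read_raster('dem.tif') # placeholder file name

# The raster is held in memory using its native storage type (e.g. F32 or I16),
# not 64-bit floats, so its memory footprint reflects the source data format.
print(dem.configs.data_type)
print(f'{dem.get_data_size_in_bytes()} bytes in memory')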

Unlike WbOC, WbW is not open-source; however, it is freeware and does not require you to purchase a license to use it. In addition to the standard version of WbW, there is also a professional-tier version called Whitebox Workflows for Python Professional (WbW-Pro). A license for WbW-Pro, which can be purchased from www.whiteboxgeo.com, allows users to access dozens of extra functions for advanced geospatial analysis.

Getting Started

Installing Whitebox Workflows

You'll need to install WbW on your computer or virtual environment to use it. If you have Python installed on your computer, simply type the following line at the command prompt of your terminal application:

pip install whitebox-workflows

If your default Python is v2.X, you may need to use the pip3 command instead. You may wish to use a Python virtual environment (venv, Conda, etc.) to test the whitebox-workflows package but this isn't necessary.

WbW is supported on Windows (64-bit), Mac (Intel and ARM) and Linux (x86_64).

If you have an older version of WbW installed and want to update it to the latest version, simply type the following command:

pip install whitebox-workflows -U

Updating your WbW to a newer version won't impact your license. You should update every time a new version is released.

Running your first script

Now that we have WbW installed, it's time to start using it. Create a new Python file called wbw_test.py and type the following script into the file.

wbw_test.py

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

print(wbe.version())

Notice the underscore in the WbW library name whitebox_workflows compared with the hyphen used in the pip package name whitebox-workflows.

The above script does a few things. First, it imports the WbEnvironment class from the whitebox_workflows module. Next, we set up the Whitebox Environment. WbEnvironment is the most important class contained within the whitebox_workflows module. It's the class that is used to manipulate environment settings, e.g. setting the current working directory. Importantly, all of the tool functions for processing spatial data are methods of the WbEnvironment class, as are the functions for reading and writing spatial data. Lastly, the version() method is called, printing information about our installed version of WbW.

When you run the above script, it should print something similar to this:

Whitebox Workflows for Python v1.1.3 by Whitebox Geospatial Inc. 
Developed by Dr. John B. Lindsay, (c) 2022-2024

Description:
Whitebox Workflows for Python is an advanced geospatial data analysis platform 
and Python extension module.

If the script runs without error, we can be assured that WbW has been installed correctly on our system.

Now let's modify the script above to show us all of the tool functions that we have available to us:

wbw_test.py

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

wbe.available_functions()

Depending on the version of WbW that you're running, you should see an output similar to this:

...
419. user_defined_weights_filter                  420. vector_hex_binning
421. vector_lines_to_raster                       422. vector_points_to_raster
423. vector_polygons_to_raster                    424. vector_stream_network_analysis
425. version                                      426. viewshed
427. visibility_index                             428. voronoi_diagram
429. watershed                                    430. watershed_from_raster_pour_points
431. weighted_overlay                             432. weighted_sum
433. wetness_index                                434. wilcoxon_signed_rank_test
435. write_function_memory_insertion              436. write_lidar
437. write_raster                                 438. write_vector
439. z_scores                                     440. zonal_statistics

The standard free version of WbW contains at least 440 tool functions, while the professional tier (WbW-Pro) contains over 500 tool functions.

The license_type property of the WbEnvironment class can tell you which version of WbW you are currently running.

license_type.py

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
print(wbe.license_type) # prints either LicenseType.WbW or LicenseType.WbWPro

Scripting and type hints

Scripting with WbW is much easier when you use a good Python editor. Python editors, or general purpose programming editors like Visual Studio Code, will provide syntax highlighting, line numbering, and if configured correctly, autocomplete. Autocomplete features in particular are useful when using WbW, and allow you to explore the WbW application programming interface (API).


[Figure: Type hinting in WbW]

This feature can also help the programmer to learn about the arguments that are required by a function as they type.

You may also want to take advantage of Python's built-in help function to learn more about individual members of the WbW API, e.g. help(whitebox_workflows.Raster.con) can be used to display the help documentation for the con (conditional evaluation) method of the Raster class.
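
For example, a minimal sketch of using help with the WbW API (viewshed is just one of the tool functions listed earlier; any other WbEnvironment method can be inspected the same way):

import whitebox_workflows
from whitebox_workflows import WbEnvironment

# Help documentation for the conditional evaluation method of the Raster class
help(whitebox_workflows.Raster.con)

# Help documentation for a tool function, including its arguments and default values
wbe = WbEnvironment()
help(wbe.viewshed)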

WbW has been designed to provide Python editors with type hints and default values of arguments. These features can greatly enhance the programming experience.

WbW and WbW-Pro

There are two versions, or tiers, of Whitebox Workflows: the standard WbW tier and the professional tier, Whitebox Workflows for Python Professional (WbW-Pro). The standard tier of WbW is free and includes over 400 tool functions. While WbW is free to use, any financial support that you can provide is greatly appreciated. WbW-Pro contains all of the same functions as the standard WbW product and adds access to approximately 65 additional tool functions for advanced geoprocessing. WbW is free, while use of WbW-Pro requires users to purchase a license from www.whiteboxgeo.com.

To determine whether a particular tool function is a WbW or a WbW-Pro function, use the whitebox_workflows.is_wbw_pro_function function.

wbw_pro_function.py

from whitebox_workflows import is_wbw_pro_function

print(f"filter_lidar is WbW-Pro function: {is_wbw_pro_function('filter_lidar')}")

You can also see which functions require a WbW-Pro license in the API associated with the WbEnvironment class and in the separate WbW-Pro help documentation.

Learn more about WbW-Pro

Here are some topics specifically relevant to the WbW-Pro tier product:

Registering a WbW-Pro license

Note that you do not need to register a license to use the standard version of WbW; this section only pertains to WbW-Pro.

After you purchase a WbW-Pro license, you'll be redirected to a website providing you with an activation key and a customized license registration Python script like the one below. If you've purchased multiple seats, you'll be provided with a single activation key that contains your purchased number of seats.

Be sure to copy your activation key, or the embedded Python script, before closing the redirect page. You will need this to activate your license.

register_license.py

import whitebox_workflows

# Be sure to replace the key below with your issued key; this one is just an
# example. Also update with your first and last name and email address. Note,
# by running the script, you are agreeing to the terms of the license, found
# on www.whiteboxgeo.com
whitebox_workflows.activate_license(
key="889d88d3c7ccc6c9d3ccc9cad3c8d3ced3cecbc6cbcfd3cbcacdc6c7d3cec9c9cec8c8cfc", 
firstname="Jane", 
lastname="Doe", 
email="jdoe@gmail.com", 
agree_to_license_terms=True
)

To register your license, copy the Python script from the license purchase redirect page into an empty Python file, update your name and email address, and then run the script. Once you have run your license activation script, you will receive an email containing important information about your purchase, your license, and your floating-license user ID.

Note that it is important that you use your correct email address when registering your license; otherwise you will not be able to access your floating-license user ID and will only be able to use the node-locked license on the computer system that you registered WbW from.

Node-locked and floating licenses for WbW-Pro

A purchase of WbW-Pro comes with an equal number of node-locked and floating licenses. For example, if you've purchased two seats, you will be able to register two node-locked licenses on two computers, and you will also be able to check out two simultaneous floating licenses at any time from any computer.

When you register your WbW-Pro license, WbW writes information about your license to a configuration file stored on your computer. This allows you to use WbW-Pro on that computer at any time, even when you are not connected to the Internet. This is a so-called 'node-locked' license (i.e., it is tied to a specific computer). You can use your node-locked WbW-Pro license in the same way as we saw WbW used previously, except that you should now have access to the full set of tool functions:

sample_wbwpro_script.py

from whitebox_workflows import available_functions, WbEnvironment

# Checks for a valid node-locked WbW license stored locally
wbe = WbEnvironment()

available_functions(wbe) # You should now see more than 500 functions listed

# Do some processing here, including calling some WbW-Pro tool functions...

The code above uses a node-locked WbW license, i.e., the license is tied to the machine that it was registered on. Sometimes, you will want to use WbW in other computing environments, e.g. on different computers within a network, or on cloud-based computing platforms like Google Colab. When this is the case, you need to use a floating-license user ID. You will have been provided a unique floating license ID consisting of a three-word string (e.g. 'white-bolting-camel') in an email that was sent to you after registering your license activation code. This string connects your floating license with the WbW-Pro license server and allows you to use WbW-Pro on any computer or computing environment (e.g., Google Colab or Jupyter Notebooks).

Your floating-license user ID is unique to you and you should keep this string private. Sharing your user ID with other people will result in the early termination of your license.

Using your floating license is easy. Instead of using the usual means of initializing a WbEnvironment, you simply need to specify your floating-license user ID as an optional parameter, being sure to check in the license when you have finished:

sample_wbwpro_script.py

from whitebox_workflows import available_functions, WbEnvironment

# Checks for a valid floating license stored on remote server
wbe = WbEnvironment('your-license-id') 
try:
    available_functions(wbe) # You should now see more than 500 functions listed

    # Do some processing here including calling some WbW-Pro tool functions...
except Exception as e:
  print("The error raised is: ", e)
finally:
    wbe.check_in_license('your-license-id')

Using the floating license requires Internet access. If you are unable to access the Internet for a time, you should fall back on your node-locked license instead, assuming you are on the computer used to register the license.

A note on checking in your license: because the floating license uses a check-out/check-in model, you must remember to check in your license after your script has completed its processing. The last thing you do in each script, therefore, should be to call the WbEnvironment.check_in_license method. There is no need to check in your node-locked license.

When we specify our floating-license user ID in the WbEnvironment initializer, the program will communicate with the Whitebox Geospatial Inc. remote license server web app. It will check to see if there is an available seat associated with the specified user ID. Importantly, while your script is running, and until you check in your license, that seat will not be available for use again. This means that it is very important that you successfully check in the license at the end of the script, which is why we call check_in_license within the finally block of a try-finally construct in the script above. This way, even if the script throws an error, we can still be sure that the license will be checked back in and we'll be able to use it again in the future.

If all of the existing 'seats' of a license are unavailable, because they are either in use or haven't yet been checked back in, you will receive an error indicating that 'All floating licenses are currently in use'. The number of 'users' that you specified when you purchased your WbW-Pro license determines how many simultaneous floating-license seats you have to work with.

Sometimes a script may panic, which is an unrecoverable type of error. This happens, for example, if you provide an unexpected input parameter to a function. When this occurs, our checked-out license may not be checked back in correctly. What do we do in this case? After a certain amount of time (usually two hours), an un-checked-in license will be returned to the pool of available licenses. But, of course, that leaves us unable to run further WbW-Pro scripts for those hours. When this occurs, we can call the check_in_license function associated directly with the whitebox_workflows module, i.e. whitebox_workflows.check_in_license('your-license-id'). Note, you should always prefer WbEnvironment.check_in_license('your-license-id') over whitebox_workflows.check_in_license('your-license-id'), because the latter will throw an error if there is no unchecked-in license to check in. However, in the event that a script panic results in a 'zombie' floating-license seat, this function is a usable solution to get you up-and-running with WbW-Pro geoprocessing once again.

from whitebox_workflows import check_in_license

print(check_in_license('your-license-id')) # print to confirm the successful check-in

How much time remains on my WbW-Pro license?

To determine how much time is remaining on your license, you may use the license_info function:

wbwpro_license_duration.py

from whitebox_workflows import license_info

# on the host machine with the node-locked license...
print(license_info())

# or for a floating license...
print(license_info('your-license-id')) # specify your floating-license user ID

Transferring and deactivating WbW-Pro licenses

Licenses are transferable if, for example, you would like to move your node-locked license to a different system. For this, you need to use the whitebox_workflows.transfer_license() function, which will issue you a new activation key with the number of days remaining on your license; you may then use the issued key to register WbW-Pro on the other machine. Transferring a license will deactivate the license on the current computer.

wbwpro_transfer_license.py

from whitebox_workflows import transfer_license

transfer_license()
# Note, this function will print out the information that you will need to register the
# license on another computer, including the required activation key. Also note that
# after the license has been transferred, there may be a printout that says something 
# like, "The data in the license file appears to be corrupt..." This is simply because,
# after transferring the license, WbW-Pro can no longer be used on this current machine
# without registering another license. In fact, if you would like to re-register it 
# on the current computer using the issued activation code, that will work fine too.

Similarly, the whitebox_workflows.deactivate_license() function is used to deactivate a license, although note that this simply removes the license and does not uninstall WbW from your system. To uninstall the software, use pip uninstall whitebox-workflows. The only reason to deactivate a license is if you no longer want to use WbW-Pro.
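
A minimal sketch of deactivating a license on the current machine:

from whitebox_workflows import deactivate_license

# Removes the registered WbW-Pro license from this computer; the whitebox-workflows
# package itself remains installed.
deactivate_license()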

Setting up the Whitebox Environment

We saw in an earlier section that the WbEnvironment class is used to set up the Whitebox geoprocessing environment. This important initialization step should be one of the first things that we do in most WbW scripts.

from whitebox_workflows import WbEnvironment, license_info

wbe = WbEnvironment() # Create a WbEnvironment object, here named wbe.
# We can use this object to provide settings to tools that are run later in the script.
# If you aren't on the computer that you registered WbW-Pro on, you may use your
# floating-license user ID as a parameter, e.g. wbe = WbEnvironment('cute-flying-pig').
# This ID will have been emailed to you upon registration.

# `max_procs` determines the number of processors used by functions that are parallelized. 
# If set to -1, the default, all available processors will be used.
wbe.max_procs = -1

# To limit tools to a certain number of processors, set `max_procs` to a positive whole 
# number less than the number of system processors. This can be important in cloud-based
# and distributed computing environments where you may not want WbW to fully utilize all
# available processors.
wbe.max_procs = 4

# `verbose` determines whether tool output is sent to stdout (`wbe.verbose=True`), or if
# output is suppressed (`wbe.verbose=False`). Tools are often very chatty, outputting
# frequent updates of progress. When you're running long workflows, it can be useful to
# turn this stream of tool output on and off during critical parts of the workflow.
wbe.verbose = True
# Run something critical...
wbe.verbose = False
# Run something less critical...

# Of course, when verbose=False, you will not be able to see any warnings or errors 
# issued by the tool, so this may not be desirable while testing a script.

# You can set and get the working directory as follows:
wbe.working_directory = '/path/to/my/data'

print(wbe.working_directory)

# How much time is remaining on your license?
print(license_info()) # Also takes the optional floating-license user ID.

# The working directory is the current path used to find data resources. When reading and 
# writing data, the working directory is the default location in which the data are read 
# from and written to.

my_raster = wbe.read_raster('image1.tif') # Will search for file "/path/to/my/data/image1.tif"

# You can also use full path names when specifying data.
my_raster = wbe.read_raster('/other/path/to/my/data/image2.tif')

# You can update the working directory multiple times in a script.
wbe.working_directory = '/other/path/to/my/data'
my_vector = wbe.read_vector('vector1.shp') # Will search for file "/other/path/to/my/data/vector1.shp"

Working with Whitebox Workflows

Sample datasets

There are a number of available sample datasets that can be readily used to test Whitebox Workflows for Python. The following is a description of all available datasets:

| Dataset Name | File Name | Description | Compressed Size |
|---|---|---|---|
| Guelph_landsat | band1.tif...band7.tif | 7 bands of a sub-area of a Landsat 5 data set | 10.9 MB |
| Grand_Junction | DEM.tif | A small digital elevation model (DEM) in high relief | 5.8 MB |
| GTA_lidar | GTA_lidar.laz | An airborne lidar point cloud, in LAZ format | 54.3 MB |
| jay_brook | jay_brook.laz | An airborne lidar point cloud, in LAZ format | 76.3 MB |
| Jay_State_Forest | DEM.tif | A lidar raster digital elevation model (DEM) | 27.7 MB |
| Kitchener_lidar | Kitchener_lidar.laz | An airborne lidar point cloud, in LAZ format | 41.6 MB |
| London_air_photo | London_air_photo.tif | A high-res RGB air photo | 87.3 MB |
| mill_brook | mill_brook.laz | An airborne lidar point cloud, in LAZ format | 49.9 MB |
| peterborough_drumlins | peterborough_drumlins.tif | A lidar raster digital elevation model (DEM) | 22.0 MB |
| Southern_Ontario_roads | roads_utm.shp | Vector roads layer for a section of Southern Ontario | 7.1 MB |
| StElisAk | StElisAk.laz | An airborne lidar point cloud, in LAZ format | 54.5 MB |

The data can be downloaded using the download_sample_data function within the whitebox_workflows module. For example,

from whitebox_workflows import download_sample_data, WbEnvironment

wbe = WbEnvironment()

wbe.working_directory = download_sample_data('Kitchener_lidar')
print(f'Data have been stored in: {wbe.working_directory}')

The data will be downloaded to a location within your HOME directory and the download_sample_data function will return the path of the dataset directory. This can be useful for updating the WbEnvironment.working_directory property, as in the script above. The data will be downloaded in a compressed file format (zip) and will be automatically decompressed after the download has completed. The download_sample_data function will automatically time out after ten minutes; you may encounter this limit if you attempt to download some of the larger sample datasets over a slower Internet connection.
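
If you are working over a slow connection, you may want to guard against a failed or timed-out download. A minimal sketch, assuming the function raises an ordinary Python exception on failure (the dataset name is one of the larger samples from the table above):

from whitebox_workflows import download_sample_data

try:
    data_dir = download_sample_data('London_air_photo') # one of the larger sample datasets
    print(f'Data have been stored in: {data_dir}')
except Exception as e:
    # The download times out after ten minutes; retry, or pick a smaller dataset.
    print(f'Download failed: {e}')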

Reading and writing data

WbW supports reading and writing many types of raster, vector, and LiDAR data formats.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

wbe.working_directory = 'path/containing/data/files'

# Reading raster data
my_raster = wbe.read_raster('file_name.tif')

# Or read_rasters (with an 's') for multiple rasters at once...
my_raster1, my_raster2 = wbe.read_rasters('file_name1.tif', 'file_name2.tif')

# Writing raster data
wbe.write_raster(my_raster, 'output.tif')

# You can also ask WbW to compress the written raster
wbe.write_raster(my_raster, 'output.tif', compress=True) # Only GeoTIFF compression is supported.

# Reading vector data
my_vector = wbe.read_vector('file_name.shp')

my_vector1, my_vector2 = wbe.read_vectors('file_name1.shp', 'file_name2.shp')

# Writing vector data
wbe.write_vector(my_vector, 'output.shp')

# Reading LiDAR data
my_lidar = wbe.read_lidar('file_name.laz')

my_lidar1, my_lidar2 = wbe.read_lidars('file_name1.laz', 'file_name2.laz')

# Writing LiDAR data
wbe.write_lidar(my_lidar, 'output.laz')

Supported Data Formats

Raster Formats

WbW can currently support reading/writing raster data in several common formats.

| Format | Extension | Read | Write |
|---|---|---|---|
| GeoTIFF | *.tif, *.tiff | X | X |
| Big GeoTIFF | *.tif, *.tiff | X | X |
| Esri ASCII | *.txt, *.asc | X | X |
| Esri BIL | *.bil, *.hdr | X | X |
| Esri Binary | *.flt and *.hdr | X | X |
| GRASS ASCII | *.txt, *.asc | X | X |
| Idrisi | *.rdc and *.rst | X | X |
| SAGA Binary | *.sdat and *.sgrd | X | X |
| Surfer ASCII | *.grd | X | X |
| Surfer Binary | *.grd | X | X |
| Whitebox | *.tas and *.dep | X | X |

Throughout this manual, code examples that manipulate raster files all use the GeoTIFF format (.tif), but any of the supported file extensions can be used in its place.

WbW is able to read GeoTIFFs compressed using the PackBits, DEFLATE, and LZW methods. Compressed GeoTIFFs, created using the DEFLATE algorithm, can also be output from any tool that generates raster output files by using compress=True.
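
For example, a short sketch of reading a compressed GeoTIFF and writing a DEFLATE-compressed copy (the file names are placeholders):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

# PackBits-, DEFLATE-, and LZW-compressed GeoTIFFs are decompressed automatically on read.
img = wbe.read_raster('input.tif')

# compress=True writes the output GeoTIFF using DEFLATE compression.
wbe.write_raster(img, 'output_compressed.tif', compress=True)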

Vector Formats

At present, there is limited support in WbW for working with vector geospatial data formats. The only supported vector format is the ESRI Shapefile. Shapefile geometries (.shp) and attributes (.dbf) can be read and written.

While the Shapefile format is extremely common, it does have certain limitations for vector representation. For example, owing to their 32-bit indexing, Shapefiles are limited in the number of geometries that can be stored in a single file. Furthermore, Shapefiles are incapable of storing geometries of more than one type (points, lines, polygons) within the same file. As such, the vector-related tools in WbW also carry these same limitations imposed by the Shapefile format.

Point Cloud (LiDAR) Formats

LiDAR data can be read/written in the common LAS and compressed LAZ data formats.
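
For example, a minimal sketch of converting a LAS file to compressed LAZ (the file name is a placeholder, and it is assumed here that the output format follows from the file extension):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

tile = wbe.read_lidar('tile.las') # placeholder file name
wbe.write_lidar(tile, 'tile.laz') # assumption: the .laz extension selects compressed output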

Using tool functions

Most of WbW's tool functions exist as methods of the WbEnvironment class.

from whitebox_workflows import download_sample_data, WbEnvironment
import math

wbe = WbEnvironment()
wbe.verbose = True
wbe.max_procs = -1

# Let's begin by downloading the Whitebox Workflows 'Guelph_landsat' sample data
wbe.working_directory = download_sample_data('Guelph_landsat')
print(f'Data have been stored in: {wbe.working_directory}')

# Read some of the image bands into memory
band2, band3, band4, band5 = wbe.read_rasters('band2.tif', 'band3.tif', 'band4.tif', 'band5.tif')

# Now let's call the create_colour_composite tool
true_colour_composite = wbe.create_colour_composite(
    red=band4, 
    green=band3, 
    blue=band2,
    enhance=False,
    treat_zeros_as_nodata=False
)
# The result 'true_colour_composite' is an in-memory raster. If we want to
# save it to disc to visualize it, we need to call 'write_raster'.
# wbe.write_raster(true_colour_composite, 'true_cc.tif', compress=True) # Uncomment this line to save to file

# Notice, that we don't need to specify the argument names of positional
# arguments (red, green, and blue above), and that we don't need to specify
# values for optional arguments if we accept their default values.
false_colour_composite = wbe.create_colour_composite(band5, band4, band3, enhance=False)
# wbe.write_raster(false_colour_composite, 'false_cc.tif', compress=True) # Uncomment this line to save to file

# If we don't want to see all of the progress updates output to stdout, 
# turn verbose mode off. But we also won't see errors or warnings.
wbe.verbose = False

# Now let's perform some image enhancements...
bce = wbe.balance_contrast_enhancement(true_colour_composite)

dds = wbe.direct_decorrelation_stretch(bce, achromatic_factor=0.3)
wbe.write_raster(dds, 'final_true_cc.tif', True)

# We can overwrite objects as well.
bce = wbe.balance_contrast_enhancement(false_colour_composite)

dds = wbe.direct_decorrelation_stretch(bce, achromatic_factor=0.3)
wbe.write_raster(dds, 'final_false_cc.tif', True)

There are also a large number of functions associated with the Raster class for manipulating raster data sets. These Raster functions allow for Python-based raster algebra operations. For example:

# Calculate the normalized difference vegetation index...
ndvi = (band5 - band4) / (band5 + band4)
wbe.write_raster(ndvi, 'ndvi.tif', compress=True)

# use a multiplier and offset to adjust the values of a raster
band1 = wbe.read_raster('band1.tif') # band1 was not read earlier in this script
multiplier = 0.012152
offset = -60.76071
sun_elev = math.radians(64.31337609)
reflectance = (band1 * multiplier + offset) * math.sin(sun_elev)

max_val = band1.max(band2)

Working with raster data

from whitebox_workflows import PhotometricInterpretation, RasterDataType, WbEnvironment, download_sample_data

wbe = WbEnvironment()
wbe.verbose = True
wbe.max_procs = -1

# Let's begin by downloading the Whitebox Workflows 'Jay_State_Forest' sample data
wbe.working_directory = download_sample_data('Jay_State_Forest')
print(f'Data have been stored in: {wbe.working_directory}')

# Now read the 'DEM.tif' file...
dem = wbe.read_raster('DEM.tif')

# The RasterConfigs of a Raster object contains useful metadata about the Raster.
print(f'Rows: {dem.configs.rows}')
print(f'Columns: {dem.configs.columns}')
print(f'Resolution (x direction): {dem.configs.resolution_x}')
print(f'Resolution (y direction): {dem.configs.resolution_y}')
print(f'North: {dem.configs.north}')
print(f'South: {dem.configs.south}')
print(f'East: {dem.configs.east}')
print(f'West: {dem.configs.west}')
print(f'Min value: {dem.configs.minimum}')
print(f'Max value: {dem.configs.maximum}')
print(f'EPSG code: {dem.configs.epsg_code}') # 0 if not set
print(f'Nodata value: {dem.configs.nodata}')
# What data type are stored in raster grid cells?
# See the RasterDataType class for more info.
print(f'Data type: {dem.configs.data_type}') 
# What is the photometric interpretation, continuous, categorical, RGB, etc.?
# See the PhotometricInterpretation class for more info.
print(f'Photometric interpretation: {dem.configs.photometric_interp}')

# We create new rasters most frequently by copying the RasterConfigs from another
# existing Raster object. We can also create a new RasterConfigs manually but
# when we want to create a new Raster that has the same rows, columns and extent
# as another Raster, copying the other Raster's RasterConfigs, and modifying it
# as needed, is a good way forward.
out_configs = dem.configs

# Once you create a new Raster, you cannot change certain things about it, such
# as the number of rows and columns and the data type. The RasterDataType must
# be able to hold the data values. In the case below, we are reclassifying the
# raster to a Boolean, with 1's and 0's. So we really only need small, integer
# level data. I16 is used in this case to allow for NoData values, which will
# be set to -32768, the smallest possible 16-bit int.
out_configs.data_type = RasterDataType.I16 
out_configs.nodata = -32768.0
out_configs.photometric_interp = PhotometricInterpretation.Categorical

# Now let's create the new raster, based on our customized RasterConfigs...
high_areas = wbe.new_raster(out_configs)

# When we create a new raster, it is initially filled with NoData values, as
# set in its RasterConfigs.
print(f'Cell(500, 500) = {high_areas[500, 500]}') # = -32768.0

# Let's manipulate the raster data at the individual grid cell level.
print("Finding high elevations")
old_progress = -1
for row in range(dem.configs.rows):
    for col in range(dem.configs.columns):
        elev = dem[row, col] # Read a cell value from a Raster
        if elev > 800.0 and elev != dem.configs.nodata:
            high_areas[row, col] = 1.0 # Write the cell value of a Raster

            # Regardless of the RasterDataType used to store the cell
            # data in memory and in file, data are always passed to and
            # returned from rasters as floats. Note that the cell value
            # is set to 1.0 above and not 1.
        elif elev != dem.configs.nodata:
            # We must do the check for NoData, or else we'll replace
            # NoData values in the input raster with 0's in the output. 
            # NoData in must be NoData out.
            high_areas[row, col] = 0.0
    
    # Update the progress after each completed row scan.
    progress = int(((row + 1.0) / dem.configs.rows) * 100.0)
    if progress != old_progress:
        old_progress = progress
        print(f'Progress: {progress}%')

# Write the new Raster to file.
print('Saving data to file...')
wbe.write_raster(high_areas, 'high_areas.tif', compress=True)

# This allows for very fine-grained raster manipulation for custom data processing.
# But if the same functionality exists within the WbW toolset, you should always 
# prefer the native solution, because it will be faster than the Python alternative. 
# The code above could have been more efficiently processed using the following:
high_areas = dem > 800.0
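
Alternatively, the Raster.con method mentioned earlier performs per-cell conditional evaluation in a single call. A minimal sketch, assuming the conditional statement refers to the current cell value as 'value' (check help(Raster.con) for the exact syntax):

# Cells higher than 800 m become 1.0 and all other valid cells become 0.0;
# NoData cells remain NoData.
high_areas = dem.con('value > 800.0', 1.0, 0.0)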

Working with vector data

from whitebox_workflows import AttributeField, FieldData, FieldDataType, VectorGeometryType, WbEnvironment, download_sample_data

wbe = WbEnvironment()

# Let's begin by downloading the Whitebox Workflows 'Southern_Ontario_roads' sample data
wbe.working_directory = download_sample_data('Southern_Ontario_roads')
print(f'Data have been stored in: {wbe.working_directory}')

# Read in the roads file
roads = wbe.read_vector('roads_utm.shp')

# Let's see some of the properties of this file...
print(f'Vector geometry type: {roads.header.shape_type}') # A VectorGeometryType.PolyLine
print(f'Min X: {roads.header.x_min}')
print(f'Max X: {roads.header.x_max}')
print(f'Min Y: {roads.header.y_min}')
print(f'Max Y: {roads.header.y_max}')
print(f'Number of records: {roads.num_records}')
print(f'Projection: {roads.projection}')

# To retrieve a vector geometry, use the [] syntax.
geom = roads[100]
print(f'Geometry type: {geom.shape_type}')
print(f'Num vertices: {geom.num_points}')
print(f'Num parts: {geom.num_parts}')
print(f'Min X: {geom.x_min}') # There are also min/max x, y, measure, and z properties
print(f'First vertex: ({geom.points[0].x}, {geom.points[0].y})')

# What about the file attributes?
print(f'Num records: {roads.attributes.header.num_records}') # Should be same as roads.num_records
print(f'Num fields: {roads.attributes.get_num_fields()}')

# Retrieve and print the attribute fields
att_fields = roads.get_attribute_fields()
for i in range(len(att_fields)):
    print(att_fields[i])

# What's the index of a specific field, by attribute name?
index_of_road_class = roads.get_attribute_field_num("ROAD_CLASS")

# Let's create a new Vector object...

# First, we need to create the attribute fields used for the new vector
out_att_fields = [
    AttributeField("FID", FieldDataType.Int, 6, 0),
    AttributeField("SRC_FID", FieldDataType.Int, 6, 0),
]

# Now create the Vector itself.
out_roads = wbe.new_vector(VectorGeometryType.PolyLine, out_att_fields, proj=roads.projection)

# You can also add a new attribute field after creating the file.
out_roads.add_attribute_field(att_fields[index_of_road_class])

# Now let's filter all the records to find those geometries with a ROAD_CLASS value indicating major highways
old_progress = -1
fid = 1
for i in range(roads.num_records):
    road_class = roads.get_attribute_value(i, 'ROAD_CLASS').get_as_string().lower()
    if 'freeway' in road_class or 'collector' in road_class or 'expressway' in road_class:
        geom = roads[i]
        out_roads.add_record(geom) # Add the record to the output Vector

        # Create the output attribute record...
        att_rec = roads.get_attribute_record(i)
        rec_data = [
            FieldData.new_int(fid), 
            FieldData.new_int(i+1),
            att_rec[index_of_road_class]
        ]
        fid += 1
        # then add it to the table.
        out_roads.add_attribute_record(rec_data, deleted=False)

    # Update the progress after completing each 1% of the records.
    progress = int((i + 1.0) / roads.num_records * 100.0)
    if progress != old_progress:
        old_progress = progress
        print(f'Progress: {progress}%')


# Write the output to file.
print('Writing vector to file...')
wbe.write_vector(out_roads, 'out_roads.shp')

Working with LiDAR data

from whitebox_workflows import WbEnvironment, download_sample_data

wbe = WbEnvironment()

# Let's begin by downloading the Whitebox Workflows 'Kitchener_lidar' sample data
wbe.working_directory = download_sample_data('Kitchener_lidar')
print(f'Data have been stored in: {wbe.working_directory}')

# Read in an existing lidar data set
lidar = wbe.read_lidar('Kitchener_lidar.laz')

# To create a new Lidar object, you need a LidarHeader, which documents metadata
# about a LiDAR file. It is the LiDAR equivalent to the RasterConfigs.
print(f'File creation day: {lidar.header.file_creation_day}')
print(f'File creation year: {lidar.header.file_creation_year}')
print(f'Generating software: {lidar.header.generating_software}')
num_points = lidar.header.get_num_points()
print(f'Number of points: {num_points}')
print(f'Version major: {lidar.header.version_major}')
print(f'Version minor: {lidar.header.version_minor}')
print(f'Point format: {lidar.header.point_format}')
print(f'Min X: {lidar.header.min_x}')
print(f'Max X: {lidar.header.max_x}')
print(f'Min Y: {lidar.header.min_y}')
print(f'Max Y: {lidar.header.max_y}')
print(f'Min Z: {lidar.header.min_z}')
print(f'Max Z: {lidar.header.max_z}')

# Now, create a new Lidar object. In doing so, the new Lidar object will copy 
# over some of the properties in the source LidarHeader, but won't copy any of 
# the things like generating software, creation day/year, point extent values,
# or the source point data. It's a newly initialized file ready to receive its
# own point data.
lidar_out = wbe.new_lidar(lidar.header)

# You likely want to copy over the VariableLengthRecord (VLR) data too. VLRs
# usually contain important information, such as the coordinate reference 
# system.
lidar_out.vlr_data = lidar.vlr_data

print('Filtering point data...')
old_progress = -1
for i in range(num_points):
    # Notice that if the file does not contain time, colour, or waveform data,
    # each of these will simply be None. You can use the has_time_data(),
    # has_colour_data(), and has_waveform_data() methods to determine if these
    # data are stored in the file.
    point_data, time, colour, waveform = lidar.get_point_record(i)

    # The PointData returned by the get_point_record method has the raw
    # untransformed point coordinate information as well as all the info
    # about point intensity, class, return values, etc. If you simply want
    # the transformed x,y,z coordinates, use the get_transformed_xyz method
    # instead.
    
    # Now let's filter the data based on return data...
    if point_data.is_first_return() or point_data.is_intermediate_return():
        # Save the point to lidar_out
        lidar_out.add_point(point_data, time, colour, waveform)

    # Update the progress once we've completed another 1% of the points.
    progress = int((i + 1.0) / num_points * 100.0)
    if progress != old_progress:
        old_progress = progress
        print(f'Progress: {progress}%')


# Write lidar_out to file
wbe.write_lidar(lidar_out, "new_lidar.laz")

WbW Tutorials

We have created a number of WbW tutorials that use Jupyter Notebooks.

Tool function documentation

WbW Application Programming Interface (API)

The following is a listing of function signatures, including argument names and default values, for the purpose of serving as a quick reference. The tool function documentation contains this same information as well as associated help docs. Notice that the WbEnvironment class information lists both the WbW and WbW-Pro tier product functions.

module whitebox_workflows


def activate_license(key: str, firstname: str, lastname: str, email: str, agree_to_license_terms: bool) -> None: ...

def check_in_license(key: str) -> str: ...

def deactivate_license() -> None: ...

def download_sample_data(data_set: str) -> str: ...

def license_info() -> None: ...

def transfer_license() -> None: ...

class AttributeField

@property
def name(self) -> str: ...

@property
def field_type(self) -> int: ...

@property
def field_length(self) -> int: ...

@property
def decimal_count(self) -> int: ...

@staticmethod
def new(name: str, field_type: FieldDataType, field_length: int, decimal_count: int) -> AttributeField: ...

class AttributeHeader

@property
def version(self) -> int: ...

@property
def year(self) -> int: ...

@property
def month(self) -> int: ...

@property
def day(self) -> int: ...

@property
def num_records(self) -> int: ...

@property
def num_fields(self) -> int: ...

@property
def bytes_in_header(self) -> int: ...

@property
def bytes_in_record(self) -> int: ...

@property
def incomplete_tansaction(self) -> int: ...

@property
def encryption_flag(self) -> int: ...

@property
def mdx_flag(self) -> int: ...

@property
def language_driver_id(self) -> int: ...

class BoundingBox

@property
def min_x(self) -> float: ...

@min_x.setter
def min_x(self, value: float) -> None: ...

@property
def min_y(self) -> float: ...

@min_y.setter
def min_y(self, value: float) -> None: ...

@property
def max_x(self) -> float: ...

@max_x.setter
def max_x(self, value: float) -> None: ...

@property
def max_y(self) -> float: ...

@max_y.setter
def max_y(self, value: float) -> None: ...

@staticmethod
def new(min_x: float, max_x: float, min_y: float, max_y: float) -> BoundingBox: ...

@staticmethod
def from_two_points(p1: Point2D, p2: Point2D) -> BoundingBox: ...

def initialize_to_inf(self) -> None: ...

def get_height(self) -> float: ...

def get_width(self) -> float: ...

def is_point_in_box(self, x: float, y: float) -> bool: ...

def overlaps(self, other: BoundingBox) -> bool: ...

def nearly_overlaps(self, other: BoundingBox) -> bool: ...

def intersects_edge_of(self, other: BoundingBox) -> bool: ...

def entirely_contained_within(self, other: BoundingBox) -> bool: ...

def within(self, other: BoundingBox) -> bool: ...

def entirely_contains(self, other: BoundingBox) -> bool: ...

def contains(self, other: BoundingBox) -> bool: ...

def intersect(self, other: BoundingBox) -> BoundingBox: ...

def expand_to(self, other: BoundingBox) -> None: ...

def contract_to(self, other: BoundingBox) -> None: ...

def expand_by(self, value: float) -> None: ...

def contract_by(self, value: float) -> None: ...

class ColourData

@property
def red(self) -> int: ...

@red.setter
def red(self, value: int) -> None: ...

@property
def green(self) -> int: ...

@green.setter
def green(self, value: int) -> None: ...

@property
def blue(self) -> int: ...

@blue.setter
def blue(self, value: int) -> None: ...

@property
def nir(self) -> int: ...

@nir.setter
def nir(self, value: int) -> None: ...

class DateData

@property
def year(self) -> int: ...

@property
def month(self) -> int: ...

@property
def day(self) -> int: ...

class FieldData

@staticmethod
def new() -> FieldData: ...

@staticmethod
def new_int(value: int) -> FieldData: ...

@staticmethod
def new_real(value: float) -> FieldData: ...

@staticmethod
def new_text(value: str) -> FieldData: ...

@staticmethod
def new_date(value: DateData) -> FieldData: ...

@staticmethod
def new_bool(value: bool) -> FieldData: ...

@staticmethod
def new_null() -> FieldData: ...

def get_type(self) -> FieldDataType: ...

def get_value_as_f64(self) -> float: ...

def get_as_string(self) -> str: ...

class FieldDataType

Bool: int
Date: int
Int: int
Real: int
Text: int

class GlobalEncodingField

def __init__(self) -> None: ...

class LicenseType

WbW: str
WbWPro: str

class LidarHeader

@property
def file_signature(self) -> str: ...

@property
def file_source_id(self) -> int: ...

@property
def global_encoding(self) -> GlobalEncodingField: ...

@property
def project_id_used(self) -> bool: ...

@property
def project_id1(self) -> int: ...

@property
def project_id2(self) -> int: ...

@property
def project_id3(self) -> int: ...

@property
def project_id4(self) -> Tuple[int]: ...

@property
def version_major(self) -> int: ...

@property
def version_minor(self) -> int: ...

@property
def system_id(self) -> str: ...

@property
def generating_software(self) -> str: ...

@property
def file_creation_day(self) -> int: ...

@property
def file_creation_year(self) -> int: ...

@property
def header_size(self) -> int: ...

@property
def offset_to_points(self) -> int: ...

@property
def number_of_vlrs(self) -> int: ...

@property
def number_of_extended_vlrs(self) -> int: ...

@property
def offset_to_ex_vlrs(self) -> int: ...

@property
def point_record_length(self) -> int: ...

@property
def point_format(self) -> int: ...

@property
def number_of_points_old(self) -> int: ...

@property
def number_of_points(self) -> int: ...

@property
def number_of_points_by_return_old(self) -> Tuple[int, int, int, int, int]: ...

@property
def number_of_points_by_return(self) -> Tuple[int, int, int, int, int, int, int, int, int, int, int, int, int, int, int]: ...

@property
def x_scale_factor(self) -> float: ...

@property
def y_scale_factor(self) -> float: ...

@property
def z_scale_factor(self) -> float: ...

@property
def x_offset(self) -> float: ...

@property
def y_offset(self) -> float: ...

@property
def z_offset(self) -> float: ...

@property
def max_x(self) -> float: ...

@property
def min_x(self) -> float: ...

@property
def max_y(self) -> float: ...

@property
def min_y(self) -> float: ...

@property
def max_z(self) -> float: ...

@property
def min_z(self) -> float: ...

@property
def waveform_data_start(self) -> int: ...

def get_num_points(self) -> int: ...

class LidarPointData

@property
def x(self) -> int: ...

@property
def y(self) -> int: ...

@property
def z(self) -> int: ...

@property
def intensity(self) -> int: ...

@property
def point_bit_field(self) -> int: ...

@property
def class_bit_field(self) -> int: ...

@property
def scan_angle(self) -> int: ...

@property
def user_data(self) -> int: ...

@property
def point_source_id(self) -> int: ...

@property
def is_64bit(self) -> bool: ...

def get_32bit_from_64bit(self) -> Tuple[int, int]: ...

def return_number(self) -> int: ...

def set_return_number(self, value: int) -> None: ...

def number_of_returns(self) -> int: ...

def set_number_of_returns(self, value: int) -> None: ...

def is_only_return(self) -> bool: ...

def is_multiple_return(self) -> bool: ...

def is_early_return(self) -> bool: ...

def is_late_return(self) -> bool: ...

def is_last_return(self) -> bool: ...

def is_first_return(self) -> bool: ...

def is_intermediate_return(self) -> bool: ...

def scan_direction_flag(self) -> bool: ...

def set_scan_direction_flag(self, value: bool) -> None: ...

def edge_of_flightline_flag(self) -> bool: ...

def set_edge_of_flightline_flag(self, value: bool) -> None: ...

def classification(self) -> int: ...

def set_classification(self, value: int) -> None: ...

def classification_string(self) -> str: ...

def is_classified_vegetation(self) -> bool: ...

def synthetic(self) -> bool: ...

def set_synthetic(self, value: bool) -> None: ...

def keypoint(self) -> bool: ...

def set_keypoint(self, value: bool) -> None: ...

def withheld(self) -> bool: ...

def set_withheld(self, value: bool) -> None: ...

def overlap(self) -> bool: ...

def set_overlap(self, value: bool) -> None: ...

def is_classified_noise(self) -> bool: ...

def scanner_channel(self) -> int: ...

def set_scanner_channel(self, value: int) -> None: ...

class Lidar

@property
def file_name(self) -> str: ...

@file_name.setter
def file_name(self, value: str): ...

@property
def header(self) -> LidarHeader: ...

@property
def vlr_data(self) -> List[VariableLengthRecord]: ...

@vlr_data.setter
def vlr_data(self, value: List[VariableLengthRecord]): ...

@property
def wkt(self) -> str: ...

@property
def use_point_intensity(self) -> bool: ...

@property
def use_point_userdata(self) -> bool: ...

def get_point_record(self, index: int) -> Tuple[LidarPointData, Optional[float], Optional[ColourData], Optional[WaveformPacket]]: ...

def get_transformed_xyz(self, index: int) -> Tuple[float, float, float]: ...

def add_point(self, point_data: LidarPointData, time: Optional[float] = None, colour_data: Optional[ColourData] = None, waveform_data: Optional[WaveformPacket] = None) -> None: ...

def has_time_data(self) -> bool: ...

def has_colour_data(self) -> bool: ...

def has_waveform_data(self) -> bool: ...

def get_well_known_text(self) -> str: ...

def print_variable_length_records(self) -> str: ...

class PhotometricInterpretation

Continuous: int
Categorical: int
Boolean: int
RGB: int
Paletted: int
Unknown: int

class Point2D

@property
def x(self) -> float: ...

@x.setter
def x(self, value: float) -> None: ...

@property
def y(self) -> float: ...

@y.setter
def y(self, value: float) -> None: ...

@staticmethod
def new(x: float, y: float) -> Point2D: ...

class Point3D

@property
def x(self) -> float: ...

@x.setter
def x(self, value: float) -> None: ...

@property
def y(self) -> float: ...

@y.setter
def y(self, value: float) -> None: ...

@property
def z(self) -> float: ...

@z.setter
def z(self, value: float) -> None: ...

@staticmethod
def new(x: float, y: float, z: float) -> Point3D: ...

class RasterDataType

F64: int
F32: int
I64: int
U64: int
RGB48: int
I32: int
U32: int
RGB24: int
RGBA32: int
I16: int
U16: int
I8: int
U8: int
Unknown: int
def get_data_size(self) -> int: ...

def is_float(self) -> bool: ...

def is_integer(self) -> bool: ...

def is_unsigned_integer(self) -> bool: ...

def is_signed_integer(self) -> bool: ...

def is_colour_data(self) -> bool: ...

def return_wider(self, other: RasterDataType) -> RasterDataType: ...

class RasterConfigs

@property
def title(self) -> str: ...

@title.setter
def title(self, value: str) -> None: ...

@property
def rows(self) -> int: ...

@rows.setter
def rows(self, value: int) -> None: ...

@property
def columns(self) -> int: ...

@columns.setter
def columns(self, value: int) -> None: ...

@property
def nodata(self) -> float: ...

@nodata.setter
def nodata(self, value: float) -> None: ...

@property
def north(self) -> float: ...

@north.setter
def north(self, value: float) -> None: ...

@property
def south(self) -> float: ...

@south.setter
def south(self, value: float) -> None: ...

@property
def east(self) -> float: ...

@east.setter
def east(self, value: float) -> None: ...

@property
def west(self) -> float: ...

@west.setter
def west(self, value: float) -> None: ...

@property
def resolution_x(self) -> float: ...

@resolution_x.setter
def resolution_x(self, value: float) -> None: ...

@property
def resolution_y(self) -> float: ...

@resolution_y.setter
def resolution_y(self, value: float) -> None: ...

@property
def minimum(self) -> float: ...

@minimum.setter
def minimum(self, value: float) -> None: ...

@property
def maximum(self) -> float: ...

@maximum.setter
def maximum(self, value: float) -> None: ...

@property
def palette(self) -> str: ...

@property
def projection(self) -> str: ...

@property
def photometric_interp(self) -> PhotometricInterpretation: ...

@photometric_interp.setter
def photometric_interp(self, value: PhotometricInterpretation) -> None: ...

@property
def data_type(self) -> RasterDataType: ...

@data_type.setter
def data_type(self, value: RasterDataType) -> None: ...

@property
def z_units(self) -> str: ...

@property
def xy_units(self) -> str: ...

@property
def reflect_at_edges(self) -> bool: ...

@reflect_at_edges.setter
def reflect_at_edges(self, value: bool) -> None: ...

@property
def pixel_is_area(self) -> bool: ...

@pixel_is_area.setter
def pixel_is_area(self, value: bool) -> None: ...

@property
def epsg_code(self) -> int: ...

@epsg_code.setter
def epsg_code(self, value: int) -> None: ...

@property
def coordinate_ref_system_wkt(self) -> str: ...

@coordinate_ref_system_wkt.setter
def coordinate_ref_system_wkt(self, value: str) -> None: ...

@staticmethod
def new() -> RasterConfigs: ...

class RasterType

Unknown: int
ArcAscii: int
ArcBinary: int
EsriBil: int
GeoTiff: int
GrassAscii: int
IdrisiBinary: int
SagaBinary: int
Surfer7Binary: int
SurferAscii: int
Whitebox: int

class Raster

@property
def file_name(self) -> str: ...

@file_name.setter
def file_name(self, value: str) -> None: ...

@property
def file_mode(self) -> str: ...

@property
def raster_type(self) -> RasterType: ...

@property
def configs(self) -> RasterConfigs: ...

@staticmethod
def new_from_other(other: Raster, data_type: Optional[RasterDataType]) -> Raster: ...

def get_value(self, row: int, column: int) -> float: ...

def set_value(self, row: int, column: int, value: float) -> None: ...

def decrement(self, row: int, column: int, value: float) -> None: ...

def increment(self, row: int, column: int, value: float) -> None: ...

def set_row_data(self, row: int, values: List[float]) -> None: ...

def get_row_data(self, row: int) -> List[float]: ...

def increment_row_data(self, row: int, values: List[float]) -> None: ...

def decrement_row_data(self, row: int, values: List[float]) -> None: ...

def set_data_from_raster(self, other: Raster) -> Optional[str]: ...

def reinitialize_values(self, value: float) -> None: ...

def get_value_as_rgba(self, row: int, column: int) -> Tuple[int, int, int, int]: ...

def get_value_as_hsi(self, row: int, column: int) -> Tuple[float, float, float]: ...

def set_value_from_rgba(self, row: int, column: int, rgba: Tuple[int, int, int, int]) -> None: ...

def get_data_size_in_bytes(self) -> int: ...

def get_x_from_column(self, column: int) -> float: ...

def get_y_from_row(self, row: int) -> float: ...

def get_column_from_x(self, x: float) -> int: ...

def get_row_from_y(self, y: float) -> int: ...

def size_of(self) -> int: ...

def __add__(self, other: Union[Raster, float]) -> Raster: ...

def __sub__(self, other: Union[Raster, float]) -> Raster: ...

def __mul__(self, other: Union[Raster, float]) -> Raster: ...

def __truediv__(self, other: Union[Raster, float]) -> Raster: ...

def __floordiv__(self, other: Union[Raster, float]) -> Raster: ...

def __mod__(self, other: Union[Raster, float]) -> Raster: ...

def __pow__(self, other: Union[Raster, float], modulo: Optional[float] = None) -> Raster: ...

def __neg__(self) -> Raster: ...

def __abs__(self) -> Raster: ...

def __iadd__(self, other: Union[Raster, float]) -> None: ...

def __isub__(self, other: Union[Raster, float]) -> None: ...

def __imul__(self, other: Union[Raster, float]) -> None: ...

def __idiv__(self, other: Union[Raster, float]) -> None: ...

def __getitem__(self, row_column: Tuple[int, int]) -> float: ...

def __setitem__(self, row_column: Tuple[int, int], value: float) -> None: ...

def __gt__(self, other: Union[Raster, float]) -> Raster: ...

def __ge__(self, other: Union[Raster, float]) -> Raster: ...

def __lt__(self, other: Union[Raster, float]) -> Raster: ...

def __le__(self, other: Union[Raster, float]) -> Raster: ...

def __eq__(self, other: Union[Raster, float]) -> Raster: ...

def __ne__(self, other: Union[Raster, float]) -> Raster: ...

def acos(self) -> Raster: ...

def acosh(self) -> Raster: ...

def asin(self) -> Raster: ...

def asinh(self) -> Raster: ...

def atan(self) -> Raster: ...

def atan2(self, other: Union[Raster, float]) -> Raster: ...

def atanh(self) -> Raster: ...

def ceil(self) -> Raster: ...

def con(self, con_statement: str, true_raster_or_float: Union[Raster, float, str], false_raster_or_float: Union[Raster, float, str]) -> Raster: ...

def cos(self) -> Raster: ...

def cosh(self) -> Raster: ...

def exp(self) -> Raster: ...

def exp2(self) -> Raster: ...

def floor(self) -> Raster: ...

def is_nodata(self) -> Raster: ...

def ln(self) -> Raster: ...

def log2(self) -> Raster: ...

def log10(self) -> Raster: ...

def max(self, other: Union[Raster, float]) -> Raster: ...

def min(self, other: Union[Raster, float]) -> Raster: ...

def normalize(self) -> Raster: ...

def signum(self) -> Raster: ...

def sin(self) -> Raster: ...

def sinh(self) -> Raster: ...

def sqrt(self) -> Raster: ...

def square(self) -> Raster: ...

def tan(self) -> Raster: ...

def tanh(self) -> Raster: ...

def to_degrees(self) -> Raster: ...

def to_radians(self) -> Raster: ...

def trunc(self) -> Raster: ...

def num_cells(self) -> int: ...

def num_valid_cells(self) -> int: ...

def calculate_mean(self) -> float: ...

def calculate_mean_and_stdev(self) -> Tuple[float, float]: ...

def calculate_clip_values(self, percent: float) -> Tuple[float, float]: ...

def deep_copy(self) -> Raster: ...

def update_min_max(self) -> None: ...
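
The arithmetic, comparison, and statistical methods above all return either new Raster objects or plain Python values, so rasters can be combined directly in Python expressions. The following is a minimal sketch (the file names are placeholders) that standardizes a DEM using only methods listed in this class, together with the WbEnvironment I/O functions documented later in this section:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # placeholder file name

# calculate_mean_and_stdev returns a (mean, standard deviation) tuple
mean_elev, stdev_elev = dem.calculate_mean_and_stdev()

# arithmetic and comparison operators each produce a new Raster
z_score = (dem - mean_elev) / stdev_elev
high_ground = z_score > 1.0

wbe.write_raster(high_ground, 'high_ground.tif')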

class VariableLengthRecord

@property
def reserved(self) -> int: ...

@property
def user_id(self) -> str: ...

@property
def record_id(self) -> int: ...

@property
def record_length_after_header(self) -> int: ...

@property
def description(self) -> str: ...

@property
def binary_data(self) -> List[int]: ...

class VectorAttributes

@property
def header(self) -> AttributeHeader: ...

@property
def fields(self) -> List[AttributeField]: ...

@property
def is_deleted(self) -> List[bool]: ...

def get_num_fields(self) -> int: ...

class VectorGeometry

@property
def shape_type(self) -> VectorGeometryType: ...

@property
def x_min(self) -> float: ...

@property
def x_max(self) -> float: ...

@property
def y_min(self) -> float: ...

@property
def y_max(self) -> float: ...

@property
def num_parts(self) -> int: ...

@property
def num_points(self) -> int: ...

@property
def parts(self) -> List[int]: ...

@property
def points(self) -> List[Point2D]: ...

@property
def z_min(self) -> float: ...

@property
def z_max(self) -> float: ...

@property
def z_array(self) -> List[float]: ...

@property
def m_min(self) -> float: ...

@property
def m_max(self) -> float: ...

@property
def m_array(self) -> List[float]: ...

@staticmethod
def new_vector_geometry(shape_type: VectorGeometryType) -> VectorGeometry: ...

def add_point(self, p: Point2D) -> None: ...

def add_pointm(self, p: Point2D, m: float) -> None: ...

def add_pointz(self, p: Point2D, m: float, z: float) -> None: ...

def add_geom_part(self, points: List[Point2D]) -> None: ...

def add_geom_partm(self, points: List[Point2D], measures: List[float]) -> None: ...

def add_geom_partz(self, points: List[Point2D], measures: List[float], z_values: List[float]) -> None: ...

def get_bounding_box(self) -> BoundingBox: ...

def get_length(self) -> int: ...

def has_m_data(self) -> bool: ...

def has_z_data(self) -> bool: ...

def is_hole(self, part_num: int) -> bool: ...
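
New geometries can also be built directly in a script before being added to a Vector. The sketch below is a minimal example; it assumes that these classes can be imported from the whitebox_workflows package and that Point2D offers a simple Point2D(x, y) constructor, and the coordinates shown are illustrative only:

from whitebox_workflows import Point2D, VectorGeometry, VectorGeometryType

# build a single-part polyline geometry
geom = VectorGeometry.new_vector_geometry(VectorGeometryType.PolyLine)
part = [
    Point2D(527000.0, 4819000.0),
    Point2D(527050.0, 4819075.0),
    Point2D(527125.0, 4819150.0)
]
geom.add_geom_part(part)  # a part is simply a list of Point2D vertices

print(geom.num_parts, geom.num_points)  # expected: 1 3
bb = geom.get_bounding_box()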

class VectorGeometryType

Null: int
Point: int
PolyLine: int
Polygon: int
MultiPoint: int
PointZ: int
PolyLineZ: int
PolygonZ: int
MultiPointZ: int
PointM: int
PolyLineM: int
PolygonM: int
MultiPointM: int

class VectorHeader

@property
def file_length(self) -> int: ...

@property
def version(self) -> int: ...

@property
def shape_type(self) -> VectorGeometryType: ...

@property
def x_min(self) -> float: ...

@property
def y_min(self) -> float: ...

@property
def x_max(self) -> float: ...

@property
def y_max(self) -> float: ...

@property
def z_min(self) -> float: ...

@property
def z_max(self) -> float: ...

@property
def m_min(self) -> float: ...

@property
def m_max(self) -> float: ...

class Vector

@property
def file_name(self) -> str: ...

@file_name.setter
def file_name(self, value: str) -> None: ...

@property
def file_mode(self) -> str: ...

@property
def header(self) -> VectorHeader: ...

@property
def num_records(self) -> int: ...

@property
def records(self) -> List[VectorGeometry]: ...

@property
def attributes(self) -> VectorAttributes: ...

@property
def projection(self) -> str: ...

def __getitem__(self, index: int) -> VectorGeometry: ...

def add_record(self, geometry: VectorGeometry) -> None: ...

def add_attribute_field(self, field: AttributeField) -> None: ...

def add_attribute_fields(self, fields: List[AttributeField]) -> None: ...

def add_attribute_record(self, rec: List[FieldData], deleted: bool) -> None: ...

def contains_attribute_field(self, field: AttributeField) -> bool: ...

def get_attribute_fields(self) -> List[AttributeField]: ...

def get_attribute_record(self, index: int) -> List[FieldData]: ...

def get_attribute_field_info(self, index: int) -> AttributeField: ...

def get_attribute_field_num(self, name: str) -> Optional[int]: ...

def get_num_attributes_fields(self) -> int: ...

def get_attribute_value(self, record_index: int, field_name: str) -> FieldData: ...

def is_attribute_field_numeric(self, index: int) -> bool: ...

def reinitialize_attributes(self) -> None: ...

def set_attribute_value(self, record_index: int, field_name: str, field_data: FieldData) -> None: ...
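
Reading and inspecting an existing vector uses the properties and methods above together with read_vector (documented under WbEnvironment below). A minimal sketch, in which the file and field names are placeholders:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
v = wbe.read_vector('streams.shp')  # placeholder file name

print(v.header.shape_type, v.num_records)

# 'LENGTH' is a hypothetical attribute field; get_attribute_field_num returns None if it is absent
if v.get_attribute_field_num('LENGTH') is not None:
    for i in range(v.num_records):
        geom = v[i]                                 # __getitem__ returns a VectorGeometry
        value = v.get_attribute_value(i, 'LENGTH')  # FieldData for this record/field
        print(geom.num_points, value)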

class WaveformPacket

@property
def packet_descriptor_index(self) -> int: ...

@packet_descriptor_index.setter
def packet_descriptor_index(self, value: int) -> None: ...

@property
def offset_to_waveform_data(self) -> int: ...

@offset_to_waveform_data.setter
def offset_to_waveform_data(self, value: int) -> None: ...

@property
def waveform_packet_size(self) -> int: ...

@waveform_packet_size.setter
def waveform_packet_size(self, value: int) -> None: ...

@property
def ret_point_waveform_loc(self) -> float: ...

@ret_point_waveform_loc.setter
def ret_point_waveform_loc(self, value: float) -> None: ...

@property
def xt(self) -> float: ...

@xt.setter
def xt(self, value: float) -> None: ...

@property
def yt(self) -> float: ...

@yt.setter
def yt(self, value: float) -> None: ...

@property
def zt(self) -> float: ...

@zt.setter
def zt(self, value: float) -> None: ...

class WbEnvironment

@property
def max_procs(self) -> int: ...

@max_procs.setter
def max_procs(self, value: int) -> None: ...

@property
def verbose(self) -> bool: ...

@verbose.setter
def verbose(self, value: bool) -> None: ...

@property
def working_directory(self) -> str: ...

@working_directory.setter
def working_directory(self, value: str) -> None: ...

def __new__(cls, user_id: str = "") -> WbEnvironment: ...

def version(self) -> str: ...

def read_lidar(self, file_name: str, file_mode: str = "r") -> Lidar: ...

def read_lidars(self, file_names: List[str]) -> List[Lidar]: ...

def new_lidar(self, header: LidarHeader) -> Lidar: ...

def write_lidar(self, lidar: Lidar, file_name: str) -> None: ...

def read_raster(self, file_name: str) -> Raster: ...

def read_rasters(self, file_name: List[str]) -> List[Raster]: ...

def new_raster(self, configs: RasterConfigs) -> Raster: ...

def write_raster(self, raster: Raster, file_name: str, compress: bool = False) -> None: ...

def read_vector(self, file_name: str) -> Vector: ...

def read_vectors(self, file_name: List[str]) -> List[Vector]: ...

def new_vector(self, shape_type: VectorGeometryType, attributes: Optional[List[AttributeField]] = None, proj: str = "") -> Vector: ...

def write_vector(self, vector: Vector, file_name: str) -> None: ...

def write_text(self, text: str, file_name: str) -> None: ...
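
The I/O methods above are normally combined with the geoprocessing functions listed below, keeping intermediate rasters in memory rather than writing them to disk at each step. A minimal sketch of this pattern (the directory and file names are placeholders, and it assumes file names are resolved relative to working_directory):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.verbose = False
wbe.working_directory = '/path/to/data'  # placeholder directory

dem = wbe.read_raster('dem.tif')  # placeholder file name

# intermediate results stay in memory between steps
filled = wbe.fill_depressions(dem)
flow_accum = wbe.d8_flow_accum(filled)
hs = wbe.hillshade(filled)

wbe.write_raster(flow_accum, 'flow_accum.tif')
wbe.write_raster(hs, 'hillshade.tif')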

# Requires WbW-Pro license
def accumulation_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def adaptive_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, threshold: float = 2.0) -> Raster: ...

def add_point_coordinates_to_table(self, input: Vector) -> Vector: ...

def aggregate_raster(self, raster: Raster, aggregation_factor: int = 2, aggregation_type: str = "mean") -> Raster: ...

def anova(self, input_raster: Raster, features_raster: Raster, output_html_file: str) -> None: ...

def ascii_to_las(self, input_ascii_files: List[str], pattern: str, epsg_code: int) -> None: ...

def aspect(self, dem: Raster, z_factor: float = 1.0) -> Raster: ...

# Requires WbW-Pro license
def assess_route(self, routes: Vector, dem: Raster, segment_length: float = 100.0, search_radius: int = 15) -> Vector: ...

def attribute_correlation(self, input: Vector, output_html_file: str) -> None: ...

def attribute_histogram(self, input: Vector, field_name: str, output_html_file: str) -> None: ...

def attribute_scattergram(self, input: Vector, field_name_x: str, field_name_y: str, output_html_file: str, add_trendline: bool = False) -> None: ...

def average_flowpath_slope(self, dem: Raster) -> Raster: ...

# Requires WbW-Pro license
def average_horizon_distance(self, dem: Raster, az_fraction: float = 5.0, max_dist: float = float('inf'), observer_hgt_offset: float = 0.05) -> Raster: ...

def average_normal_vector_angular_deviation(self, dem: Raster, filter_size: int = 11) -> Raster: ...

def average_overlay(self, input_rasters: List[Raster]) -> Raster: ...

def average_upslope_flowpath_length(self, dem: Raster) -> Raster: ...

def balance_contrast_enhancement(self, image: Raster, band_mean: float = 100.0) -> Raster: ...

def basins(self, d8_pntr: Raster, esri_pntr: bool = False) -> Raster: ...

def bilateral_filter(self, raster: Raster, sigma_dist: float = 0.75, sigma_int: float = 1.0) -> Raster: ...

def block_maximum(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Optional[Raster] = None) -> Raster: ...

def block_minimum(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Optional[Raster] = None) -> Raster: ...

def bool_and(self, input1: Raster, input2: Raster) -> Raster: ...

def bool_not(self, input1: Raster, input2: Raster) -> Raster: ...

def bool_or(self, input1: Raster, input2: Raster) -> Raster: ...

def bool_xor(self, input1: Raster, input2: Raster) -> Raster: ...

def boundary_shape_complexity(self, raster: Raster) -> Raster: ...

def breach_depressions_least_cost(self, dem: Raster, max_cost: float = float('inf'), max_dist: int = 100, flat_increment: float = float('nan'), fill_deps: bool = False, minimize_dist: bool = False) -> Raster: ...

def breach_single_cell_pits(self, dem: Raster) -> Raster: ...

# Requires WbW-Pro license
def breakline_mapping(self, dem: Raster, threshold: float = 0.8, min_length: int = 3) -> Vector: ...

def buffer_raster(self, input: Raster, buffer_size: float, grid_cells_units: bool = False) -> Raster: ...

def burn_streams_at_roads(self, dem: Raster, streams: Vector, roads: Vector, road_width: float) -> Raster: ...

# Requires WbW-Pro license
def canny_edge_detection(self, input: Raster, sigma: float = 0.5, low_threshold: float = 0.05, high_threshold: float = 0.15, add_back_to_image: bool = False) -> Raster: ...

def centroid_raster(self, input: Raster) -> Tuple[Raster, str]: ...

def centroid_vector(self, input: Vector) -> Vector: ...

def change_vector_analysis(self, date1_rasters: List[Raster], date2_rasters: List[Raster]) -> Tuple[Raster, Raster, str]: ...

def check_in_license(self, key: str) -> str: ...

def circular_variance_of_aspect(self, dem: Raster, filter_size: int = 11) -> Raster: ...

def classify_buildings_in_lidar(self, in_lidar: Lidar, building_footprints: Vector) -> Lidar: ...

# Requires WbW-Pro license
def classify_lidar(self, input_lidar: Optional[Lidar], search_radius: float = 2.5, grd_threshold: float = 0.1, oto_threshold: float = 1.0, linearity_threshold: float = 0.5, planarity_threshold: float = 0.85, num_iter: int = 30, facade_threshold: float = 0.5) -> Optional[Lidar]: ...

# Requires WbW-Pro license
def colourize_based_on_class(self, input_lidar: Optional[Lidar], intensity_blending_amount: float = 50.0, clr_str: str = "", use_unique_clrs_for_buildings: bool = False, search_radius: float = 2.0) -> Optional[Lidar]: ...

# Requires WbW-Pro license
def colourize_based_on_point_returns(self, input_lidar: Optional[Lidar], intensity_blending_amount: float = 50.0, only_ret_colour: str = "(230,214,170)", first_ret_colour: str = "(0,140,0)", intermediate_ret_colour: str = "(255,0,255)", last_ret_colour: str = "(0,0,255)") -> Optional[Lidar]: ...

def classify_overlap_points(self, in_lidar: Lidar, resolution: float = 1.0, overlap_criterion: str = "max scan angle", filter: bool = False) -> Lidar: ...

def clean_vector(self, input: Vector) -> Vector: ...

def clip(self, input: Vector, clip_layer: Vector) -> Vector: ...

def clip_lidar_to_polygon(self, input: Lidar, polygons: Vector) -> Lidar: ...

def clip_raster_to_polygon(self, raster: Raster, polygons: Vector, maintain_dimensions: bool = False) -> Raster: ...

def closing(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def clump(self, raster: Raster, diag: bool = False, zero_background: bool = False) -> Raster: ...

def compactness_ratio(self, input: Vector) -> Vector: ...

def conservative_smoothing_filter(self, raster: Raster, filter_size_x: int = 3, filter_size_y: int = 3) -> Raster: ...

def construct_vector_tin(self, input_points: Vector, field_name: str = "FID", use_z: bool = False, max_triangle_edge_length: float = float('inf')) -> Vector: ...

def contours_from_points(self, input: Vector, field_name: str = "", use_z_values: bool = False, max_triangle_edge_length: float = float('inf'), contour_interval: float = 10.0, base_contour: float = 0.0, smoothing_filter_size: int = 9) -> Vector: ...

def contours_from_raster(self, raster_surface: Raster, contour_interval: float = 10.0, base_contour: float = 0.0, smoothing_filter_size: int = 9, deflection_tolerance: float = 10.0) -> Vector: ...

def convert_nodata_to_zero(self, raster: Raster) -> Raster: ...

def corner_detection(self, raster: Raster) -> Raster: ...

def correct_vignetting(self, image: Raster, principal_point: Vector, focal_length: float = 304.8, image_width: float = 228.6, n_param: float = 4.0) -> Raster: ...

def cost_allocation(self, source: Raster, backlink: Raster) -> Raster: ...

def cost_distance(self, source: Raster, cost: Raster) -> Tuple[Raster, Raster]: ...

def cost_pathway(self, destination: Raster, backlink: Raster, zero_background: bool = False) -> Raster: ...

def count_if(self, input_rasters: List[Raster], comparison_value: float) -> Raster: ...

def create_colour_composite(self, red: Raster, green: Raster, blue: Raster, opacity: Optional[Raster] = None, enhance: bool = True, treat_zeros_as_nodata: bool = False) -> Raster: ...

def create_plane(self, base_file: Raster, gradient: float, aspect: float, constant: float) -> Raster: ...

def crispness_index(self, raster: Raster, output_html_file: str) -> None: ...

def cross_tabulation(self, raster1: Raster, raster2: Raster, output_html_file: str) -> None: ...

def csv_points_to_vector(self, input_file: str, x_field_num: int = 0, y_field_num: int = 1, epsg: int = 0) -> Vector: ...

def cumulative_distribution(self, raster: Raster) -> Raster: ...

# Requires WbW-Pro license
def curvedness(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def d8_flow_accum(self, raster: Raster, out_type: str = "SCA", log_transform: bool = False, clip: bool = False, input_is_pointer: bool = False, esri_pntr: bool = False) -> Raster: ...

def d8_mass_flux(self, dem: Raster, loading: Raster, efficiency: Raster, absorption: Raster) -> Raster: ...

def d8_pointer(self, dem: Raster, esri_pointer: bool = False) -> Raster: ...

# Requires WbW-Pro license
def dbscan(self, input_rasters: List[Raster], scaling_method: str = "none", search_distance: float = 1.0, min_points: int = 5) -> Raster: ...

# Requires WbW-Pro license
def dem_void_filling(self, dem: Raster, fill: Raster, mean_plane_dist: int = 20, edge_treatment: str = "use DEM", weight_value: float = 2.0) -> Raster: ...

def depth_in_sink(self, dem: Raster, zero_background: bool = False) -> Raster: ...

# Requires WbW-Pro license
def depth_to_water(self, dem: Raster, streams: Optional[Vector] = None, lakes: Optional[Vector] = None) -> Raster: ...

def deviation_from_mean_elevation(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def deviation_from_regional_direction(self, input: Vector, elongation_threshold: float = 0.75) -> Vector: ...

def diff_of_gaussians_filter(self, raster: Raster, sigma1: float = 2.0, sigma2: float = 4.0) -> Raster: ...

def difference(self, input: Vector, overlay: Vector) -> Vector: ...

# Requires WbW-Pro license
def difference_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def difference_from_mean_elevation(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def dinf_flow_accum(self, dem: Raster, out_type: str = "SCA", convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False, input_is_pointer: bool = False) -> Raster: ...

def dinf_mass_flux(self, dem: Raster, loading: Raster, efficiency: Raster, absorption: Raster) -> Raster: ...

def dinf_pointer(self, dem: Raster) -> Raster: ...

def direct_decorrelation_stretch(self, image: Raster, achromatic_factor: float = 0.5, clip_percent: float = 1.0) -> Raster: ...

def directional_relief(self, dem: Raster, azimuth: float = 0.0, max_dist: float = float('inf')) -> Raster: ...

def dissolve(self, input: Vector, dissolve_field: str = "", snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

def distance_to_outlet(self, d8_pointer: Raster, streams_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

def diversity_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def downslope_distance_to_stream(self, dem: Raster, streams: Raster, use_dinf: bool = False) -> Raster: ...

def downslope_flowpath_length(self, d8_pointer: Raster, watersheds: Raster, weights: Raster, esri_pntr: bool = False) -> Raster: ...

def downslope_index(self, dem: Raster, vertical_drop: float, output_type: str = "tangent") -> Raster: ...

def edge_contamination(self, dem: Raster, flow_type: str = "mfd", z_factor: float = -1.0) -> Raster: ...

def edge_density(self, dem: Raster, filter_size: int = 11, normal_diff_threshold: float = 5.0, z_factor: float = 1.0) -> Raster: ...

def edge_preserving_mean_filter(self, raster: Raster, filter_size: int = 11, threshold: float = 15.0) -> Raster: ...

def edge_proportion(self, raster: Raster) -> Tuple[Raster, str]: ...

def elev_relative_to_min_max(self, dem: Raster) -> Raster: ...

def elev_relative_to_watershed_min_max(self, dem: Raster, watersheds: Raster) -> Raster: ...

def elevation_above_pit(self, dem: Raster) -> Raster: ...

def elevation_above_stream(self, dem: Raster, streams: Raster) -> Raster: ...

def elevation_above_stream_euclidean(self, dem: Raster, streams: Raster) -> Raster: ...

def elevation_percentile(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sig_digits: int = 2) -> Raster: ...

def eliminate_coincident_points(self, input: Vector, tolerance_dist: float) -> Vector: ...

def elongation_ratio(self, input: Vector) -> Vector: ...

def embankment_mapping(self, dem: Raster, roads_vector: Vector, search_dist: float = 2.5, min_road_width: float = 6.0, typical_embankment_width: float = 30.0, typical_embankment_max_height: float = 2.0, embankment_max_width: float = 60.0, max_upwards_increment: float = 0.05, spillout_slope: float = 4.0, remove_embankments: bool = False) -> Tuple[Raster, Optional[Raster]]: ...

def emboss_filter(self, raster: Raster, direction: str = "n", clip_amount: float = 0.0) -> Raster: ...

def erase(self, input: Vector, erase_layer: Vector) -> Vector: ...

def erase_polygon_from_lidar(self, input: Lidar, polygons: Vector) -> Lidar: ...

def erase_polygon_from_raster(self, raster: Raster, polygons: Vector) -> Raster: ...

def euclidean_allocation(self, input: Raster) -> Raster: ...

def euclidean_distance(self, input: Raster) -> Raster: ...

# Requires WbW-Pro license
def evaluate_training_sites(self, input_rasters: List[Raster], training_polygons: Vector, class_field_name: str, output_html_file: str) -> None: ...

def export_table_to_csv(self, input: Vector, output_csv_file: str, headers: bool = True) -> None: ...

def exposure_towards_wind_flux(self, dem: Raster, azimuth: float = 0.0, max_dist: float = float('inf'), z_factor: float = 1.0) -> Raster: ...

def extend_vector_lines(self, input: Vector, distance: float, extend_direction: str = "both ends") -> Vector: ...

def extract_by_attribute(self, input: Vector, statement: str) -> Vector: ...

def extract_nodes(self, input: Vector) -> Vector: ...

def extract_raster_values_at_points(self, rasters: List[Raster], points: Vector) -> Tuple[Vector, str]: ...

def extract_streams(self, flow_accumulation: Raster, threshold: float = 0.0, zero_background: bool = False) -> Raster: ...

def extract_valleys(self, dem: Raster, variant: str = "LQ", line_thin: bool = False, filter_size: int = 5) -> Raster: ...

def farthest_channel_head(self, d8_pointer: Raster, streams_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

def fast_almost_gaussian_filter(self, raster: Raster, sigma: float = 1.8) -> Raster: ...

def fd8_flow_accum(self, dem: Raster, out_type: str = "SCA", exponent: float = 1.1, convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False) -> Raster: ...

def fd8_pointer(self, dem: Raster) -> Raster: ...

def feature_preserving_smoothing(self, dem: Raster, filter_size: int = 11, normal_diff_threshold: float = 8.0, iterations: int = 3, max_elevation_diff: float = float('inf'), z_factor: float = 1.0) -> Raster: ...

def fetch_analysis(self, dem: Raster, azimuth: float = 0.0, height_increment: float = 0.05) -> Raster: ...

def fill_burn(self, dem: Raster, streams: Vector) -> Raster: ...

def fill_depressions(self, dem: Raster, fix_flats: bool = True, flat_increment: float = float('nan'), max_depth: float = float('inf')) -> Raster: ...

def fill_depressions_planchon_and_darboux(self, dem: Raster, fix_flats: bool = True, flat_increment: float = float('nan')) -> Raster: ...

def fill_depressions_wang_and_liu(self, dem: Raster, fix_flats: bool = True, flat_increment: float = float('nan')) -> Raster: ...

def fill_missing_data(self, dem: Raster, filter_size: int = 11, weight: float = 2.0, exclude_edge_nodata: bool = False) -> Raster: ...

def fill_pits(self, dem: Raster) -> Raster: ...

# Requires WbW-Pro license
def filter_lidar(self, statement: str, input_lidar: Optional[Lidar]) -> Optional[Lidar]: ...

# Requires WbW-Pro license
def filter_lidar_by_percentile(self, input_lidar: Optional[Lidar],  percentile: float = 0.0, block_size: float = 1.0) -> Optional[Lidar]: ...

# Requires WbW-Pro license
def filter_lidar_by_reference_surface(self, input_lidar: Lidar, ref_surface: Raster, query: str = "within", threshold: float = 0.0) -> Lidar: ...

def filter_lidar_classes(self, input: Lidar, exclusion_classes: List[int]) -> Lidar: ...

def filter_lidar_scan_angles(self, in_lidar: Lidar, threshold: int) -> Lidar: ...

def filter_raster_features_by_area(self, input: Raster, threshold: int, zero_background: bool = False) -> Raster: ...

def find_flightline_edge_points(self, in_lidar: Lidar) -> Lidar: ...

def find_lowest_or_highest_points(self, raster: Raster, output_type: str = "lowest") -> Vector: ...

def find_main_stem(self, d8_pointer: Raster, streams_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

def find_noflow_cells(self, dem: Raster) -> Raster: ...

def find_parallel_flow(self, d8_pntr: Raster, streams: Raster) -> Raster: ...

def find_patch_edge_cells(self, raster: Raster) -> Raster: ...

def find_ridges(self, dem: Raster, line_thin: bool = True) -> Raster: ...

# Requires WbW-Pro license
def fix_dangling_arcs(self, input: Vector, snap_dist: float) -> Vector: ...

def flatten_lakes(self, dem: Raster, lakes: Vector) -> Raster: ...

def flightline_overlap(self, input_lidar: Lidar, resolution: float = 1.0) -> Raster: ...

def flip_image(self, raster: Raster, direction: str = "vertical") -> Raster: ...

def flood_order(self, dem: Raster) -> Raster: ...

def flow_accum_full_workflow(self, dem: Raster, out_type: str = "SCA", log_transform: bool = False, clip: bool = False, esri_pntr: bool = False) -> Tuple[Raster, Raster, Raster]: ...

def flow_length_diff(self, d8_pointer: Raster, esri_pointer: bool = False, log_transform: bool = False) -> Raster: ...

def gamma_correction(self, raster: Raster, gamma_value: float = 0.5) -> Raster: ...

def gaussian_contrast_stretch(self, raster: Raster, num_tones: int = 256) -> Raster: ...

def gaussian_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def gaussian_filter(self, raster: Raster, sigma: float = 0.75) -> Raster: ...

def geomorphons(self, dem: Raster, search_distance: int = 50, flatness_threshold: float = 0.0, flatness_distance: int = 0, skip_distance: int = 0, output_forms: bool = True, analyze_residuals: bool = False) -> Raster: ...

# Requires WbW-Pro license
def generalize_classified_raster(self, raster: Raster, area_threshold: int = 5, method: str = "longest") -> Raster: ...

# Requires WbW-Pro license
def generalize_with_similarity(self, raster: Raster, similarity_rasters: List[Raster], area_threshold: int = 5) -> Raster: ...

# Requires WbW-Pro license
def generating_function(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def hack_stream_order(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

def heat_map(self, points: Vector, field_name: Optional[str] = None, bandwidth: float = 0.0, cell_size: float = 0.0, base_raster: Optional[Raster] = None, kernel_function: str = "quartic") -> Raster: ...

def height_above_ground(self, input: Lidar) -> Lidar: ...

def hexagonal_grid_from_raster_base(self, base: Raster, width: float, orientation: str = "h") -> Vector: ...

def hexagonal_grid_from_vector_base(self, base: Vector, width: float, orientation: str = "h") -> Vector: ...

def high_pass_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def high_pass_median_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sig_digits: int = 2) -> Raster: ...

def highest_position(self, input_rasters: List[Raster]) -> Raster: ...

def hillshade(self, dem: Raster, azimuth: float = 315.0, altitude: float = 30.0, z_factor: float = 1.0) -> Raster: ...

def hillslopes(self, d8_pntr: Raster, streams: Raster, esri_pntr: bool = False) -> Raster: ...

def histogram_equalization(self, raster: Raster, num_tones: int = 256) -> Raster: ...

def histogram_matching(self, image: Raster, histogram: List[List[float]], histo_is_cumulative: bool = False) -> Raster: ...

def histogram_matching_two_images(self, image1: Raster, image2: Raster) -> Raster: ...

def hole_proportion(self, input: Vector) -> Vector: ...

def horizon_angle(self, dem: Raster, azimuth: float = 0.0, max_dist: float = float('inf')) -> Raster: ...

# Requires WbW-Pro license
def horizon_area(self, dem: Raster, az_fraction: float = 5.0, max_dist: float = float('inf'), observer_hgt_offset: float = 0.05) -> Raster: ...

# Requires WbW-Pro license
def horizontal_excess_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def horton_ratios(self, dem: Raster, streams_raster: Raster) -> Tuple[float, float, float, float]: ...

def horton_stream_order(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

# Requires WbW-Pro license
def hydrologic_connectivity(self, dem: Raster, exponent: float = 1.1, convergence_threshold: float = 0.0, z_factor: float = 1.0) -> Tuple[Raster, Raster]: ...

def hypsometric_analysis(self, dem_rasters: List[Raster], output_html_file: str, watershed_rasters: Optional[List[Raster]] = None) -> None: ...

def hypsometrically_tinted_hillshade(self, dem: Raster, solar_altitude: float = 45.0, hillshade_weight: float = 0.5, brightness: float = 0.5, atmospheric_effects: float = 0.0, palette: WbPalette = WbPalette.Atlas, reverse_palette: bool = False, full_360_mode: bool = False, z_factor: float = 1.0) -> Raster: ...

def idw_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, weight: float = 2.0, radius: float = 0.0, min_points: int = 0, cell_size: float = 0.0, base_raster: Optional[Raster] = None) -> Raster: ...

def ihs_to_rgb(self, intensity: Raster, hue: Raster, saturation: Raster) -> Tuple[Raster, Raster, Raster]: ...

def image_autocorrelation(self, rasters: List[Raster], output_html_file: str, contiguity_type: str = "bishop") -> None: ...

def image_correlation(self, rasters: List[Raster], output_html_file: str) -> None: ...

def image_correlation_neighbourhood_analysis(self, raster1: Raster, raster2: Raster, filter_size: int = 11, correlation_stat: str = "pearson") -> Tuple[Raster, Raster]: ...

def image_regression(self, independent_variable: Raster, dependent_variable: Raster, output_html_file: str, standardize_residuals: bool = False, output_scattergram: bool = False, num_samples: int = 1000) -> Raster: ...

# Requires WbW-Pro license
def image_segmentation(self, input_rasters: List[Raster], dist_threshold: float = 0.5, num_steps: int = 10, area_threshold: int = 4) -> Raster: ...

# Requires WbW-Pro license
def image_slider(self, left_raster: Raster, right_raster: Raster, output_html_file: str, left_palette: WbPalette = WbPalette.Grey, left_reverse_palette: bool = False, left_label: str = "",  right_palette: WbPalette = WbPalette.Grey, right_reverse_palette: bool = False, right_label: str = "", image_height: int = 600) -> None: ...

def image_stack_profile(self, images: List[Raster], points: Vector, output_html_file: str) -> None: ...

def impoundment_size_index(self, dem: Raster, max_dam_length: float, output_mean: bool = False, output_max: bool = False, output_volume: bool = False, output_area: bool = False, output_height: bool = False) -> Tuple[Optional[Raster], Optional[Raster], Optional[Raster], Optional[Raster], Optional[Raster]]: ...

# Requires WbW-Pro license
def improved_ground_point_filter(self, input: Lidar, block_size: float = 1.0, max_building_size: float = 150.0, slope_threshold: float = 15.0, elev_threshold: float = 0.15) -> Lidar: ...

def individual_tree_detection(self, input_lidar: Optional[Lidar],  min_search_radius: float = 1.0, min_height: float = 0.0, max_search_radius: Optional[float] = None, max_height: Optional[float] = None, only_use_veg: bool = False) -> Optional[Vector]: ...

def insert_dams(self, dem: Raster, dam_points: Vector, dam_length: float) -> Raster: ...

def isobasins(self, dem: Raster, target_size: float, connections: bool = False, csv_file: str = "") -> Raster: ...

def integral_image_transform(self, raster: Raster) -> Raster: ...

def intersect(self, input: Vector, overlay: Vector, snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

# Requires WbW-Pro license
def inverse_pca(self, rasters: List[Raster], pca_report_file: str) -> List[Raster]: ...

def jenson_snap_pour_points(self, pour_pts: Vector, streams: Raster, snap_dist: float = 0.0) -> Vector: ...

def join_tables(self, primary_vector: Vector, primary_key_field: str, foreign_vector: Vector, foreign_key_field: str, import_field: str = "") -> None: ...

def k_means_clustering(self, input_rasters: List[Raster], output_html_file: str = "", num_clusters: int = 5, max_iterations: int = 10, percent_changed_threshold: float = 2.0, initialization_mode: str = "diagonal", min_class_size: int = 10) -> Raster: ...

def k_nearest_mean_filter(self, raster: Raster, filter_size_x: int = 3, filter_size_y: int = 3, k: int = 5) -> Raster: ...

def kappa_index(self, class_raster: Raster, reference_raster: Raster, output_html_file: str = "") -> None: ...

# Requires WbW-Pro license
def knn_classification(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, scaling_method: str = "none", k: int = 5, test_proportion: float = 0.2, use_clipping: bool = False, create_output: bool = False) -> Optional[Raster]: ...

# Requires WbW-Pro license
def knn_regression(self, input_rasters: List[Raster], training_data: Vector, field_name: str, scaling_method: str = "none", k: int = 5, distance_weighting: bool = False, test_proportion: float = 0.2, create_output: bool = False) -> Optional[Raster]: ...

def ks_normality_test(self, raster: Raster, output_html_file: str, num_samples: int) -> None: ...

def laplacian_filter(self, raster: Raster, variant: str = "3x3(1)", clip_amount: float = 0.0) -> Raster: ...

def laplacian_of_gaussians_filter(self, raster: Raster, sigma: float = 0.75) -> Raster: ...

def las_to_ascii(self, input_lidar: Optional[Lidar]) -> None: ...

def las_to_shapefile(self, input_lidar: Optional[Lidar], output_multipoint: bool = False) -> Vector: ...

def layer_footprint_raster(self, input: Raster) -> Vector: ...

def layer_footprint_vector(self, input: Vector) -> Vector: ...

def lee_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sigma: float = 10.0, m_value: float = 5.0) -> Raster: ...

def length_of_upstream_channels(self, d8_pointer: Raster, streams_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

def license_type(self) -> LicenseType: ...

def lidar_block_maximum(self, input_lidar: Optional[Lidar], cell_size: float = 1.0) -> Raster: ...

def lidar_block_minimum(self, input_lidar: Optional[Lidar], cell_size: float = 1.0) -> Raster: ...

def lidar_classify_subset(self, base_lidar: Lidar, subset_lidar: Lidar, subset_class_value: int, nonsubset_class_value: int) -> Lidar: ...

def lidar_colourize(self, in_lidar: Lidar, in_image: Raster) -> Lidar: ...

def lidar_construct_vector_tin(self, input_lidar: Optional[Lidar], returns_included: str = "all", excluded_classes: Optional[List[int]] = None, min_elev: float = float('-inf'), max_elev: float = float('inf'), max_triangle_edge_length: float = float('inf')) -> Vector: ...

# Requires WbW-Pro license
def lidar_contour(self, input_lidar: Optional[Lidar], contour_interval: float = 10.0, base_contour: float = 0.0, smooth: int = 5, interpolation_parameter: str = "elevation", returns_included: str = "all",  excluded_classes: Optional[List[int]] = None, min_elev: float = float('-inf'), max_elev: float = float('inf'), tile_overlap: float = 0.0, max_triangle_edge_length: float = float('inf')) -> Optional[Vector]: ...

def lidar_digital_surface_model(self, input_lidar: Optional[Lidar], cell_size: float = 1.0, search_radius: float = 0.5, min_elev: float = float('-inf'), max_elev: float = float('inf'), max_triangle_edge_length: float = float('inf')) -> Raster: ...

# Requires WbW-Pro license
def lidar_eigenvalue_features(self, input_lidar: Optional[Lidar], num_neighbours: Optional[int], search_radius: Optional[float]) -> None: ...

def lidar_elevation_slice(self, input: Lidar, minz: float = float('-inf'), maxz: float = float('inf'), classify: bool = False, in_class_value: int = 2, out_class_value: int = 1) -> Lidar: ...

def lidar_ground_point_filter(self, input_lidar: Optional[Lidar], search_radius: float = 2.0, min_neighbours: int = 0, slope_threshold: float = 45.0, height_threshold: float = 1.0, classify: bool = False, slope_norm: bool = True, height_above_ground: bool = False) -> Lidar: ...

def lidar_hex_bin(self, input_lidar: Lidar, width: float, orientation: str = "h") -> Vector: ...

def lidar_hillshade(self, input: Lidar, search_radius: float = -1.0, azimuth: float = 315.0, altitude: float = 30.0) -> Lidar: ...

def lidar_histogram(self, input_lidar: Lidar, output_html_file: str, parameter: str = "elevation", clip_percent: float = 1.0) -> None: ...

def lidar_idw_interpolation(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation",  returns_included: str = "all", cell_size: float = 1.0,  idw_weight: float = 1.0, search_radius: float = 2.5, excluded_classes: Optional[List[int]] = None, min_elev: float = float('-inf'), max_elev: float = float('inf')) -> Raster: ...

def lidar_info(self, input_lidar: Lidar, output_html_file: Optional[str], show_point_density: bool = True, show_vlrs: bool = True, show_geokeys: bool = True) -> str: ...

def lidar_join(self, inputs: List[Lidar]) -> Lidar: ...

def lidar_kappa(self, input_lidar1: Lidar, input_lidar2: Lidar, output_html_file: str, cell_size: float = 1.0, output_class_accuracy: bool = False) -> Raster: ...

def lidar_nearest_neighbour_gridding(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation", returns_included: str = "all", cell_size: float = 1.0, search_radius: float = 2.5, excluded_classes: Optional[List[int]] = None, min_elev: float = float('-inf'), max_elev: float = float('inf')) -> Raster: ...

def lidar_point_density(self, input_lidar: Optional[Lidar], returns_included: str = "all", cell_size: float = 1.0, search_radius: float = 2.5, excluded_classes: Optional[List[int]] = None, min_elev: float = float('-inf'), max_elev: float = float('inf')) -> Raster: ...

# Requires WbW-Pro license
def lidar_point_return_analysis(self, input: Lidar, create_output: bool = False) -> Optional[Lidar]: ...

def lidar_point_stats(self, input_lidar: Optional[Lidar], cell_size: float = 1.0, num_points: bool = False, num_pulses: bool = False, avg_points_per_pulse: bool = False, z_range: bool = False, intensity_range: bool = False, predominant_class: bool = False): ...

def lidar_radial_basis_function_interpolation(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation", returns_included: str = "all", cell_size: float = 1.0, num_points: int = 15, excluded_classes: Optional[List[int]] = None, min_elev: float = float('-inf'), max_elev: float = float('inf'), func_type: str = "ThinPlateSpline", poly_order: str = "none", weight: float = 0.1) -> Raster: ...

def lidar_ransac_planes(self, in_lidar: Lidar, search_radius: float = 2.0, num_iterations: int = 50, num_samples: int = 10, inlier_threshold: float = 0.15, acceptable_model_size: int = 30, max_planar_slope: float = 75.0, classify: bool = False, only_last_returns: bool = False) -> Lidar: ...

def lidar_remove_outliers(self, input: Lidar, search_radius: float = 2.0, elev_diff: float = 50.0, use_median: bool = False, classify: bool = False) -> Lidar: ...

def lidar_rooftop_analysis(self, lidar_inputs: List[Lidar], building_footprints: Vector, search_radius: float = 2.0, num_iterations: int = 50, num_samples: int = 10, inlier_threshold: float = 0.15, acceptable_model_size: int = 30, max_planar_slope: float = 75.0, norm_diff_threshold: float = 2.0, azimuth: float = 180.0, altitude: float = 30.0) -> Vector: ...

def lidar_segmentation(self, in_lidar: Lidar, search_radius: float = 2.0, num_iterations: int = 50, num_samples: int = 10, inlier_threshold: float = 0.15, acceptable_model_size: int = 30, max_planar_slope: float = 75.0, norm_diff_threshold: float = 2.0, max_z_diff: float = 1.0, classes: bool = False, ground: bool = False) -> Lidar: ...

def lidar_segmentation_based_filter(self, in_lidar: Lidar, search_radius: float = 5.0, norm_diff_threshold: float = 2.0, max_z_diff: float = 1.0, classify_points: bool = False) -> Lidar: ...

def lidar_shift(self, input: Lidar, x_shift: float = 0.0, y_shift: float = 0.0, z_shift: float = 0.0) -> Lidar: ...

# Requires WbW-Pro license
def lidar_sibson_interpolation(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation", resolution: float = 1.0, returns_included: str = "all", excluded_classes: Optional[List[int]] = None, min_elev: float = float('-inf'), max_elev: float = float('inf')) -> Optional[Raster]: ...

def lidar_thin(self, input: Lidar, resolution: float = 1.0, selection_method: str = "first", save_filtered: bool = False) -> Tuple[Lidar, Optional[Lidar]]: ...

def lidar_thin_high_density(self, input: Lidar, density: float, resolution: float = 1.0, save_filtered: bool = False) -> Tuple[Lidar, Optional[Lidar]]: ...

def lidar_tile(self, input_lidar: Lidar, tile_width: float = 1000.0, tile_height: float = 1000.0, origin_x: float = 0.0, origin_y: float = 0.0, min_points_in_tile: int = 2, output_laz_format: bool = True) -> None: ...

def lidar_tile_footprint(self, input_lidar: Optional[Lidar], output_hulls: bool = False) -> Vector: ...

def lidar_tin_gridding(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation", returns_included: str = "all", cell_size: float = 1.0, excluded_classes: Optional[List[int]] = None, min_elev: float = float('-inf'), max_elev: float = float('inf'), max_triangle_edge_length: float = float('inf')) -> Raster: ...

def lidar_tophat_transform(self, input: Lidar, search_radius: float) -> Lidar: ...

def line_detection_filter(self, raster: Raster, variant: str = "vertical", abs_values: bool = False, clip_tails: float = 0.0) -> Raster: ...

def line_intersections(self, input1: Vector, input2: Vector) -> Vector: ...

def line_thinning(self, raster: Raster) -> Raster: ...

def linearity_index(self, input: Vector) -> Vector: ...

def lines_to_polygons(self, input: Vector) -> Vector: ...

def list_unique_values(self, input: Vector, field_name: str) -> List[Tuple[str, int]]: ...

def list_unique_values_raster(self, raster: Raster) -> str: ...

# Requires WbW-Pro license
def local_hypsometric_analysis(self, dem: Raster, min_scale: int = 4, step_size: int = 1,  num_steps: int = 10, step_nonlinearity: float = 1.0) -> Tuple[Raster, Raster]: ...

# Requires WbW-Pro license
def logistic_regression(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, scaling_method: str = "none", test_proportion: float = 0.2, create_output: bool = False) -> Optional[Raster]: ...

def long_profile(self, d8_pointer: Raster, streams_raster: Raster, dem: Raster, output_html_file: str, esri_pointer: bool = False) -> None: ...

def long_profile_from_points(self, d8_pointer: Raster, points: Vector, dem: Raster, output_html_file: str, esri_pointer: bool = False) -> None: ...

def longest_flowpath(self, dem: Raster, basins: Raster) -> Vector: ...

def lowest_position(self, input_rasters: List[Raster]) -> Raster: ...

# Requires WbW-Pro license
def low_points_on_headwater_divides(self, dem: Raster, streams: Raster) -> Vector: ...

def majority_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def map_off_terrain_objects(self, dem: Raster, max_slope: float = float('inf'), min_feature_size: int = 0) -> Raster: ...

def max_absolute_overlay(self, input_rasters: List[Raster]) -> Raster: ...

def max_anisotropy_dev(self, dem: Raster, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> Tuple[Raster, Raster]: ...

def max_anisotropy_dev_signature(self, dem: Raster, points: Vector, output_html_file: str, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> None: ...

def max_branch_length(self, dem: Raster, log_transform: bool = False) -> Raster: ...

def max_difference_from_mean(self, dem: Raster, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> Tuple[Raster, Raster]: ...

def max_downslope_elev_change(self, raster: Raster) -> Raster: ...

def max_elevation_dev_signature(self, dem: Raster, points: Vector, output_html_file: str, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> None: ...

def max_elevation_deviation(self, dem: Raster, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> Tuple[Raster, Raster]: ...

def max_overlay(self, input_rasters: List[Raster]) -> Raster: ...

def max_upslope_elev_change(self, raster: Raster) -> Raster: ...

def max_upslope_flowpath_length(self, dem: Raster) -> Raster: ...

def max_upslope_value(self, dem: Raster, values_raster: Raster) -> Raster: ...

def maximal_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def maximum_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def mdinf_flow_accum(self, dem: Raster, out_type: str = "SCA", exponent: float = 1.1, convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False) -> Raster: ...

def mean_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def mean_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def median_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sig_digits: int = 2) -> Raster: ...

def medoid(self, input: Vector) -> Vector: ...

def merge_line_segments(self, input: Vector, snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

def merge_table_with_csv(self, primary_vector: Vector, primary_key_field: str, foreign_csv_filename: str, foreign_key_field: str, import_field: str = "") -> None: ...

def merge_vectors(self, input_vectors: List[Vector]) -> Vector: ...

def min_absolute_overlay(self, input_rasters: List[Raster]) -> Raster: ...

# Requires WbW-Pro license
def min_dist_classification(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, dist_threshold: float = float('inf')) -> Raster: ...

def min_downslope_elev_change(self, raster: Raster) -> Raster: ...

def min_max_contrast_stretch(self, raster: Raster, min_val: float, max_val: float, num_tones: int = 256) -> Raster: ...

def min_overlay(self, input_rasters: List[Raster]) -> Raster: ...

def minimal_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def minimum_bounding_box(self, input: Vector, min_criteria: str = "area", individual_feature_hulls: bool = True) -> Vector: ...

def minimum_bounding_circle(self, input: Vector, individual_feature_hulls: bool = True) -> Vector: ...

def minimum_bounding_envelope(self, input: Vector, individual_feature_hulls: bool = True) -> Vector: ...

def minimum_convex_hull(self, input: Vector, individual_feature_hulls: bool = True) -> Vector: ...

def minimum_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def modified_k_means_clustering(self, input_rasters: List[Raster], output_html_file: str = "", num_start_clusters: int = 1000, merge_distance: float = 1.0, max_iterations: int = 10, percent_changed_threshold: float = 2.0) -> Raster: ...

# Requires WbW-Pro license
def modify_lidar(self, statement: str, input_lidar: Optional[Lidar]) -> Optional[Lidar]: ...

def modify_nodata_value(self, input: Raster, new_value: float = -32768.0): ...

def mosaic(self, images: List[Raster], resampling_method: str = "cc") -> Raster: ...

def mosaic_with_feathering(self, image1: Raster, image2: Raster, resampling_method: str = "cc", distance_weight: float = 4.0) -> Raster: ...

def multidirectional_hillshade(self, dem: Raster, altitude: float = 30.0, z_factor: float = 1.0, full_360_mode: bool = False) -> Raster: ...

def multipart_to_singlepart(self, input: Vector, exclude_holes: bool = False) -> Vector: ...

def multiply_overlay(self, input_rasters: List[Raster]) -> Raster: ...

# Requires WbW-Pro license
def multiscale_curvatures(self, dem: Raster, curv_type: str = "profile", min_scale: int = 4, step_size: int = 1, num_steps: int = 10, step_nonlinearity: float = 1.0, log_transform: bool = True, standardize: bool = False) -> Tuple[Raster, Raster]: ...

def multiscale_elevation_percentile(self, dem: Raster, num_significant_digits: int = 3, min_scale: int = 4, step_size: int = 1, num_steps: int = 10, step_nonlinearity: float = 1.0) -> Tuple[Raster, Raster]: ...

def multiscale_roughness(self, dem: Raster, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> Tuple[Raster, Raster]: ...

def multiscale_roughness_signature(self, dem: Raster, points: Vector, output_html_file: str, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> None: ...

def multiscale_std_dev_normals(self, dem: Raster, min_scale: int = 4, step_size: int = 1, num_steps: int = 10, step_nonlinearity: float = 1.0, html_signature_file: str = "") -> Tuple[Raster, Raster]: ...

def multiscale_std_dev_normals_signature(self, dem: Raster, points: Vector, output_html_file: str, min_scale: int = 4, step_size: int = 1, num_steps: int = 10, step_nonlinearity: float = 1.0) -> None: ...

def multiscale_topographic_position_image(self, local: Raster, meso: Raster, broad: Raster, hillshade: Optional[Raster] = None, lightness: float = 1.2) -> Raster: ...

def narrowness_index(self, raster: Raster) -> Raster: ...

def natural_neighbour_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Optional[Raster] = None, clip_to_hull: bool = True) -> Raster: ...

def nearest_neighbour_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Optional[Raster] = None, max_dist: float = float('inf')) -> Raster: ...

def new_raster_from_base_raster(self, base: Raster, out_val: float = float('nan'), data_type: str = "float") -> Raster: ...

def new_raster_from_base_vector(self, base: Vector, cell_size: float, out_val: float = float('nan'), data_type: str = "float") -> Raster: ...

# Requires WbW-Pro license
def nibble(self, input_raster: Raster, mask: Raster, use_nodata: bool = False, nibble_nodata: bool = True) -> Raster: ...

def normal_vectors(self, input: Lidar, search_radius: float = -1.0) -> Lidar: ...

def normalized_difference_index(self, nir_image: Raster, red_image: Raster, clip_percent: float = 0.0, correction_value: float = 0.0) -> Raster: ...

def normalize_lidar(self, input_lidar: Lidar, dtm: Raster) -> Lidar: ...

def num_downslope_neighbours(self, dem: Raster) -> Raster: ...

def num_inflowing_neighbours(self, dem: Raster) -> Raster: ...

def olympic_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def opening(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

# Requires WbW-Pro license
def openness(self, dem: Raster, dist: int = 20) -> Tuple[Raster, Raster]: ...

def otsu_thresholding(self, raster: Raster) -> Raster: ...

def paired_sample_t_test(self, raster1: Raster, raster2: Raster, output_html_file: str, num_samples: int) -> None: ...

def panchromatic_sharpening(self, pan: Raster, colour_composite: Raster, red: Raster, green: Raster, blue: Raster, fusion_method: str = "brovey") -> Raster: ...

# Requires WbW-Pro license
def parallelepiped_classification(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str) -> Raster: ...

def patch_orientation(self, input: Vector) -> Vector: ...

def pennock_landform_classification(self, dem: Raster, slope_threshold: float = 3.0, prof_curv_threshold: float = 0.1, plan_curv_threshold: float = 0.0, z_factor: float = 1.0) -> Tuple[Raster, str]: ...

def percent_elev_range(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def percent_equal_to(self, input_rasters: List[Raster], comparison: Raster) -> Raster: ...

def percent_greater_than(self, input_rasters: List[Raster], comparison: Raster) -> Raster: ...

def percent_less_than(self, input_rasters: List[Raster], comparison: Raster) -> Raster: ...

def percentage_contrast_stretch(self, raster: Raster, clip: float = 1.0, tail: str = "both", num_tones: int = 256) -> Raster: ...

def percentile_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sig_digits: int = 2) -> Raster: ...

def perimeter_area_ratio(self, input: Vector) -> Vector: ...

def pick_from_list(self, input_rasters: List[Raster], pos_input: Raster) -> Raster: ...

# Requires WbW-Pro license
def piecewise_contrast_stretch(self, raster: Raster, transformation_statement: str, num_greytones: float = 1024.0) -> Raster: ...

# Requires WbW-Pro license
def phi_coefficient(self, raster1: Raster, raster2: Raster, output_html_file: str) -> None: ...

def plan_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def polygon_area(self, input: Vector) -> Vector: ...

def polygon_long_axis(self, input: Vector) -> Vector: ...

def polygon_perimeter(self, input: Vector) -> Vector: ...

def polygon_short_axis(self, input: Vector) -> Vector: ...

def polygonize(self, input_layers: List[Vector]) -> Vector: ...

def polygons_to_lines(self, input: Vector) -> Vector: ...

def prewitt_filter(self, raster: Raster, clip_tails: float = 0.0) -> Raster: ...

def principal_component_analysis(self, rasters: List[Raster], output_html_file: str, num_components: int = 2, standardized: bool = False) -> List[Raster]: ...

def print_geotiff_tags(self, file_name: str): ...

def profile(self, lines_vector: Vector, surface: Raster, output_html_file: str) -> None: ...

def profile_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

# Requires WbW-Pro license
def prune_vector_streams(self, streams: Vector, dem: Raster, threshold: float, snap_distance: float = 0.001) -> Vector: ...

def qin_flow_accumulation(self, dem: Raster, out_type: str = "SCA", exponent: float = 10.0, max_slope: float = 45.0, convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False) -> Raster: ...

def quantiles(self, raster: Raster, num_quantiles: int = 5) -> Raster: ...

def quinn_flow_accumulation(self, dem: Raster, out_type: str = "SCA", exponent: float = 1.1, convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False) -> Raster: ...

def radial_basis_function_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, radius: float = 0.0, min_points: int = 0, cell_size: float = 0.0, base_raster: Optional[Raster] = None, func_type: str = "ThinPlateSpline", poly_order: str = "none", weight: float = 0.1) -> Raster: ...

def radius_of_gyration(self, raster: Raster) -> Tuple[Raster, str]: ...

def raise_walls(self, dem: Raster, walls: Vector, breach_lines: Vector, wall_height: float = 100.0) -> Raster: ...

def random_field(self, base_raster: Optional[Raster] = None) -> Raster: ...

# Requires WbW-Pro license
def random_forest_classification_fit(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, split_criterion: str = "Gini", n_trees: int = 200,  min_samples_leaf: int = 1, min_samples_split: int = 2, test_proportion: float = 0.2) -> List[int]: ...

# Requires WbW-Pro license
def random_forest_classification_predict(self, input_rasters: List[Raster], model_bytes: List[int]) -> Raster: ...

# Requires WbW-Pro license
def random_forest_regression_fit(self, input_rasters: List[Raster], training_data: Vector, field_name: str, n_trees: int = 200,  min_samples_leaf: int = 1, min_samples_split: int = 2, test_proportion: float = 0.2) -> List[int]: ...

# Requires WbW-Pro license
def random_forest_regression_predict(self, input_rasters: List[Raster], model_bytes: List[int]) -> Raster: ...

def random_sample(self, base_raster: Optional[Raster] = None, num_samples: int = 1000) -> Raster: ...

def range_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def raster_area(self, raster: Raster, units: str = "map units", zero_background: bool = False) -> Tuple[Raster, str]: ...

def raster_calculator(self, expression: str, input_rasters: List[Raster]) -> Raster: ...

def raster_cell_assignment(self, raster: Raster, what_to_assign: str = "column") -> Raster: ...

def raster_histogram(self, raster: Raster, output_html_file: str, num_bins: Optional[int] = None) -> None: ...

def raster_perimeter(self, raster: Raster, units: str = "map units", zero_background: bool = False) -> Tuple[Raster, str]: ...

def raster_streams_to_vector(self, streams: Raster, d8_pointer: Raster, esri_pointer: bool = False, all_vertices: bool = False) -> Vector: ...

def raster_summary_stats(self, input: Raster) -> str: ...

def raster_to_vector_lines(self, raster: Raster) -> Vector: ...

def raster_to_vector_points(self, raster: Raster) -> Vector: ...

def raster_to_vector_polygons(self, raster: Raster) -> Vector: ...

def rasterize_streams(self, streams: Vector, base_raster: Optional[Raster] = None, zero_background: bool = False, use_feature_id: bool = False) -> Raster: ...

def reciprocal(self, raster: Raster) -> Raster: ...

def reclass(self, raster: Raster, reclass_values: List[List[float]], assign_mode: bool = False) -> Raster: ...

def reclass_equal_interval(self, raster: Raster, interval_size: float, start_value: float = float('-inf'), end_value: float = float('inf')) -> Raster: ...

# Requires WbW-Pro license
def reconcile_multiple_headers(self, input: Vector, region_field_name: str, yield_field_name: str, radius: float, min_yield: float = float('-inf'),  max_yield: float = float('inf'), mean_tonnage: float = float('-inf')) -> Vector: ...

# Requires WbW-Pro license
def recover_flightline_info(self, input: Lidar, max_time_diff: float = 5.0, pt_src_id: bool = False, user_data: bool = False, rgb: bool = False) -> Lidar: ...

# Requires WbW-Pro license
def recreate_pass_lines(self, input: Vector, yield_field_name: str, max_change_in_heading: float = 25.0, ignore_zeros: bool = False) -> Tuple[Vector, Vector]: ...

def rectangular_grid_from_raster_base(self, base: Raster, width: float, height: float, x_origin: float = 0.0, y_origin: float = 0.0) -> Vector: ...

def rectangular_grid_from_vector_base(self, base: Vector, width: float, height: float, x_origin: float = 0.0, y_origin: float = 0.0) -> Vector: ...

def reinitialize_attribute_table(self, input: Vector) -> None: ...

def related_circumscribing_circle(self, input: Vector) -> Vector: ...

def relative_aspect(self, dem: Raster, azimuth: float = 0.0, z_factor: float = 1.0) -> Raster: ...

def relative_stream_power_index(self, specific_catchment_area: Raster, slope: Raster, exponent: float = 1.0) -> Raster: ...

def relative_topographic_position(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def remove_duplicates(self, input: Lidar, include_z: bool = False) -> Lidar: ...

# Requires WbW-Pro license
def remove_field_edge_points(self, input: Vector, radius: float,  max_change_in_heading: float = 25.0, flag_edges: bool = False) -> Vector: ...

def remove_off_terrain_objects(self, dem: Raster, filter_size: int = 11, slope_threshold: float = 15.0) -> Raster: ...

def remove_polygon_holes(self, input: Vector) -> Vector: ...

# Requires WbW-Pro license
def remove_raster_polygon_holes(self, input: Raster, threshold_size: int = sys.maxsize, use_diagonals: bool = False) -> Raster: ...

def remove_short_streams(self, d8_pntr: Raster, streams_raster: Raster, min_length: float = 0.0, esri_pntr: bool = False) -> Raster: ...

def remove_spurs(self, raster: Raster, max_iterations: int = 10) -> Raster: ...

def repair_stream_vector_topology(self, input: Vector, snap_dist: float) -> Vector: ...

def resample(self, input_rasters: List[Raster], cell_size: float = 0.0, base_raster: Optional[Raster] = None, method: str = "cc") -> Raster: ...

def rescale_value_range(self, raster: Raster, out_min_val: float, out_max_val: float, clip_min: float = float('inf'), clip_max: float = float('-inf')) -> Raster: ...

def rgb_to_ihs(self, red: Optional[Raster] = None, green: Optional[Raster] = None, blue: Optional[Raster] = None, composite: Optional[Raster] = None) -> Tuple[Raster, Raster, Raster]: ...

def rho8_flow_accum(self, raster: Raster, out_type: str = "SCA", log_transform: bool = False, clip: bool = False, input_is_pointer: bool = False, esri_pntr: bool = False) -> Raster: ...

def rho8_pointer(self, dem: Raster, esri_pntr: bool = False) -> Raster: ...

# Requires WbW-Pro license
def ridge_and_valley_vectors(self, dem: Raster, filter_size: int = 11, ep_threshold: float = 30.0, slope_threshold: float = 0.0, min_length: int = 20) -> Tuple[Vector, Vector]: ...

# Requires WbW-Pro license
def ring_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

# Requires WbW-Pro license
def river_centerlines(self, raster: Raster, min_length: int = 3, search_radius: int = 9) -> Vector: ...

def roberts_cross_filter(self, raster: Raster, clip_amount: float = 0.0) -> Raster: ...

def root_mean_square_error(self, input: Raster, reference: Raster) -> str: ...

# Requires WbW-Pro license
def rotor(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def ruggedness_index(self, input: Raster) -> Raster: ...

def scharr_filter(self, raster: Raster, clip_tails: float = 0.0) -> Raster: ...

def sediment_transport_index(self, specific_catchment_area: Raster, slope: Raster, sca_exponent: float = 0.4, slope_exponent: float = 1.3) -> Raster: ...

def select_tiles_by_polygon(self, input_directory: str, output_directory: str, polygons: Vector) -> None: ...

def set_nodata_value(self, raster: Raster, back_value: float = 0.0) -> Raster: ...

# Requires WbW-Pro license
def shadow_animation(self, dem: Raster, output_html_file: str,  palette: WbPalette = WbPalette.Soft, max_dist: float = float('inf'), date: str = "21/06/2021", time_interval: int = 30, location: str = "43.5448/-80.2482/-4", image_height: int = 600, delay: int = 250, label: str = "") -> None: ...

# Requires WbW-Pro license
def shadow_image(self, dem: Raster, palette: WbPalette = WbPalette.Soft, max_dist: float = float('inf'), date: str = "21/06/2021", time: str = "13:00", location: str = "43.5448/-80.2482/-4") -> Raster: ...

def shape_complexity_index_raster(self, raster: Raster) -> Raster: ...

def shape_complexity_index_vector(self, input: Vector) -> Vector: ...

# Requires WbW-Pro license
def shape_index(self, dem: Raster, z_factor: float = 1.0) -> Raster: ...

def shreve_stream_magnitude(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

# Requires WbW-Pro license
def sieve(self, input_raster: Raster, threshold: int = 1, zero_background: bool = False) -> Raster: ...

def sigmoidal_contrast_stretch(self, raster: Raster, cutoff: float = 0.0, gain: float = 1.0, num_tones: int = 256) -> Raster: ...

def singlepart_to_multipart(self, input: Vector, field_name: str) -> Vector: ...

def sink(self, dem: Raster, zero_background: bool = False) -> Raster: ...

# Requires WbW-Pro license
def skyline_analysis(self, dem: Raster, points: Vector, output_html_file: str, max_dist: float = float('inf'), observer_hgt_offset: float = 0.0, output_as_polygons: bool = True, az_fraction: float = 1.0) -> Vector: ...

# Requires WbW-Pro license
def sky_view_factor(self, dem: Raster, az_fraction: float = 5.0, max_dist: float = float('inf'), observer_hgt_offset: float = 0.05) -> Raster: ...

def slope(self, dem: Raster, units: str = "degrees", z_factor: float = 1.0) -> Raster: ...

# Requires WbW-Pro license
def slope_vs_aspect_plot(self, dem: Raster, output_html_file: str, aspect_bin_size: float = 2.0, min_slope: float = 0.1, z_factor: float = 1.0) -> None: ...

def slope_vs_elev_plot(self, dem_rasters: List[Raster], output_html_file: str, watershed_rasters: List[Raster]) -> None: ...

def smooth_vectors(self, input: Vector, filter_size: int = 3) -> Vector: ...

# Requires WbW-Pro license
def smooth_vegetation_residual(self, dem: Raster, max_scale: int = 1, dev_threshold: float = 1.0, scale_threshold: int = 5) -> Raster: ...

def snap_pour_points(self, pour_pts: Vector, flow_accum: Raster, snap_dist: float = 0.0) -> Vector: ...

def sobel_filter(self, raster: Raster, variant: str = "3x3", clip_tails: float = 0.0) -> Raster: ...

# Requires WbW-Pro license
def sort_lidar(self, sort_criteria: str, input_lidar: Optional[Lidar]) -> Optional[Lidar]: ...

def spherical_std_dev_of_normals(self, dem: Raster, filter_size: int = 11) -> Raster: ...

def split_colour_composite(self, composite_image: Raster) -> Tuple[Raster, Raster, Raster]: ...

# Requires WbW-Pro license
def split_lidar(self, split_criterion: str, input_lidar: Optional[Lidar], interval: float = 5.0, min_pts: int = 5) -> None: ...

def split_vector_lines(self, input: Vector, segment_length: float) -> Vector: ...

def split_with_lines(self, input: Vector, split_vector: Vector) -> Vector: ...

def standard_deviation_contrast_stretch(self, raster: Raster, clip: float = 2.0, num_tones: int = 256) -> Raster: ...

def standard_deviation_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def standard_deviation_of_slope(self, dem: Raster, filter_size: int = 11, z_factor: float = 1.0) -> Raster: ...

def standard_deviation_overlay(self, input_rasters: List[Raster]) -> Raster: ...

def stochastic_depression_analysis(self, dem: Raster, rmse: float, range: float, iterations: int = 100) -> Raster: ...

def strahler_order_basins(self, d8_pointer: Raster, streams: Raster, esri_pntr: bool = False) -> Raster: ...

def strahler_stream_order(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

def stream_link_class(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

def stream_link_identifier(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

def stream_link_length(self, d8_pointer: Raster, streams_id_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

def stream_link_slope(self, d8_pointer: Raster, streams_id_raster: Raster, dem: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

def stream_slope_continuous(self, d8_pointer: Raster, streams_raster: Raster, dem: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

def subbasins(self, d8_pntr: Raster, streams: Raster, esri_pntr: bool = False) -> Raster: ...

def sum_overlay(self, input_rasters: List[Raster]) -> Raster: ...

def surface_area_ratio(self, dem: Raster) -> Raster: ...

# Requires WbW-Pro license
def svm_classification(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, scaling_method: str = "none", c_value: float = 50.0, kernel_gamma: float = 0.5, tolerance: float = 0.1, test_proportion: float = 0.2, create_output: bool = False) -> Optional[Raster]: ...

# Requires WbW-Pro license
def svm_regression(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, scaling_method: str = "none", c_value: float = 50.0, epsilon_value: float = 10.0, kernel_gamma: float = 0.5, test_proportion: float = 0.2, create_output: bool = False) -> Optional[Raster]: ...

def symmetrical_difference(self, input: Vector, overlay: Vector, snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

def tangential_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def thicken_raster_line(self, raster: Raster) -> Raster: ...

def time_in_daylight(self, dem: Raster, az_fraction: float = 5.0, max_dist: float = float('inf'), latitude: float = 0.0, longitude: float = 0.0, utc_offset_str: str = "UTC+00:00", start_day: int = 1, end_day: int = 365, start_time: str = "sunrise", end_time: str = "sunset") -> Raster: ...

def tin_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Optional[Raster] = None, max_triangle_edge_length: float = float('inf')) -> Raster: ...

def tophat_transform(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, variant: str = "white") -> Raster: ...

# Requires WbW-Pro license
def topological_breach_burn(self, streams: Vector, dem: Raster, snap_distance: float = 0.001) -> Tuple[Raster, Raster, Raster, Raster]: ...

def topological_stream_order(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

def topographic_hachures(self, dem: Raster, contour_interval: float = 10.0, base_contour: float = 0.0, deflection_tolerance: float = 10.0, filter_size: int = 9, separation: float = 2.0, distmin: float = 0.5, distmax: float = 2.0, discretization: float = 0.5, turnmax: float = 45.0, slopemin: float = 0.5, depth: int = 16) -> Vector: ...

# Requires WbW-Pro license
def topo_render(self, dem: Raster, palette: WbPalette = WbPalette.Soft, reverse_palette: bool = False,  azimuth: float = 315.0, altitude: float = 30.0, clipping_polygon: Optional[Vector] = None, background_hgt_offset: float = 10.0, background_clr: Tuple[int, int, int, int] = (255, 255, 255, 255), attenuation_parameter: float = 0.3,  ambient_light: float = 0.2, z_factor: float = 1.0) -> Raster: ...

# Requires WbW-Pro license
def topographic_position_animation(self, dem: Raster, output_html_file: str = "topo_pos.html", palette: WbPalette = WbPalette.Soft,  min_scale: int = 1, num_steps: int = 1, step_nonlinearity: float = 1.0, image_height: int = 600, delay: int = 250, label: str = "", use_dev_max: bool = False) -> None: ...

def total_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def total_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

def trace_downslope_flowpaths(self, seed_points: Vector, d8_pointer: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

def travelling_salesman_problem(self, input: Vector, duration: int = 60) -> Vector: ...

def trend_surface(self, raster: Raster, output_html_file: str, polynomial_order: int = 1) -> Raster: ...

def trend_surface_vector_points(self, input: Vector, cell_size: float, output_html_file: str, field_name: str = "FID", polynomial_order: int = 1) -> Raster: ...

def tributary_identifier(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

def turning_bands_simulation(self, base_raster: Optional[Raster] = None, range: float = 1.0, iterations: int = 1000) -> Raster: ...

def two_sample_ks_test(self, raster1: Raster, raster2: Raster, output_html_file: str, num_samples: int) -> None: ...

def union(self, input: Vector, overlay: Vector, snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

def unnest_basins(self, d8_pointer: Raster, pour_points: Vector, esri_pntr: bool = False) -> List[Raster]: ...

def unsharp_masking(self, raster: Raster, sigma: float = 0.75, amount: float = 100.0, threshold: float = 0.0) -> Raster: ...

# Requires WbW-Pro license
def unsphericity(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def update_nodata_cells(self, input1: Raster, input2: Raster) -> Raster: ...

def upslope_depression_storage(self, dem: Raster) -> Raster: ...

def user_defined_weights_filter(self, raster: Raster, weights: List[List[float]], kernel_center: str = "center", normalize_weights: bool = False) -> Raster: ...

def vector_hex_binning(self, vector_points: Vector, width: float, orientation: str = "horizontal") -> Vector: ...

def vector_lines_to_raster(self, input: Vector, field_name: str = "FID", zero_background: bool = False, cell_size: float = 0.0, base_raster: Optional[Raster] = None) -> Raster: ...

def vector_points_to_raster(self, input: Vector, field_name: str = "FID", assign_op: str = "last", zero_background: bool = False, cell_size: float = 0.0, base_raster: Optional[Raster] = None) -> Raster: ...

def vector_polygons_to_raster(self, input: Vector, field_name: str = "FID", zero_background: bool = False, cell_size: float = 0.0, base_raster: Optional[Raster] = None) -> Raster: ...

# Requires WbW-Pro license
def vertical_excess_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

def vector_stream_network_analysis(self, streams: Vector, dem: Raster, max_ridge_cutting_height: float = 10.0, snap_distance: float = 0.001) -> Tuple[Vector, Vector, Vector, Vector]: ...

def viewshed(self, dem: Raster, station_points: Vector, station_height: float = 2.0) -> Raster: ...

def visibility_index(self, dem: Raster, station_height: float = 2.0, resolution_factor: int = 8) -> Raster: ...

def voronoi_diagram(self, input_points: Vector) -> Vector: ...

def watershed(self, d8_pointer: Raster, pour_points: Vector, esri_pntr: bool = False) -> Raster: ...

def watershed_from_raster_pour_points(self, d8_pointer: Raster, pour_points: Raster, esri_pntr: bool = False) -> Raster: ...

def weighted_overlay(self, factors: List[Raster], weights: List[float], cost: Optional[List[Raster]] = None, constraints: Optional[List[Raster]] = None, scale_max: float = 1.0) -> Raster: ...

def weighted_sum(self, input_rasters: List[Raster], weights: List[float]) -> Raster: ...

def wetness_index(self, specific_catchment_area: Raster, slope: Raster) -> Raster: ...

def wilcoxon_signed_rank_test(self, raster1: Raster, raster2: Raster, output_html_file: str, num_samples: int) -> None: ...

def write_function_memory_insertion(self, image1: Raster, image2: Raster, image3: Raster) -> Raster: ...

# Requires WbW-Pro license
def yield_filter(self, input: Vector, yield_field_name: str, pass_field_name: str,  swath_width: float = 6.096, z_score_threshold: float = 2.5, min_yield: float = 0.0, max_yield: float = float('inf')) -> Vector: ...

# Requires WbW-Pro license
def yield_map(self, input: Vector, pass_field_name: str, swath_width: float = 6.096, max_change_in_heading: float = 25.0) -> Vector: ...

# Requires WbW-Pro license
def yield_normalization(self, input: Vector, yield_field_name: str,  radius: float, standardize: bool = False, min_yield: float = 0.0, max_yield: float = float('inf')) -> Vector: ...

def z_scores(self, raster: Raster) -> Raster: ...

def zonal_statistics(self, data_raster: Raster, feature_definitions_raster: Raster, stat_type: str = "mean", zero_is_background: bool = False) -> Tuple[Raster, str]: ...

class WbPalette

Atlas: int
HighRelief: int
Arid: int
Earthtones: int
Soft: int
Muted: int
LightQuant: int
Turbo: int
Purple: int
Viridis: int
GreenYellow: int
PinkYellowGreen: int
BlueYellowRed: int
Deep: int
Imhof: int
White: int
Grey: int

WbW function documentation

Each of the following functions is a method of the WbEnvironment class. Functions may be called using the convention shown in the following example:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
# Set up the environment, e.g. working directory, verbose mode, num_procs
raster = wbe.read_raster('my_raster.tif') # Read some kind of data
result = wbe.mean_filter(raster) # Call some kind of function
...
  1. adaptive_filter
  2. add_point_coordinates_to_table
  3. aggregate_raster
  4. anova
  5. ascii_to_las
  6. aspect
  7. attribute_correlation
  8. attribute_histogram
  9. attribute_scattergram
  10. available_functions
  11. average_flowpath_slope
  12. average_normal_vector_angular_deviation
  13. average_overlay
  14. average_upslope_flowpath_length
  15. balance_contrast_enhancement
  16. basins
  17. bilateral_filter
  18. block_maximum
  19. block_minimum
  20. bool_and
  21. bool_not
  22. bool_or
  23. bool_xor
  24. boundary_shape_complexity
  25. breach_depressions_least_cost
  26. breach_single_cell_pits
  27. buffer_raster
  28. burn_streams_at_roads
  29. centroid_raster
  30. centroid_vector
  31. change_vector_analysis
  32. check_in_license
  33. circular_variance_of_aspect
  34. classify_buildings_in_lidar
  35. classify_overlap_points
  36. clean_vector
  37. clip
  38. clip_lidar_to_polygon
  39. clip_raster_to_polygon
  40. closing
  41. clump
  42. compactness_ratio
  43. conservative_smoothing_filter
  44. construct_vector_tin
  45. contours_from_points
  46. contours_from_raster
  47. convert_nodata_to_zero
  48. corner_detection
  49. correct_vignetting
  50. cost_allocation
  51. cost_distance
  52. cost_pathway
  53. count_if
  54. create_colour_composite
  55. create_plane
  56. crispness_index
  57. cross_tabulation
  58. csv_points_to_vector
  59. cumulative_distribution
  60. d8_flow_accum
  61. d8_mass_flux
  62. d8_pointer
  63. depth_in_sink
  64. deviation_from_mean_elevation
  65. deviation_from_regional_direction
  66. diff_of_gaussians_filter
  67. difference
  68. difference_from_mean_elevation
  69. dinf_flow_accum
  70. dinf_mass_flux
  71. dinf_pointer
  72. direct_decorrelation_stretch
  73. directional_relief
  74. dissolve
  75. distance_to_outlet
  76. diversity_filter
  77. downslope_distance_to_stream
  78. downslope_flowpath_length
  79. downslope_index
  80. edge_contamination
  81. edge_density
  82. edge_preserving_mean_filter
  83. edge_proportion
  84. elev_relative_to_min_max
  85. elev_relative_to_watershed_min_max
  86. elevation_above_pit
  87. elevation_above_stream
  88. elevation_above_stream_euclidean
  89. elevation_percentile
  90. eliminate_coincident_points
  91. elongation_ratio
  92. embankment_mapping
  93. emboss_filter
  94. erase
  95. erase_polygon_from_lidar
  96. erase_polygon_from_raster
  97. euclidean_allocation
  98. euclidean_distance
  99. export_table_to_csv
  100. exposure_towards_wind_flux
  101. extend_vector_lines
  102. extract_by_attribute
  103. extract_nodes
  104. extract_raster_values_at_points
  105. extract_streams
  106. extract_valleys
  107. farthest_channel_head
  108. fast_almost_gaussian_filter
  109. fd8_flow_accum
  110. fd8_pointer
  111. feature_preserving_smoothing
  112. fetch_analysis
  113. fill_burn
  114. fill_depressions
  115. fill_depressions_planchon_and_darboux
  116. fill_depressions_wang_and_liu
  117. fill_missing_data
  118. fill_pits
  119. filter_lidar_classes
  120. filter_lidar_scan_angles
  121. filter_raster_features_by_area
  122. find_flightline_edge_points
  123. find_lowest_or_highest_points
  124. find_main_stem
  125. find_noflow_cells
  126. find_parallel_flow
  127. find_patch_edge_cells
  128. find_ridges
  129. flatten_lakes
  130. flightline_overlap
  131. flip_image
  132. flood_order
  133. flow_accum_full_workflow
  134. flow_length_diff
  135. gamma_correction
  136. gaussian_contrast_stretch
  137. gaussian_curvature
  138. gaussian_filter
  139. geomorphons
  140. hack_stream_order
  141. heat_map
  142. height_above_ground
  143. hexagonal_grid_from_raster_base
  144. hexagonal_grid_from_vector_base
  145. high_pass_filter
  146. high_pass_median_filter
  147. highest_position
  148. hillshade
  149. hillslopes
  150. histogram_equalization
  151. histogram_matching
  152. histogram_matching_two_images
  153. hole_proportion
  154. horizon_angle
  155. horton_ratios
  156. horton_stream_order
  157. hypsometric_analysis
  158. hypsometrically_tinted_hillshade
  159. idw_interpolation
  160. ihs_to_rgb
  161. image_autocorrelation
  162. image_correlation
  163. image_correlation_neighbourhood_analysis
  164. image_regression
  165. image_stack_profile
  166. impoundment_size_index
  167. individual_tree_detection
  168. insert_dams
  169. integral_image_transform
  170. intersect
  171. isobasins
  172. jenson_snap_pour_points
  173. join_tables
  174. k_means_clustering
  175. k_nearest_mean_filter
  176. kappa_index
  177. ks_normality_test
  178. laplacian_filter
  179. laplacian_of_gaussians_filter
  180. las_to_ascii
  181. las_to_shapefile
  182. layer_footprint_raster
  183. layer_footprint_vector
  184. lee_filter
  185. length_of_upstream_channels
  186. license_type
  187. lidar_block_maximum
  188. lidar_block_minimum
  189. lidar_classify_subset
  190. lidar_colourize
  191. lidar_construct_vector_tin
  192. lidar_digital_surface_model
  193. lidar_elevation_slice
  194. lidar_ground_point_filter
  195. lidar_hex_bin
  196. lidar_hillshade
  197. lidar_histogram
  198. lidar_idw_interpolation
  199. lidar_info
  200. lidar_join
  201. lidar_kappa
  202. lidar_nearest_neighbour_gridding
  203. lidar_point_density
  204. lidar_point_stats
  205. lidar_radial_basis_function_interpolation
  206. lidar_ransac_planes
  207. lidar_remove_outliers
  208. lidar_rooftop_analysis
  209. lidar_segmentation
  210. lidar_segmentation_based_filter
  211. lidar_shift
  212. lidar_thin
  213. lidar_thin_high_density
  214. lidar_tile
  215. lidar_tile_footprint
  216. lidar_tin_gridding
  217. lidar_tophat_transform
  218. line_detection_filter
  219. line_intersections
  220. line_thinning
  221. linearity_index
  222. lines_to_polygons
  223. list_unique_values
  224. list_unique_values_raster
  225. long_profile
  226. long_profile_from_points
  227. longest_flowpath
  228. lowest_position
  229. majority_filter
  230. map_off_terrain_objects
  231. max_absolute_overlay
  232. max_anisotropy_dev
  233. max_anisotropy_dev_signature
  234. max_branch_length
  235. max_difference_from_mean
  236. max_downslope_elev_change
  237. max_elevation_dev_signature
  238. max_elevation_deviation
  239. max_overlay
  240. max_procs
  241. max_upslope_elev_change
  242. max_upslope_flowpath_length
  243. max_upslope_value
  244. maximal_curvature
  245. maximum_filter
  246. mdinf_flow_accum
  247. mean_curvature
  248. mean_filter
  249. median_filter
  250. medoid
  251. merge_line_segments
  252. merge_table_with_csv
  253. merge_vectors
  254. min_absolute_overlay
  255. min_downslope_elev_change
  256. min_max_contrast_stretch
  257. min_overlay
  258. minimal_curvature
  259. minimum_bounding_box
  260. minimum_bounding_circle
  261. minimum_bounding_envelope
  262. minimum_convex_hull
  263. minimum_filter
  264. modified_k_means_clustering
  265. modified_shepard_interpolation
  266. modify_nodata_value
  267. mosaic
  268. mosaic_with_feathering
  269. multidirectional_hillshade
  270. multipart_to_singlepart
  271. multiply_overlay
  272. multiscale_elevation_percentile
  273. multiscale_roughness
  274. multiscale_roughness_signature
  275. multiscale_std_dev_normals
  276. multiscale_std_dev_normals_signature
  277. multiscale_topographic_position_image
  278. narrowness_index
  279. natural_neighbour_interpolation
  280. nearest_neighbour_interpolation
  281. new_lidar
  282. new_raster
  283. new_raster_from_base_raster
  284. new_raster_from_base_vector
  285. new_vector
  286. normal_vectors
  287. normalize_lidar
  288. normalized_difference_index
  289. num_downslope_neighbours
  290. num_inflowing_neighbours
  291. olympic_filter
  292. opening
  293. otsu_thresholding
  294. paired_sample_t_test
  295. panchromatic_sharpening
  296. patch_orientation
  297. pennock_landform_classification
  298. percent_elev_range
  299. percent_equal_to
  300. percent_greater_than
  301. percent_less_than
  302. percentage_contrast_stretch
  303. percentile_filter
  304. perimeter_area_ratio
  305. pick_from_list
  306. plan_curvature
  307. polygon_area
  308. polygon_long_axis
  309. polygon_perimeter
  310. polygon_short_axis
  311. polygonize
  312. polygons_to_lines
  313. prewitt_filter
  314. principal_component_analysis
  315. print_geotiff_tags
  316. profile
  317. profile_curvature
  318. qin_flow_accumulation
  319. quantiles
  320. quinn_flow_accumulation
  321. radial_basis_function_interpolation
  322. radius_of_gyration
  323. raise_walls
  324. random_field
  325. random_sample
  326. range_filter
  327. raster_area
  328. raster_calculator
  329. raster_cell_assignment
  330. raster_histogram
  331. raster_perimeter
  332. raster_streams_to_vector
  333. raster_summary_stats
  334. raster_to_vector_lines
  335. raster_to_vector_points
  336. raster_to_vector_polygons
  337. rasterize_streams
  338. read_lidar
  339. read_lidars
  340. read_raster
  341. read_rasters
  342. read_vector
  343. read_vectors
  344. reciprocal
  345. reclass
  346. reclass_equal_interval
  347. rectangular_grid_from_raster_base
  348. rectangular_grid_from_vector_base
  349. reinitialize_attribute_table
  350. related_circumscribing_circle
  351. relative_aspect
  352. relative_stream_power_index
  353. relative_topographic_position
  354. remove_duplicates
  355. remove_off_terrain_objects
  356. remove_polygon_holes
  357. remove_short_streams
  358. remove_spurs
  359. repair_stream_vector_topology
  360. resample
  361. rescale_value_range
  362. rgb_to_ihs
  363. rho8_flow_accum
  364. rho8_pointer
  365. roberts_cross_filter
  366. root_mean_square_error
  367. ruggedness_index
  368. scharr_filter
  369. sediment_transport_index
  370. select_tiles_by_polygon
  371. set_nodata_value
  372. shape_complexity_index_raster
  373. shape_complexity_index_vector
  374. shreve_stream_magnitude
  375. sigmoidal_contrast_stretch
  376. singlepart_to_multipart
  377. sink
  378. slope
  379. slope_vs_elev_plot
  380. smooth_vectors
  381. snap_pour_points
  382. sobel_filter
  383. spherical_std_dev_of_normals
  384. split_colour_composite
  385. split_vector_lines
  386. split_with_lines
  387. standard_deviation_contrast_stretch
  388. standard_deviation_filter
  389. standard_deviation_of_slope
  390. standard_deviation_overlay
  391. stochastic_depression_analysis
  392. strahler_order_basins
  393. strahler_stream_order
  394. stream_link_class
  395. stream_link_identifier
  396. stream_link_length
  397. stream_link_slope
  398. stream_slope_continuous
  399. subbasins
  400. sum_overlay
  401. surface_area_ratio
  402. symmetrical_difference
  403. tangential_curvature
  404. thicken_raster_line
  405. time_in_daylight
  406. tin_interpolation
  407. tophat_transform
  408. topographic_hachures
  409. topological_stream_order
  410. total_curvature
  411. total_filter
  412. trace_downslope_flowpaths
  413. travelling_salesman_problem
  414. trend_surface
  415. trend_surface_vector_points
  416. tributary_identifier
  417. turning_bands_simulation
  418. two_sample_ks_test
  419. union
  420. unnest_basins
  421. unsharp_masking
  422. update_nodata_cells
  423. upslope_depression_storage
  424. user_defined_weights_filter
  425. vector_hex_binning
  426. vector_lines_to_raster
  427. vector_points_to_raster
  428. vector_polygons_to_raster
  429. vector_stream_network_analysis
  430. verbose
  431. version
  432. viewshed
  433. visibility_index
  434. voronoi_diagram
  435. watershed
  436. watershed_from_raster_pour_points
  437. weighted_overlay
  438. weighted_sum
  439. wetness_index
  440. wilcoxon_signed_rank_test
  441. working_directory
  442. write_function_memory_insertion
  443. write_lidar
  444. write_raster
  445. write_text
  446. write_vector
  447. z_scores
  448. zonal_statistics

adaptive_filter

This tool performs a type of adaptive filter on a raster image. An adaptive filter can be used to reduce the level of random noise (shot noise) in an image. The algorithm operates by calculating the average value in a moving window centred on each grid cell. If the absolute difference between the window mean value and the centre grid cell value is beyond a user-defined threshold (threshold), the grid cell in the output image is assigned the mean value, otherwise it is equivalent to the original value. Therefore, the algorithm only modifies the image where grid cell values are substantially different than their neighbouring values.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

mean_filter

Function Signature

def adaptive_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, threshold: float = 2.0) -> Raster: ...
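
For example, the following minimal sketch (file names are placeholders, and write_raster is assumed to accept the raster and an output file name) smooths a noisy image while leaving near-average cells unchanged:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
image = wbe.read_raster('noisy_image.tif') # placeholder input file
smoothed = wbe.adaptive_filter(image, filter_size_x=5, filter_size_y=5, threshold=2.0)
wbe.write_raster(smoothed, 'adaptive_smoothed.tif') # placeholder output file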

add_point_coordinates_to_table

Description

This tool modifies the attribute table of a vector of POINT VectorGeometryType by adding two fields, XCOORD and YCOORD, containing each point's X and Y coordinates respectively.

Parameters

input (Vector): The input Vector object

Returns

Vector: the returning value

Function Signature

def add_point_coordinates_to_table(self, input: Vector) -> Vector: ...
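
A brief usage sketch, assuming placeholder file names and that write_vector accepts a Vector object and an output file name:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
points = wbe.read_vector('sample_points.shp') # placeholder input file
points_with_xy = wbe.add_point_coordinates_to_table(points) # adds XCOORD and YCOORD fields
wbe.write_vector(points_with_xy, 'sample_points_xy.shp')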

aggregate_raster

This tool can be used to reduce the grid resolution of a raster by a user-specified amount. For example, using an aggregation factor (aggregation_factor) of 2 would result in a raster with half the number of rows and columns. The grid cell values (aggregation_type) in the output image will consist of the mean, sum, maximum, minimum, or range of the overlapping grid cells in the input raster (four cells in the case of an aggregation factor of 2).

See Also

resample

Function Signature

def aggregate_raster(self, raster: Raster, aggregation_factor: int = 2, aggregation_type: str = "mean") -> Raster: ...
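
For example, the following sketch (with a placeholder DEM file name) halves the grid resolution, assigning each output cell the mean of the overlapping input cells:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
dem = wbe.read_raster('dem.tif') # placeholder input file
coarse_dem = wbe.aggregate_raster(dem, aggregation_factor=2, aggregation_type="mean")
wbe.write_raster(coarse_dem, 'dem_coarse.tif')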

anova

This tool performs an analysis of variance (ANOVA) test on the distribution of values in a raster (input_raster) among a group of features (features_raster). The ANOVA report is written to an output HTML report (output_html_file).

Function Signature

def anova(self, input_raster: Raster, features_raster: Raster, output_html_file: str) -> None: ...

ascii_to_las

This tool can be used to convert one or more ASCII files, containing LiDAR point data, into LAS files. The user must specify the name(s) of the input ASCII file(s) (input_ascii_files). Each input file will have a correspondingly named output file with a .las file extension. The input point data, each on a separate line, must take the format:

x,y,z,intensity,class,return,num_returns

Value   Interpretation
x       x-coordinate
y       y-coordinate
z       elevation
i       intensity value
c       classification
rn      return number
nr      number of returns
time    GPS time
sa      scan angle
r       red
b       blue
g       green

The x, y, and z patterns must always be specified. If the rn pattern is used, the nr pattern must also be specified. Examples of valid pattern strings include:

'x,y,z,i'
'x,y,z,i,rn,nr'
'x,y,z,i,c,rn,nr,sa'
'z,x,y,rn,nr'
'x,y,z,i,rn,nr,r,g,b'

Use the las_to_ascii tool to convert a LAS file into a text file containing LiDAR point data.

See Also

las_to_ascii

Function Signature

def ascii_to_las(self, input_ascii_files: List[str], pattern: str, epsg_code: int) -> None: ...
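
A minimal sketch; the file names and the EPSG code shown are placeholders only:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
# Convert two ASCII point files to LAS, where each line holds x, y, z, and intensity.
wbe.ascii_to_las(['tile_01.txt', 'tile_02.txt'], pattern='x,y,z,i', epsg_code=26917)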

aspect

This tool calculates slope aspect (i.e. slope orientation in degrees clockwise from north) for each grid cell in an input digital elevation model (DEM). The user must specify an input DEM (dem). The Z conversion factor (z_factor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. If the DEM is in the geographic coordinate system (latitude and longitude), the following equation is used:

z_factor = 1.0 / (111320.0 × cos(mid_lat))

where mid_lat is the latitude of the centre of each raster row, in radians.

The tool uses Horn's (1981) 3rd-order finite difference method to estimate the surface gradient, from which aspect is computed. Given the following clock-type grid cell numbering scheme (Gallant and Wilson, 2000),

| 7 | 8 | 1 |
| 6 | 9 | 2 |
| 5 | 4 | 3 |

aspect = 180 - arctan(fy / fx) + 90(fx / |fx|)

where,

fx = (z3 - z5 + 2(z2 - z6) + z1 - z7) / (8Δx)

and,

fy = (z7 - z5 + 2(z8 - z4) + z1 - z3) / (8Δy)

Δx and Δy are the grid resolutions in the x and y direction respectively

Reference

Gallant, J. C., and J. P. Wilson, 2000, Primary topographic attributes, in Terrain Analysis: Principles and Applications, edited by J. P. Wilson and J. C. Gallant pp. 51-86, John Wiley, Hoboken, N.J.

See Also

slope, plan_curvature, profile_curvature

Function Signature

def aspect(self, dem: Raster, z_factor: float = 1.0) -> Raster: ...
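
For example (placeholder file names; z_factor is left at its default of 1.0, appropriate for a projected DEM with matching horizontal and vertical units):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
dem = wbe.read_raster('dem.tif')
aspect_raster = wbe.aspect(dem)
wbe.write_raster(aspect_raster, 'aspect.tif')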

attribute_correlation

This tool can be used to estimate the Pearson product-moment correlation coefficient (r) for each pair among a group of attributes associated with the database file of a shapefile. The r-value is a measure of the linear association in the variation of the attributes. The coefficient ranges from -1, indicating a perfect negative linear association, to 1, indicating a perfect positive linear association. An r-value of 0 indicates no correlation between the test variables.

Notice that this index is a measure of the linear association; two variables may be strongly related by a non-linear association (e.g. a power function curve) which will lead to an apparent weak association based on the Pearson coefficient. In fact, non-linear associations are very common among spatial variables, e.g. terrain indices such as slope and contributing area. In such cases, it is advisable that the input images are transformed prior to the estimation of the Pearson coefficient, or that an alternative, non-parametric statistic be used, e.g. the Spearman rank correlation coefficient.

The user must specify the name of the input vector Shapefile (input). Correlations will be calculated for each pair of numerical attributes contained within the input file's attribute table and presented in a correlation matrix HTML output (output_html_file).

See Also

image_correlation, attribute_scattergram, attribute_histogram

Function Signature

def attribute_correlation(self, input: Vector, output_html_file: str) -> None: ...

attribute_histogram

This tool can be used to create a histogram, which is a graph displaying the frequency distribution of data, for the values contained in a field of an input vector's attribute table. The user must specify the name of an input vector (input) and the name of one of the fields (field_name) contained in the associated attribute table. The tool output (output_html_file) is an HTML formatted histogram analysis report. If the specified field is non-numerical, the tool will produce a bar-chart of class frequency, similar to the tabular output of the list_unique_values tool.

See Also

list_unique_values, raster_histogram

Function Signature

def attribute_histogram(self, input: Vector, field_name: str, output_html_file: str) -> None: ...
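
A short sketch, where the file name and the 'AREA' field are hypothetical examples:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
parcels = wbe.read_vector('parcels.shp') # placeholder input file
# 'AREA' stands in for any numerical field in the attribute table.
wbe.attribute_histogram(parcels, field_name='AREA', output_html_file='area_histogram.html')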

attribute_scattergram

This tool can be used to create a scattergram for two numerical fields (field_name_x and field_name_y) contained within an input vector's attribute table (input). The user must specify the name of an input shapefile and the names of two of the fields contained in the associated attribute table. The tool output (output_html_file) is an HTML formatted report containing a graphical scattergram plot.

See Also

attribute_histogram, attribute_correlation

Function Signature

def attribute_scattergram(self, input: Vector, field_name_x: str, field_name_y: str, output_html_file: str, add_trendline: bool = False) -> None: ...

available_functions

This function will list all of the available functions associated with a WbEnvironment (wbe). The functions that are accessible will depend on the license level (WbW or WbW-Pro).

average_flowpath_slope

This tool calculates the average slope gradient (i.e. slope steepness in degrees) of the flowpaths that pass through each grid cell in an input digital elevation model (DEM). The user must specify the name of a DEM raster (dem). It is important that this DEM is pre-processed to remove all topographic depressions and flat areas using a tool such as breach_depressions_least_cost. Several intermediate rasters are created and stored in memory during the operation of this tool, which may limit the size of DEM that can be processed, depending on available system resources.

See Also

average_upslope_flowpath_length, breach_depressions_least_cost

Function Signature

def average_flowpath_slope(self, dem: Raster) -> Raster: ...

average_normal_vector_angular_deviation

This tool characterizes the spatial distribution of the average normal vector angular deviation, a measure of surface roughness. Working in the field of 3D printing, Ko et al. (2016) defined a measure of surface roughness based on quantifying the angular deviations in the direction of the normal vector of a real surface from its ideal (i.e. smoothed) form. This measure of surface complexity is therefore in units of degrees. Specifically, roughness is defined in this study as the neighborhood-averaged difference in the normal vectors of the original DEM and a smoothed DEM surface. Smoothed surfaces are derived by applying a Gaussian blur of the same size as the neighborhood (filter).

The multiscale_roughness tool calculates the same measure of surface roughness, except that it is designed to work with multiple spatial scales.

Reference

Ko, M., Kang, H., ulrim Kim, J., Lee, Y., & Hwang, J. E. (2016, July). How to measure quality of affordable 3D printing: Cultivating quantitative index in the user community. In International Conference on Human-Computer Interaction (pp. 116-121). Springer, Cham.

Lindsay, J. B., & Newman, D. R. (2018). Hyper-scale analysis of surface roughness. PeerJ Preprints, 6, e27110v1.

See Also

multiscale_roughness, spherical_std_dev_of_normals, circular_variance_of_aspect

Function Signature

def average_normal_vector_angular_deviation(self, dem: Raster, filter_size: int = 11) -> Raster: ...

average_overlay

This tool can be used to find the average value in each cell of a grid from a set of input images (inputs). It is therefore similar to the weighted_sum tool except that each input image is given equal weighting. This tool operates on a cell-by-cell basis. Therefore, each of the input rasters must share the same number of rows and columns and spatial extent. An error will be issued if this is not the case. At least two input rasters are required to run this tool. Like each of the WhiteboxTools overlay tools, this tool has been optimized for parallel processing.

See Also

weighted_sum

Function Signature

def average_overlay(self, input_rasters: List[Raster]) -> Raster: ...
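
For example, a minimal sketch with placeholder file names for three co-registered rasters:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
r1 = wbe.read_raster('band1.tif')
r2 = wbe.read_raster('band2.tif')
r3 = wbe.read_raster('band3.tif')
avg = wbe.average_overlay([r1, r2, r3]) # rasters must share extent and resolution
wbe.write_raster(avg, 'average.tif')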

average_upslope_flowpath_length

This tool calculates the average length of the flowpaths that pass through each grid cell, measured in map horizontal units, in an input digital elevation model (DEM). The user must specify the name of a DEM raster (dem). It is important that this DEM is pre-processed to remove all topographic depressions and flat areas using a tool such as breach_depressions_least_cost. Several intermediate rasters are created and stored in memory during the operation of this tool, which may limit the size of DEM that can be processed, depending on available system resources.

See Also

average_flowpath_slope, max_upslope_flowpath_length, breach_depressions_least_cost

Function Signature

def average_upslope_flowpath_length(self, dem: Raster) -> Raster: ...

balance_contrast_enhancement

This tool can be used to reduce colour bias in a colour composite image based on the technique described by Liu (1991). Colour bias is a common phenomenon with colour images derived from multispectral imagery, whereby a higher average brightness value in one band results in over-representation of that band in the colour composite. The tool essentially applies a parabolic stretch to each of the three bands in a user-specified RGB colour composite, forcing the histograms of each band to have the same minimum, maximum, and average values while maintaining their overall histogram shape. For greater detail on the operation of the tool, please see Liu (1991). Aside from the input colour composite image (image), the user must also set the value of E, the desired output band mean (band_mean), where 20 < E < 235.

Reference

Liu, J.G. (1991) Balance contrast enhancement technique and its application in image colour composition. International Journal of Remote Sensing, 12:10.

See Also

direct_decorrelation_stretch, histogram_matching, histogram_matching_two_images, histogram_equalization, gaussian_contrast_stretch

Function Signature

def balance_contrast_enhancement(self, image: Raster, band_mean: float = 100.0) -> Raster: ...

basins

This tool can be used to delineate all of the drainage basins contained within a local drainage direction, or flow pointer raster (d8_pntr), and draining to the edge of the data. The flow pointer raster must be derived using the d8_pointer tool and should have been extracted from a digital elevation model (DEM) that has been hydrologically pre-processed to remove topographic depressions and flat areas, e.g. using the breach_depressions_least_cost tool. By default, the flow pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools:

| 64 | 128 | 1 |
| 32 | 0 | 2 |
| 16 | 8 | 4 |

If the pointer file contains ESRI flow direction values instead, the esri_pntr parameter must be specified.

The basins and watershed tools are similar in function but while the watershed tool identifies the upslope areas that drain to one or more user-specified outlet points, the basins tool automatically sets outlets to all grid cells situated along the edge of the data that do not have a defined flow direction (i.e. they do not have a lower neighbour). Notice that these edge outlets need not be situated along the edges of the flow-pointer raster, but rather along the edges of the region of valid data. That is, the DEM from which the flow-pointer has been extracted may incompletely fill the containing raster, if it is irregularly shaped, and NoData regions may occupy the periphery. Thus, the entire region of valid data in the flow pointer raster will be divided into a set of mutually exclusive basins using this tool.

See Also

watershed, d8_pointer, breach_depressions_least_cost

Function Signature

def basins(self, d8_pntr: Raster, esri_pntr: bool = False) -> Raster: ...
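
A typical workflow sketch, with placeholder file names; it assumes d8_pointer accepts the hydrologically conditioned DEM as its only required argument:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
dem = wbe.read_raster('dem.tif')
dem_breached = wbe.breach_depressions_least_cost(dem, max_dist=100) # pre-processing
pntr = wbe.d8_pointer(dem_breached) # assumed signature: d8_pointer(dem)
basins_raster = wbe.basins(pntr)
wbe.write_raster(basins_raster, 'basins.tif')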

bilateral_filter

This tool can be used to perform an edge-preserving smoothing filter, or bilateral filter, on an image. A bilateral filter can be used to emphasize the longer-range variability in an image, effectively acting to smooth the image, while reducing the edge blurring effect common with other types of smoothing filters. As such, this filter is very useful for reducing the noise in an image. Bilateral filtering is a non-linear filtering technique introduced by Tomasi and Manduchi (1998). The algorithm operates by convolving a kernel of weights with each grid cell and its neighbours in an image. The bilateral filter is related to Gaussian smoothing, in that the weights of the convolution kernel are partly determined by the 2-dimensional Gaussian (i.e. normal) curve, which gives stronger weighting to cells nearer the kernel centre. Unlike the gaussian_filter, however, the bilateral kernel weightings are also affected by their similarity to the intensity value of the central pixel. Pixels that are very different in intensity from the central pixel are weighted less, also based on a Gaussian weight distribution. Therefore, this non-linear convolution filter is determined by the spatial and intensity domains of a localized pixel neighborhood.

The heavier weighting given to nearer and similar-valued pixels makes the bilateral filter an attractive alternative for image smoothing and noise reduction compared to the much-used Mean filter. The size of the filter is determined by setting the standard deviation distance parameter (sigma_dist); the larger the standard deviation the larger the resulting filter kernel. The standard deviation can be any number in the range 0.5-20 and is specified in the unit of pixels. The standard deviation intensity parameter (sigma_int), specified in the same units as the z-values, determines the intensity domain contribution to kernel weightings.

References

Tomasi, C., & Manduchi, R. (1998, January). Bilateral filtering for gray and color images. In null (p. 839). IEEE.

See Also

edge_preserving_mean_filter

Function Signature

def bilateral_filter(self, raster: Raster, sigma_dist: float = 0.75, sigma_int: float = 1.0) -> Raster: ...
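
For example (placeholder file names; the sigma values shown are illustrative only):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
image = wbe.read_raster('image.tif')
smoothed = wbe.bilateral_filter(image, sigma_dist=1.5, sigma_int=25.0) # sigma_int in z-value units
wbe.write_raster(smoothed, 'image_denoised.tif')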

block_maximum

Creates a raster grid based on a set of vector points and assigns grid values using a block maximum scheme.

Function Signature

def block_maximum(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Raster = None) -> Raster: ...

block_minimum

Creates a raster grid based on a set of vector points and assigns grid values using a block minimum scheme.

Function Signature

def block_minimum(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Raster = None) -> Raster: ...

bool_and

This tool is a Boolean AND operator, i.e. it works on True or False (1 and 0) values. Grid cells for which the first and second input rasters (input1; input2) have True values are assigned 1 in the output raster, otherwise grid cells are assigned a value of 0. All non-zero values in the input rasters are considered to be True, while all zero-valued grid cells are considered to be False. Grid cells containing NoData values in either of the input rasters will be assigned a NoData value in the output raster.

See Also

bool_not, bool_or, bool_xor

Function Signature

def bool_and(self, input1: Raster, input2: Raster) -> Raster: ...

bool_not

This tool is a Boolean NOT operator, i.e. it works on True or False (1 and 0) values. Grid cells for which the first input raster (input1) has a True value and the second raster (input2) has a False value are assigned 1 in the output raster, otherwise grid cells are assigned a value of 0. All non-zero values in the input rasters are considered to be True, while all zero-valued grid cells are considered to be False. Grid cells containing NoData values in either of the input rasters will be assigned a NoData value in the output raster. Notice that the NOT operator is asymmetrical, and the order of inputs matters.

See Also

bool_and, bool_or, bool_xor

Function Signature

def bool_not(self, input1: Raster, input2: Raster) -> Raster: ...

bool_or

This tool is a Boolean OR operator, i.e. it works on True or False (1 and 0) values. Grid cells for which the either the first or second input rasters (input1; input2) have a True value are assigned 1 in the output raster, otherwise grid cells are assigned a value of 0. All non-zero values in the input rasters are considered to be True, while all zero-valued grid cells are considered to be False. Grid cells containing NoData values in either of the input rasters will be assigned a NoData value in the output raster.

See Also

bool_and, bool_not, bool_xor

Function Signature

def bool_or(self, input1: Raster, input2: Raster) -> Raster: ...

bool_xor

This tool is a Boolean XOR operator, i.e. it works on True or False (1 and 0) values. Grid cells for which either the first or second input rasters (input1; input2) have a True value but not both are assigned 1 in the output raster, otherwise grid cells are assigned a value of 0. All non-zero values in the input rasters are considered to be True, while all zero-valued grid cells are considered to be False. Grid cells containing NoData values in either of the input rasters will be assigned a NoData value in the output raster.

See Also

bool_and, bool_not, bool_or

Function Signature

def bool_xor(self, input1: Raster, input2: Raster) -> Raster: ...
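
The following sketch combines the Boolean operators on two hypothetical 0/1 rasters (file names are placeholders):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
steep = wbe.read_raster('steep_areas.tif')   # hypothetical 0/1 raster
forest = wbe.read_raster('forest_areas.tif') # hypothetical 0/1 raster
steep_and_forest = wbe.bool_and(steep, forest)
steep_either = wbe.bool_xor(steep, forest)     # steep or forested, but not both
steep_not_forest = wbe.bool_not(steep, forest) # steep and not forested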

boundary_shape_complexity

This tools calculates a type of shape complexity index for raster objects, focused on the complexity of the boundary of polygons. The index uses the line_thinning tool to estimate a skeletonized network for each input raster polygon. The Boundary Shape Complexity (BSC) index is then calculated as the percentage of the skeletonized network belonging to exterior links. Polygons with more complex boundaries will possess more branching skeletonized networks, with each spur in the boundary possessing a short exterior branch. The two longest exterior links in the network are considered to be part of the main network. Therefore, polygons of complex shaped boundaries will have a higher percentage of their skeleton networks consisting of exterior links. It is expected that simple convex hulls should have relatively low BSC index values.

Objects in the input raster (input) are designated by their unique identifiers. Identifier values should be positive, non-zero whole numbers.

See Also

shape_complexity_index_raster, line_thinning

Function Signature

def boundary_shape_complexity(self, raster: Raster) -> Raster: ...

breach_depressions_least_cost

This tool can be used to perform a type of optimal depression breaching to prepare a digital elevation model (DEM) for hydrological analysis. Depression breaching is a common alternative to depression filling (fill_depressions) and often offers a lower-impact solution to the removal of topographic depressions. This tool implements a method that is loosely based on the algorithm described by Lindsay and Dhun (2015), furthering the earlier algorithm with efficiency optimizations and other significant enhancements. The approach uses a least-cost path analysis to identify the breach channel that connects pit cells (i.e. grid cells for which there is no lower neighbour) to some distant lower cell. Prior to breaching and in order to minimize the depth of breach channels, all pit cells are raised to the elevation of the lowest neighbour minus a small height value. Here, the cost of a breach path is determined by the amount of elevation lowering needed to cut the breach channel through the surrounding topography.

The user must specify the input DEM (dem), the maximum search window radius (max_dist), the optional maximum breach cost (max_cost), and an optional flat height increment value (flat_increment); the breached DEM is returned as a Raster. Notice that if the flat_increment parameter is not specified, the small number used to ensure flow across flats will be calculated automatically, which should be preferred in most applications of the tool. The tool operates by performing a least-cost path analysis for each pit cell, radiating outward until the operation identifies a potential breach destination cell or reaches the maximum breach length parameter. If a value is specified for the optional max_cost parameter, then least-cost breach paths that would require digging a channel that is more costly than this value will be left unbreached. The flat increment value is used to ensure that there is a monotonically descending path along breach channels to satisfy the necessary condition of a downslope gradient for flowpath modelling. This should be a very small value. If left unspecified, the tool will determine an appropriate value based on the range of elevation values in the input DEM, which should be the case in most applications. Notice that the need to add these very small elevation increments is one of the reasons why the output DEM will always be of a 64-bit floating-point data type, which will often double the storage requirements of a DEM (DEMs are often stored with 32-bit precision). Lastly, the user may optionally choose to apply depression filling (fill_deps) on any depressions that remain unresolved by the earlier depression breaching operation. This filling step uses an efficient filling method based on flooding depressions from their pit cells until outlets are identified and then raising the elevations of flooded cells back and away from the outlets.

The tool can be run in two modes, based on whether the minimize_dist flag is specified. If the minimize_dist flag is specified, the accumulated cost (accum2) of breaching from cell1 to cell2 along a channel issuing from pit is calculated using the traditional cost-distance function:

cost1 = z1 - (zpit + l × s)

cost2 = z2 - [zpit + (l + 1)s]

accum2 = accum1 + g(cost1 + cost2) / 2.0

where cost1 and cost2 are the costs associated with moving through cell1 and cell2 respectively, z1 and z2 are the elevations of the two cells, zpit is the elevation of the pit cell, l is the length of the breach channel to cell1, g is the grid cell distance between cells (accounting for diagonal distances), and s is the small number used to ensure flow across flats. If the minimize_dist flag is not specified, the accumulated cost is calculated as:

accum2 = accum1 + cost2

That is, without the minimize_dist flag, the tool works to minimize elevation changes to the DEM caused by breaching, without considering the distance of breach channels. Notice that the value max_cost, if specified, should account for this difference in the way cost/cost-distances are calculated. The first cell in the least-cost accumulation operation that is identified for which cost2 <= 0.0 is the target cell to which the breach channel will connect the pit along the least-cost path.

In comparison with full depression filling (fill_depressions), this least-cost breaching method often provides a more satisfactory, lower-impact solution and is often more efficient. It is therefore advisable that users try this tool to remove depressions from their DEMs first. The tool is particularly well suited to breaching through road embankments. There are instances when a breaching solution is inappropriate, e.g. when a very deep depression such as an open-pit mine occurs in the DEM and long, deep breach paths are created. Often restricting breaching with the max_cost parameter, combined with subsequent depression filling (fill_deps), can provide an adequate solution in these cases. Nonetheless, there are applications for which full depression filling using the fill_depressions tool may be preferred.

Reference

Lindsay J, Dhun K. 2015. Modelling surface drainage patterns in altered landscapes using LiDAR. International Journal of Geographical Information Science, 29: 1-15. DOI: 10.1080/13658816.2014.975715

See Also

breach_depressions_least_cost, fill_depressions, cost_pathway

Function Signature

def breach_depressions_least_cost(self, dem: Raster, max_cost: float = float('inf'), max_dist: int = 100, flat_increment: float = float('nan'), fill_deps: bool = False, minimize_dist: bool = False) -> Raster: ...
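
A minimal pre-processing sketch (placeholder file names; the parameter values shown are illustrative only):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
dem = wbe.read_raster('dem.tif')
# Constrain breach channels to a 50-cell search radius and fill anything left unresolved.
dem_hydro = wbe.breach_depressions_least_cost(dem, max_dist=50, fill_deps=True, minimize_dist=True)
wbe.write_raster(dem_hydro, 'dem_breached.tif')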

breach_single_cell_pits

This tool can be used to remove single-cell pits from an input digital elevation model (DEM) (dem). Pits are single grid cells with no downslope neighbours; they are problematic because they impede overland flow paths. The tool removes any pit that can be resolved by lowering one of its eight neighbouring cells such that a flow path can be created linking the pit to a second-order neighbour (i.e. a neighbour of a neighbouring cell). This is a useful, low-impact pre-processing step before applying a more comprehensive depression breaching or filling tool, which is designed to remove larger depression features.

See Also

breach_depressions_least_cost, fill_depressions

Function Signature

def breach_single_cell_pits(self, dem: Raster) -> Raster: ...

buffer_raster

This tool can be used to identify an area of interest within a specified distance of features of interest in a raster data set.

The Euclidean distance (i.e. straight-line distance) is calculated between each grid cell and the nearest 'target cell' in the input image. Distance is calculated using the efficient method of Shih and Wu (2004). Target cells are all non-zero, non-NoData grid cells. Because NoData values in the input image are assigned the NoData value in the output image, the only valid background value in the input image is zero.

The user must specify the input raster, the desired buffer size (buffer_size), and, optionally, whether the distance units are measured in grid cells (the grid_cells_units parameter). If grid_cells_units is not specified, the linear units of the raster's coordinate reference system will be used.

Reference

Shih FY and Wu Y-T (2004), Fast Euclidean distance transformation in two scans using a 3 x 3 neighborhood, Computer Vision and Image Understanding, 93: 195-205.

See Also

euclidean_distance

Function Signature

def buffer_raster(self, input: Raster, buffer_size: float, grid_cells_units: bool = False) -> Raster: ...
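
As a brief sketch (assuming the WbEnvironment object wbe from the earlier example and placeholder file names), a 500 m buffer and a 10-cell buffer could be generated as follows:

streams = wbe.read_raster('streams.tif')
buffer_map_units = wbe.buffer_raster(streams, 500.0)  # buffer size in map units
buffer_cells = wbe.buffer_raster(streams, 10.0, grid_cells_units=True)  # buffer size in grid cells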

burn_streams_at_roads

This tool decrements (lowers) the elevations of pixels within an input digital elevation model (DEM) (dem) along an input vector stream network (streams) at the sites of road (roads) intersections. In addition to the input data layers, the user must specify the maximum road embankment width (road_width), in map units. The road width parameter is used to determine the length of channel along stream lines, at the junctions between streams and roads, over which the burning (i.e. decrementing) operation occurs. The algorithm works by identifying stream-road intersection cells, then traversing along the rasterized stream path in the upstream and downstream directions by half the maximum road embankment width. The minimum elevation in each stream traversal is identified and then elevations that are higher than this value are lowered to the minimum elevation during a second stream traversal.

Reference

Lindsay JB. 2016. The practice of DEM stream burning revisited. Earth Surface Processes and Landforms, 41(5): 658–668. DOI: 10.1002/esp.3888

See Also

raster_streams_to_vector, rasterize_streams

Function Signature

def burn_streams_at_roads(self, dem: Raster, streams: Vector, roads: Vector, road_width: float) -> Raster: ...
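
A minimal sketch of a call, again assuming a WbEnvironment object wbe and placeholder file names (the 20 m embankment width is illustrative only):

dem = wbe.read_raster('DEM.tif')
streams = wbe.read_vector('streams.shp')
roads = wbe.read_vector('roads.shp')
dem_burned = wbe.burn_streams_at_roads(dem, streams, roads, road_width=20.0)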

centroid_raster

This tool calculates the centroid, or average location, of raster polygon objects. For vector features, use the centroid_vector tool instead.

See Also

centroid_vector

Function Signature

def centroid_raster(self, input: Raster) -> Tuple[Raster, str]: ...

centroid_vector

This tool can be used to identify the centroid point of a vector polyline or polygon feature, or of a group of vector points. The output is a vector shapefile of points. For multi-part polyline or polygon features, the user can optionally specify whether to identify the centroid of each part. The default is to treat multi-part features as a single entity.

For raster features, use the centroid_raster tool instead.

See Also

centroid_raster, medoid

Function Signature

def centroid_vector(self, input: Vector) -> Vector: ...

change_vector_analysis

Change Vector Analysis (CVA) is a change detection method that characterizes the magnitude and change direction in spectral space between two times. A change vector is the difference vector between two vectors in n-dimensional feature space defined for two observations of the same geographical location (i.e. corresponding pixels) during two dates. The CVA inputs include the set of raster images corresponding to the multispectral data for each date. Note that there must be the same number of image files (bands) for the two dates and they must be entered in the same order, i.e. if three bands, red, green, and blue are entered for date one, these same bands must be entered in the same order for date two.

CVA outputs two image files. The first image contains the change vector length, i.e. magnitude, for each pixel in the multi-spectral dataset. The second image contains information about the direction of the change event in spectral feature space, which is related to the type of change event, e.g. deforestation will likely have a different change direction than say crop growth. The vector magnitude is a continuous numerical variable. The change vector direction is presented in the form of a code, referring to the multi-dimensional sector in which the change vector occurs. A text output will be produced to provide a key describing sector codes, relating the change vector to positive or negative shifts in n-dimensional feature space.

It is common to apply a simple thresholding operation on the magnitude data to determine 'actual' change (i.e. change above some assumed level of error). The type of change (qualitatively) is then defined according to the corresponding sector code. Jensen (2015) provides a useful description of this approach to change detection.

Reference

Jensen, J. R. (2015). Introductory Digital Image Processing: A Remote Sensing Perspective.

See Also

write_function_memory_insertion

Function Signature

def change_vector_analysis(self, date1_rasters: List[Raster], date2_rasters: List[Raster]) -> Tuple[Raster, Raster, str]: ...
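
Because the function returns the magnitude raster, the direction raster, and the sector-code key text as a tuple, a call might be structured as in this sketch (band file names are placeholders and wbe is an assumed WbEnvironment object):

date1 = [wbe.read_raster(f) for f in ['d1_red.tif', 'd1_green.tif', 'd1_blue.tif']]
date2 = [wbe.read_raster(f) for f in ['d2_red.tif', 'd2_green.tif', 'd2_blue.tif']]
magnitude, direction, sector_key = wbe.change_vector_analysis(date1, date2)
print(sector_key)  # text key relating sector codes to change direction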

check_in_license

Check in your floating license.

circular_variance_of_aspect

This tool can be used to calculate the circular variance (i.e. one minus the mean resultant length) of aspect for a digital elevation model (DEM). This is a measure of how variable slope aspect is within a local neighbourhood of a specified size (filter_size). circular_variance_of_aspect is therefore a measure of surface shape complexity, or texture. It will take a value of 0.0 for smooth sites and near 1.0 in areas of high surface roughness or complex topography.

The local neighbourhood size (filter_size) must be an odd integer equal to or greater than three. Grohmann et al. (2010) found that vector dispersion, a related measure of angular variance, increases monotonically with scale. This is the result of the angular dispersion measure integrating (accumulating) all of the surface variance of smaller scales up to the test scale. A more interesting scale relation can therefore be estimated by isolating the amount of surface complexity associated with specific scale ranges. That is, at large spatial scales, the metric should reflect the texture of large-scale landforms rather than the accumulated complexity at all smaller scales, including microtopographic roughness. As such, this tool normalizes the surface complexity of scales that are smaller than the filter size by applying Gaussian blur (with a standard deviation of one-third the filter size) to the DEM prior to calculating circular_variance_of_aspect. In this way, the resulting distribution is able to isolate and highlight the surface shape complexity associated with landscape features of a similar scale to that of the filter size.

This tool makes extensive use of integral images (i.e. summed-area tables) and parallel processing to ensure computational efficiency. It may, however, require substantial memory resources when applied to larger DEMs.

References

Grohmann, C. H., Smith, M. J., & Riccomini, C. (2010). Multiscale analysis of topographic surface roughness in the Midland Valley, Scotland. IEEE Transactions on Geoscience and Remote Sensing, 49(4), 1200-1213.

See Also

aspect, spherical_std_dev_of_normals, multiscale_roughness, edge_density, surface_area_ratio, ruggedness_index

Function Signature

def circular_variance_of_aspect(self, dem: Raster, filter_size: int = 11) -> Raster: ...

classify_buildings_in_lidar

This tool can be used to assign the building class (classification value 6) to all points within an input LiDAR point cloud (input) that are contained within the polygons of an input buildings footprint vector (buildings). The tool performs a simple point-in-polygon operation to determine membership. The two inputs (i.e. the LAS file and vector) must share the same map projection. Furthermore, any error in the definition of the building footprints will result in misclassified points in the output LAS file (output). In particular, if the footprints extend slightly beyond the actual building, ground points situated adjacent to the building will be incorrectly classified. Thus, care must be taken in digitizing building footprint polygons. Furthermore, where there are tall trees that overlap significantly with the building footprint, these vegetation points will also be incorrectly assigned the building class value.

See Also

filter_lidar_classes, lidar_ground_point_filter, clip_lidar_to_polygon

Function Signature

def classify_buildings_in_lidar(self, in_lidar: Lidar, building_footprints: Vector) -> Lidar: ...

classify_overlap_points

This tool can be used to flag points within an input LiDAR file (input) that overlap with other nearby points from different flightlines, i.e. to identify overlap points. The flightline associated with a LiDAR point is assumed to be contained within the point's Point Source ID (PSID) property. If the PSID property is not set, or has been lost, users may wish to apply the recover_flightline_info tool prior to running this tool.

Areas of multiple flightline overlap tend to have point densities that are far greater than areas of single flightlines. This can produce suboptimal results for applications that assume regular point distribution, e.g. in point classification operations.

The tool works by applying a square grid over the extent of the input LiDAR file. The grid cell size is determined by the user-defined resolution parameter. Grid cells containing multiple PSIDs, i.e. with more than one flightline, are then identified. Overlap points within these grid cells can then be flagged on the basis of a user-defined criterion. The flagging options include the following:

max scan angle: All points that share the PSID of the point with the maximum absolute scan angle
not min point source ID: All points with a different PSID to that of the point with the lowest PSID
not min time: All points with a different PSID to that of the point with the minimum GPS time
multiple point source IDs: All points in grid cells with multiple PSIDs, i.e. all overlap points

Note that the max scan angle criterion may not be appropriate when more than two flightlines overlap, since it will result in only flagging points from one of the multiple flightlines.

It is important to set the resolution parameter appropriately; setting this value too high will result in points being flagged in non-overlap areas, while setting it too low will result in fewer than expected points being flagged. An appropriate resolution value may require experimentation, however a value that is 2-3 times the nominal point spacing has been previously recommended. The nominal point spacing can be determined using the lidar_info tool.

By default, all flagged overlap points are reclassified in the output LiDAR file (output) to class 12. Alternatively, if the user specifies the filter parameter, then each overlap point will be excluded from the output file. Classified overlap points may also be filtered from LiDAR point clouds using the filter_lidar tool.

Note that this tool is intended to be applied to LiDAR tile data containing points that have been merged from multiple overlapping flightlines. It is commonly the case that airborne LiDAR data from each of the flightlines from a survey are merged and then tiled into 1 km2 tiles, which are the target dataset for this tool.

See Also

flightline_overlap, recover_flightline_info, filter_lidar, lidar_info

Function Signature

def classify_overlap_points(self, in_lidar: Lidar, resolution: float = 1.0, overlap_criterion: str = "max scan angle", filter: bool = False) -> Lidar: ...
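
A minimal sketch, assuming a WbEnvironment object wbe, placeholder file names, and a nominal point spacing of roughly 0.5 m (so that the 1.5 m resolution falls within the recommended 2-3 times range):

tile = wbe.read_lidar('tile.las')
flagged = wbe.classify_overlap_points(tile, resolution=1.5, overlap_criterion="max scan angle")
wbe.write_lidar(flagged, 'tile_flagged.las')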

clean_vector

Description

This tool can be used to remove all features in Shapefiles that are of the null VectorGeometryType. It also removes line features with fewer than two vertices and polygon features with fewer than three vertices.

Parameters

input (Vector): The input Vector object

Returns

Vector: the return value

Function Signature

def clean_vector(self, input: Vector) -> Vector: ...

clip

This tool will extract all the features, or parts of features, that overlap with the features of the clip vector file. The clipping operation is one of the most common vector overlay operations in GIS and effectively imposes the boundary of the clip layer on a set of input vector features, or target features. The operation is sometimes likened to a 'cookie-cutter'. The input vector file can be of any feature type (i.e. points, lines, polygons), however, the clip vector must consist of polygons.

See Also

erase

Function Signature

def clip(self, input: Vector, clip_layer: Vector) -> Vector: ...
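
For example, clipping a roads layer to a study-area polygon might look like the following sketch (wbe and the file names are assumed placeholders):

roads = wbe.read_vector('roads.shp')
study_area = wbe.read_vector('study_area.shp')  # clip layer must be a polygon vector
roads_clipped = wbe.clip(roads, study_area)
wbe.write_vector(roads_clipped, 'roads_clipped.shp')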

clip_lidar_to_polygon

This tool can be used to isolate, or clip, all of the LiDAR points in a LAS file (input) contained within one or more vector polygon features. The user must specify the name of the input clip file (polygons), which must be a vector of a Polygon base shape type. The clip file may contain multiple polygon features and polygon hole parts will be respected during clipping, i.e. LiDAR points within polygon holes will be removed from the output LAS file.

Use the erase_polygon_from_lidar tool to perform the complementary operation of removing points from a LAS file that are contained within a set of polygons.

See Also

erase_polygon_from_lidar, filter_lidar, clip, clip_raster_to_polygon

Function Signature

def clip_lidar_to_polygon(self, input: Lidar, polygons: Vector) -> Lidar: ...

clip_raster_to_polygon

This tool can be used to clip an input raster (input) to the extent of a vector polygon (shapefile). The user must specify the name of the input clip file (polygons), which must be a vector of a Polygon base shape type. The clip file may contain multiple polygon features. Polygon hole parts will be respected during clipping, i.e. polygon holes will be removed from the output raster by setting them to a NoData background value. Raster grid cells that fall outside of the polygons in the clip file will be assigned the NoData background value in the output file. By default, the output raster will be cropped to the spatial extent of the clip file, unless the maintain_dimensions parameter is used, in which case the output grid extent will match that of the input raster. The grid resolution of the output raster is the same as that of the input raster.

It is very important that the input raster and the input vector polygon file share the same projection. The result is unlikely to be satisfactory otherwise.

See Also

erase_polygon_from_raster

Function Signature

def clip_raster_to_polygon(self, raster: Raster, polygons: Vector, maintain_dimensions: bool = False) -> Raster: ...
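
A short sketch of both modes of operation (wbe and the file names are assumed placeholders):

landcover = wbe.read_raster('landcover.tif')
basin = wbe.read_vector('basin.shp')
clipped = wbe.clip_raster_to_polygon(landcover, basin)  # cropped to the polygon extent
clipped_full = wbe.clip_raster_to_polygon(landcover, basin, maintain_dimensions=True)  # keeps the input grid extent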

closing

This tool performs a closing operation on an input greyscale image (input). A closing is a mathematical morphology operation involving an erosion (minimum filter) of a dilation (maximum filter) set. The closing operation, together with the opening operation, is frequently used in the fields of computer vision and digital image processing for image noise removal. The user must specify the size of the moving window in both the x and y directions (filter_size_x and filter_size_y).

See Also

opening, tophat_transform

Function Signature

def closing(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

clump

This tool re-categorizes data in a raster image by grouping cells that form discrete, contiguous areas into unique categories. Essentially this will produce a patch map from an input categorical raster, assigning each discrete feature a unique identifier. The input raster should either be Boolean (1's and 0's) or categorical. The input raster could be created using the reclass tool or one of the comparison operators (GreaterThan, LessThan, EqualTo, NotEqualTo). Use the treat-zeros-as-background-cells option (zero_background) if you would like unique identifiers to be assigned only to contiguous groups of non-zero values in the raster. Additionally, inter-cell connectivity can optionally include diagonally neighbouring cells if the diag flag is specified.

See Also

reclass, GreaterThan, LessThan, EqualTo, NotEqualTo

Function Signature

def clump(self, raster: Raster, diag: bool = False, zero_background: bool = False) -> Raster: ...
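
For example, to label contiguous patches of non-zero cells while allowing diagonal connections (a sketch with placeholder names):

forest_mask = wbe.read_raster('forest_mask.tif')  # Boolean raster: 1 = forest, 0 = background
patches = wbe.clump(forest_mask, diag=True, zero_background=True)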

compactness_ratio

The compactness ratio is an indicator of polygon shape complexity. The compactness ratio is defined as the polygon area divided by its perimeter. Unlike some other shape parameters (e.g. ShapeComplexityIndex), compactness ratio does not standardize to a simple Euclidean shape. Although widely used for landscape analysis, compactness ratio, like its inverse, the perimeter_area_ratio, exhibits the undesirable property of polygon size dependence (McGarigal et al. 2002). That is, holding shape constant, an increase in polygon size will cause a change in the compactness ratio.

The output data will be contained in the input vector's attribute table as a new field (COMPACT).

See Also

perimeter_area_ratio, ShapeComplexityIndex, related_circumscribing_circle

Function Signature

def compactness_ratio(self, input: Vector) -> Vector: ...

conservative_smoothing_filter

This tool performs a conservative smoothing filter on a raster image. A conservative smoothing filter can be used to remove short-range variability in an image, effectively acting to smooth the image. It is particularly useful for eliminating local spikes and reducing the noise in an image. The algorithm operates by calculating the minimum and maximum neighbouring values surrounding a grid cell. If the cell at the centre of the kernel is greater than the calculated maximum value, it is replaced with the maximum value in the output image. Similarly, if the cell value at the kernel centre is less than the neighbouring minimum value, the corresponding grid cell in the output image is replaced with the minimum value. This filter tends to alter an image very little compared with other smoothing filters such as the mean_filter, edge_preserving_mean_filter, bilateral_filter, median_filter, gaussian_filter, or olympic_filter.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

mean_filter, edge_preserving_mean_filter, bilateral_filter, median_filter, gaussian_filter, olympic_filter

Function Signature

def conservative_smoothing_filter(self, raster: Raster, filter_size_x: int = 3, filter_size_y: int = 3) -> Raster: ...

construct_vector_tin

This tool creates a vector triangular irregular network (TIN) for a set of vector points (input) using a 2D Delaunay triangulation algorithm. TIN vertex heights can be assigned based on either a field in the vector's attribute table (field), or alternatively, if the vector is of a z-dimension VectorGeometryTypeDimension, the point z-values may be used for vertex heights (use_z). For LiDAR points, use the lidar_construct_vector_tin tool instead.

Triangulation often creates very long, narrow triangles near the edges of the data coverage, particularly in convex regions along the data boundary. To avoid these spurious triangles, the user may optionally specify the maximum allowable edge length of a triangular facet (max_triangle_edge_length).

See Also

lidar_construct_vector_tin

Function Signature

def construct_vector_tin(self, input_points: Vector, field_name: str = "FID", use_z: bool = False, max_triangle_edge_length: float = float('inf')) -> Vector: ...

contours_from_points

This tool creates a contour coverage from a set of input points (input). The user must specify the contour interval (interval) and optionally, the base contour value (base). The degree to which contours are smoothed is controlled by the Smoothing Filter Size parameter (smooth). This value, which determines the size of a mean filter applied to the x-y position of vertices in each contour, should be an odd integer value, e.g. 3, 5, 7, 9, 11, etc. Larger values will result in smoother contour lines.

See Also

contours_from_raster

Function Signature

def contours_from_points(self, input: Vector, field_name: str = "", use_z_values: bool = False, max_triangle_edge_length: float = float('inf'), contour_interval: float = 10.0, base_contour: float = 0.0, smoothing_filter_size: int = 9) -> Vector: ...

contours_from_raster

This tool can be used to create a vector contour coverage from an input raster surface model (input), such as a digital elevation model (DEM). The user must specify the contour interval (interval) and optionally, the base contour value (base). The degree to which contours are smoothed is controlled by the Smoothing Filter Size parameter (smooth). This value, which determines the size of a mean filter applied to the x-y position of vertices in each contour, should be an odd integer value, e.g. 3, 5, 7, 9, 11, etc. Larger values will result in smoother contour lines. The tolerance parameter (tolerance) controls the amount of line generalization. That is, vertices in a contour line will be selectively removed from the line if they do not result in an angular deflection in the line's path of at least this threshold value. Increasing this value can significantly decrease the size of the output contour vector file, at the cost of generating straighter contour line segments.

See Also

raster_to_vector_polygons

Function Signature

def contours_from_raster(self, raster_surface: Raster, contour_interval: float = 10.0, base_contour: float = 0.0, smoothing_filter_size: int = 9, deflection_tolerance: float = 10.0) -> Vector: ...
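
A typical call might look like the following sketch (wbe and the file names are assumed placeholders):

dem = wbe.read_raster('DEM.tif')
contours = wbe.contours_from_raster(dem, contour_interval=5.0, base_contour=0.0, smoothing_filter_size=9, deflection_tolerance=10.0)
wbe.write_vector(contours, 'contours.shp')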

convert_nodata_to_zero

Description

This tool can be used to change the value within the grid cells of a raster (raster) that contain NoData to zero. The most common reason for using this tool is to change the background region of a raster image so that it can be included in analysis, since NoData values are usually ignored by most tools. This change, however, will result in the background no longer displaying transparently in most GIS software. The change can be reversed using the set_nodata_value tool.

See Also

set_nodata_value, Raster.is_nodata

Parameters

raster (Raster): The input Raster object

Returns

Raster: the return value

Function Signature

def convert_nodata_to_zero(self, raster: Raster) -> Raster: ...

corner_detection

This tool identifies corner patterns in boolean images using hit-and-miss pattern matching. Foreground pixels in the input image (input) are designated by any positive, non-zero values. Zero-valued and NoData-valued grid cells are interpreted by the algorithm as background values.

Reference

Fisher, R, Brown, N, Cammas, N, Fitzgibbon, A, Horne, S, Koryllos, K, Murdoch, A, Robertson, J, Sharman, T, Strachan, C, 2004. Hypertext Image Processing Resource. online: http://homepages.inf.ed.ac.uk/rbf/HIPR2/hitmiss.htm

Function Signature

def corner_detection(self, raster: Raster) -> Raster: ...

correct_vignetting

This tool can be used to reduce vignetting within an image. Vignetting refers to the reduction of image brightness away from the image centre (i.e. the principal point). Vignetting is a radiometric distortion resulting from lens characteristics. The algorithm calculates the brightness value in the output image (BVout) as:

BVout = BVin / [cos^n(arctan(d / f))]

Where d is the photo-distance from the principal point in millimetres, f is the focal length of the camera, in millimetres, and n is a user-specified parameter. Pixel distances are converted to photo-distances (in millimetres) using the specified image width, i.e. the distance between the left and right edges (mm). For many cameras, 4.0 is an appropriate value of the n parameter. A second pass of the image is used to rescale the output image so that it possesses the same minimum and maximum values as the input image.

If an RGB image is input, the analysis will be performed on the intensity component of the HSI transform.

Function Signature

def correct_vignetting(self, image: Raster, principal_point: Vector, focal_length: float = 304.8, image_width: float = 228.6, n_param: float = 4.0) -> Raster: ...

cost_allocation

This tool can be used to identify the 'catchment area' of each source grid cell in a cost-distance analysis. The user must specify the names of the input source and back-link raster files. Source cells (i.e. starting points for the cost-distance or least-cost path analysis) are designated as all positive, non-zero valued grid cells in the source raster. A back-link raster file can be created using the cost_distance tool and is conceptually similar to the D8 flow-direction pointer raster grid in that it describes the connectivity between neighbouring cells on the accumulated cost surface.

NoData values in the input back-link image are assigned NoData values in the output image.

See Also

cost_distance, cost_pathway, euclidean_allocation

Function Signature

def cost_allocation(self, source: Raster, backlink: Raster) -> Raster: ...

cost_distance

This tool can be used to perform cost-distance or least-cost pathway analyses. Specifically, this tool can be used to calculate the accumulated cost of traveling from the 'source grid cell' to each other grid cell in a raster dataset. It is based on the costs associated with traveling through each cell along a pathway represented in a cost (or friction) surface. If there are multiple source grid cells, each cell in the resulting cost-accumulation surface will reflect the accumulated cost to the source cell that is connected by the minimum accumulated cost-path. The user must specify the raster containing the source cells (source) and the raster containing the cost surface information (cost); the function returns both the cost-accumulation surface raster and the back-link raster. Source cells are designated as all positive, non-zero valued grid cells in the source raster. The cost (friction) raster can be created by combining the various cost factors associated with the specific problem (e.g. slope gradient, visibility, etc.) using a raster calculator or the weighted_overlay tool.

While the cost-accumulation surface raster can be helpful for visualizing the three-dimensional characteristics of the 'cost landscape', it is actually the back-link raster that is used as input to the other two cost-distance tools, cost_allocation and cost_pathway, to determine the least-cost linkages among neighbouring grid cells on the cost surface. If the accumulated cost surface is analogous to a digital elevation model (DEM) then the back-link raster is equivalent to the D8 flow-direction pointer. In fact, it is created in a similar way and uses the same convention for designating 'flow directions' between neighbouring grid cells. The algorithm for the cost-distance accumulation operation uses a type of priority-flood method similar to what is used for depression filling and flow accumulation operations.

NoData values in the input cost surface image are ignored during processing and assigned NoData values in the outputs. The output cost accumulation raster is of the float data type and continuous data scale.

See Also

cost_allocation, cost_pathway, weighted_overlay

Function Signature

def cost_distance(self, source: Raster, cost: Raster) -> Tuple[Raster, Raster]: ...
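
Because the function returns both the accumulated-cost surface and the back-link raster, the three cost-distance tools are typically chained together, as in this sketch (wbe and the file names are assumed placeholders):

source = wbe.read_raster('source_cells.tif')  # positive, non-zero cells are sources
cost = wbe.read_raster('cost_surface.tif')  # friction surface
accum, backlink = wbe.cost_distance(source, cost)
allocation = wbe.cost_allocation(source, backlink)  # catchment area of each source
destinations = wbe.read_raster('destinations.tif')
paths = wbe.cost_pathway(destinations, backlink)  # least-cost routes back to the sources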

cost_pathway

This tool can be used to map the least-cost pathway connecting each destination grid cell in a cost-distance analysis to a source cell. The user must specify the names of the input destination and back-link raster files. Destination cells (i.e. end points for the least-cost path analysis) are designated as all positive, non-zero valued grid cells in the destination raster. A back-link raster file can be created using the cost_distance tool and is conceptually similar to the D8 flow-direction pointer raster grid in that it describes the connectivity between neighbouring cells on the accumulated cost surface. All background grid cells in the output image are assigned the NoData value.

NoData values in the input back-link image are assigned NoData values in the output image.

See Also

cost_distance, cost_allocation

Function Signature

def cost_pathway(self, destination: Raster, backlink: Raster, zero_background: bool = False) -> Raster: ...

count_if

This tool counts the number of occurrences of a specified value (comparison_value) in a stack of input rasters (input_rasters). Each grid cell in the output raster will contain the number of occurrences of the specified value among the corresponding cells in the input rasters. At least two input rasters are required to run this tool. Each of the input rasters must share the same number of rows and columns and spatial extent. An error will be issued if this is not the case.

See Also

pick_from_list

Function Signature

def count_if(self, input_rasters: List[Raster], comparison_value: float) -> Raster: ...

create_colour_composite

This tool can be used to create a colour-composite image from three bands of multi-spectral imagery. The user must input images to enter into the red, green, and blue channels of the resulting composite image. The output image uses the 32-bit aRGB colour model, and therefore, in addition to red, green and blue bands, the user may optionally specify a fourth image that will be used to determine pixel opacity (the 'a' channel). If no opacity image is specified, each pixel will be opaque. This can be useful for cropping an image to an irregular-shaped boundary. The opacity channel can also be used to create transparent gradients in the composite image.

A balance contrast enhancement (BCE) can optionally be performed on the bands prior to creation of the colour composite. While this operation will add to the runtime of create_colour_composite, if the individual input bands have not already had contrast enhancements, then it is advisable that the BCE option be used to improve the quality of the resulting colour composite image.

NoData values in any of the input images are assigned NoData values in the output image and are not taken into account when performing the BCE operation. Please note, not all images have NoData values identified. When this is the case, and when the background value is 0 (often the case with multispectral imagery), the create_colour_composite tool can be told to ignore zero values using the treat_zeros_as_nodata parameter.

See Also

balance_contrast_enhancement, split_colour_composite

Function Signature

def create_colour_composite(self, red: Raster, green: Raster, blue: Raster, opacity: Raster = None, enhance: bool = True, treat_zeros_as_nodata: bool = False) -> Raster: ...
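
A short sketch of composite creation from three multispectral bands (wbe and the file names are assumed placeholders):

red = wbe.read_raster('band_red.tif')
green = wbe.read_raster('band_green.tif')
blue = wbe.read_raster('band_blue.tif')
composite = wbe.create_colour_composite(red, green, blue, enhance=True, treat_zeros_as_nodata=True)
wbe.write_raster(composite, 'composite.tif')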

create_plane

This tool can be used to create a new raster with values that are determined by the equation of a simple plane. The user must specify the name of a base raster (base_file) from which the output raster coordinate and dimensional information will be taken. In addition, the user must specify the values of the planar slope gradient (S; gradient) in degrees, the planar slope direction or aspect (A; aspect; 0 to 360 degrees), and a constant value (k; constant). The equation of the plane is as follows:

Z = tan(S) × sin(A - 180) × X + tan(S) × cos(A - 180) × Y + k

where X and Y are the X and Y coordinates of each grid cell in the grid. Notice that A is the direction, or azimuth, that the plane is facing.

Function Signature

def create_plane(self, base_file: Raster, gradient: float, aspect: float, constant: float) -> Raster: ...

crispness_index

The Crispness Index (C) provides a means of quantifying the crispness, or fuzziness, of a membership probability (MP) image. MP images describe the probability of each grid cell belonging to some feature or class. MP images contain values ranging from 0 to 1.

The index, as described by Lindsay (2006), is the ratio of the sum of the squared differences (from the image mean) in the MP image to the sum of the squared differences for the Boolean case in which the total probability, summed for the image, is arranged crisply.

C is closely related to a family of relative variation coefficients that measure variation in an MP image relative to the maximum possible variation (i.e. when the total probability is arranged such that grid cells contain only 1s or 0s). C ranges between 0 and 1; a low C-value indicates a nearly uniform spatial distribution of any probability value, and C = 1 indicates a crisp spatial probability distribution, containing only 1's and 0's.

C is calculated as follows:

C = SS_mp / SS_B = [∑pij(1 − p_bar)^2 + p_bar^2(RC − ∑pij)] is the denominator form used here, so that:

C = [∑(pij − p_bar)^2] / [∑pij(1 − p_bar)^2 + p_bar^2(RC − ∑pij)]

Note that there is an error in the original published equation. Specifically, the denominator should read:

∑pij(1 − p_bar)^2 + p_bar^2(RC − ∑pij)

rather than the published form:

∑pij(1 − p_bar^2) − p_bar^2(RC − ∑pij)

References

Lindsay, J. B. (2006). Sensitivity of channel mapping techniques to uncertainty in digital elevation data. International Journal of Geographical Information Science, 20(6), 669-692.

Function Signature

def crispness_index(self, raster: Raster, output_html_file: str) -> None: ...

cross_tabulation

This tool can be used to perform a cross-tabulation on two input raster images (raster1 and raster2) containing categorical data, i.e. classes. It will output a contingency table in HTML format (output_html_file). A contingency table, also known as a cross tabulation or crosstab, is a type of table that displays the multivariate frequency distribution of the variables. These tables provide a basic picture of the interrelation between two categorical variables and can help find interactions between them. cross_tabulation can provide useful information about the nature of land-use/land-cover (LULC) changes between two dates of classified multi-spectral satellite imagery. For example, the extent of urban expansion could be described using the information about the extent of pixels in an 'urban' class in Date 2 that were previously assigned to other classes (e.g. agricultural LULC categories) in the Date 1 imagery.

Both input images must share the same grid, as the analysis requires a comparison of a pair of images on a cell-by-cell basis. If a grid cell contains a NoData value in either of the input images, the cell will be excluded from the analysis.

Function Signature

def cross_tabulation(self, raster1: Raster, raster2: Raster, output_html_file: str) -> None: ...
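
For example, summarizing land-use/land-cover change between two classified images might look like this sketch (wbe, the file names, and the dates are assumed placeholders):

lulc_date1 = wbe.read_raster('lulc_1998.tif')
lulc_date2 = wbe.read_raster('lulc_2018.tif')
wbe.cross_tabulation(lulc_date1, lulc_date2, 'lulc_crosstab.html')  # writes the contingency table to an HTML file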

csv_points_to_vector

This tool can be used to import a series of points contained within a comma-separated values (*.csv) file (input_file) into a vector shapefile of a POINT VectorGeometryType. The input file must be an ASCII text file with a .csv extension. The tool will automatically detect the field data type; for numeric fields, it will also determine the appropriate length and precision. The user must specify the x-coordinate (x_field_num) and y-coordinate (y_field_num) fields. All fields are imported as attributes in the output vector file. The tool assumes that the first line of the file is a header line from which field names are retrieved.

See Also

merge_table_with_csv, export_table_to_csv

Function Signature

def csv_points_to_vector(self, input_file: str, x_field_num: int = 0, y_field_num: int = 1, epsg: int = 0) -> Vector: ...
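
A minimal sketch of an import (the file name, field numbers, and EPSG code are illustrative assumptions; wbe is an assumed WbEnvironment object):

wells = wbe.csv_points_to_vector('wells.csv', x_field_num=1, y_field_num=2, epsg=26917)
wbe.write_vector(wells, 'wells.shp')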

cumulative_distribution

This tool converts the values in an input image (input) into a cumulative distribution function. Therefore, the output raster will contain the cumulative probability value (0-1) of values equal to or less than the value in the corresponding grid cell in the input image. NoData values in the input image are not considered during the transformation and remain NoData values in the output image.

See Also

z_scores

Function Signature

def cumulative_distribution(self, raster: Raster) -> Raster: ...

d8_flow_accum

This tool is used to generate a flow accumulation grid (i.e. catchment area) using the D8 (O'Callaghan and Mark, 1984) algorithm. This algorithm is an example of a single-flow-direction (SFD) method because the flow entering each grid cell is routed to only one downslope neighbour, i.e. flow divergence is not permitted. The user must specify the name of the input digital elevation model (DEM) or flow pointer raster (input) derived using the D8 or Rho8 method (d8_pointer, rho8_pointer). If an input DEM is used, it must have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using the breach_depressions_least_cost or fill_depressions tools. If a D8 pointer raster is input, the user must also specify the input_is_pointer option. If the D8 pointer follows the Esri pointer scheme, rather than the default WhiteboxTools scheme, the user must also specify the esri_pntr option.

In addition to the input DEM/pointer, the user must specify the output type (out_type). The output flow-accumulation can be 1) cells (i.e. the number of inflowing grid cells), 2) catchment area (i.e. the upslope area), or 3) specific contributing area (i.e. the catchment area divided by the flow width). The default output type is 'sca' (specific contributing area). The user must also specify whether the output flow-accumulation grid should be log-transformed (log_transform), i.e. the output, if this option is selected, will be the natural logarithm of the accumulated flow value. This is a transformation that is often performed to better visualize the contributing area distribution. Because contributing areas tend to be very high along valley bottoms and relatively low on hillslopes, when a flow-accumulation image is displayed, the distribution of values on hillslopes tends to be 'washed out' because the palette is stretched out to represent the highest values. Log-transformation provides a means of compensating for this phenomenon. Importantly, however, log-transformed flow-accumulation grids must not be used to estimate other secondary terrain indices, such as the wetness index or relative stream power index.

Grid cells possessing the NoData value in the input DEM/pointer raster are assigned the NoData value in the output flow-accumulation image.

Reference

O'Callaghan, J. F., & Mark, D. M. 1984. The extraction of drainage networks from digital elevation data. Computer Vision, Graphics, and Image Processing, 28(3), 323-344.

See Also

FD8FlowAccumulation, quinn_flow_accumulation, qin_flow_accumulation, DInfFlowAccumulation, MDInfFlowAccumulation, rho8_pointer, d8_pointer, breach_depressions_least_cost, fill_depressions

Function Signature

def d8_flow_accum(self, raster: Raster, out_type: str = "sca", log_transform: bool = False, clip: bool = False, input_is_pointer: bool = False, esri_pntr: bool = False) -> Raster: ...
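
The following sketch shows the common workflow of DEM correction followed by flow accumulation (wbe and the file name are assumed placeholders):

dem = wbe.read_raster('DEM.tif')
dem_corrected = wbe.breach_depressions_least_cost(dem, fill_deps=True)
# log-transformed specific contributing area, for visualization only
sca = wbe.d8_flow_accum(dem_corrected, out_type='sca', log_transform=True)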

d8_mass_flux

This tool can be used to perform a mass flux calculation using DEM-based surface flow-routing techniques. For example, it could be used to model the distribution of sediment or phosphorous within a catchment. Flow-routing is based on a D8 flow pointer (i.e. flow direction) derived from an input depressionless DEM (dem). The user must also specify the names of the loading (loading), efficiency (efficiency), and absorption (absorption) rasters. Mass Flux operates very much like a flow-accumulation operation except that rather than accumulating catchment areas the algorithm routes a quantity of mass, the spatial distribution of which is specified within the loading image. The efficiency and absorption rasters represent spatial distributions of losses to the accumulation process, the difference being that the efficiency raster is a proportional loss (e.g. only 50% of material within a particular grid cell will be directed downslope) and the absorption raster is a loss specified as a quantity in the same units as the loading image. The efficiency image can range from 0 to 1, or alternatively, can be expressed as a percentage. The equation for determining the mass sent from one grid cell to a neighbouring grid cell is:

Outflowing Mass = (Loading - Absorption + Inflowing Mass) × Efficiency

This tool assumes that each of the three input rasters have the same number of rows and columns and that any NoData cells present are the same among each of the inputs.

See Also

DInfMassFlux

Function Signature

def d8_mass_flux(self, dem: Raster, loading: Raster, efficiency: Raster, absorption: Raster) -> Raster: ...

d8_pointer

This tool is used to generate a flow pointer grid using the simple D8 (O'Callaghan and Mark, 1984) algorithm. The user must specify the name (dem) of a digital elevation model (DEM) that has been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using either the breach_depressions_least_cost or fill_depressions tool. The local drainage direction raster output (output) by this tool serves as a necessary input for several other spatial hydrology and stream network analysis tools in the toolset. Some tools will calculate this flow pointer raster directly from the input DEM.

By default, D8 flow pointers use the following clockwise, base-2 numeric index convention:

64   128   1
32    0    2
16    8    4

Notice that grid cells that have no lower neighbours are assigned a flow direction of zero. In a DEM that has been pre-processed to remove all depressions and flat areas, this condition will only occur along the edges of the grid. If the esri_pointer parameter is specified, the output raster will use the Esri flow-direction scheme instead of the default convention shown above.

Grid cells possessing the NoData value in the input DEM are assigned the NoData value in the output image.

Memory Usage

The peak memory usage of this tool is approximately 10 bytes per grid cell.

Reference

O'Callaghan, J. F., & Mark, D. M. (1984). The extraction of drainage networks from digital elevation data. Computer vision, graphics, and image processing, 28(3), 323-344.

See Also

DInfPointer, fd8_pointer, breach_depressions_least_cost, fill_depressions

Function Signature

def d8_pointer(self, dem: Raster, esri_pointer: bool = False) -> Raster: ...

depth_in_sink

This tool measures the depth that each grid cell in an input (dem) raster digital elevation model (DEM) lies within a sink feature, i.e. a closed topographic depression. A sink, or depression, is a bowl-like landscape feature, which is characterized by interior drainage and groundwater recharge. The depth_in_sink tool operates by differencing a filled DEM, using the same depression filling method as fill_depressions, and the original surface model.

In addition to the input DEM (dem), the user must specify whether the background value (i.e. the value assigned to grid cells that are not contained within sinks) should be set to 0.0 (zero_background). Without this optional parameter specified, the tool will use the NoData value as the background value.

Reference

Antonić, O., Hatic, D., & Pernar, R. (2001). DEM-based depth in sink as an environmental estimator. Ecological Modelling, 138(1-3), 247-254.

See Also

fill_depressions

Function Signature

def depth_in_sink(self, dem: Raster, zero_background: bool = False) -> Raster: ...

deviation_from_mean_elevation

This tool can be used to calculate the difference between the elevation of each grid cell and the mean elevation of the centering local neighbourhood, normalized by standard deviation. Therefore, this index of topographic residual is essentially equivalent to a local z-score. This attribute measures the relative topographic position as a fraction of local relief, and so is normalized to the local surface roughness. DevFromMeanElev utilizes an integral image approach (Crow, 1984) to ensure highly efficient filtering that is invariant with filter size.

The user must input a digital elevation model (DEM) (dem) and the size of the neighbourhood in the x and y directions (filter_size_x and filter_size_y), measured in grid cells.

While DeviationFromMeanElev calculates the deviation from mean elevation (DEV) at a single, user-defined scale, the max_elevation_deviation tool can be used to output the per-pixel maximum DEV value across a range of input scales.

See Also

DiffFromMeanElev, max_elevation_deviation

Function Signature

def deviation_from_mean_elevation(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

deviation_from_regional_direction

This tool calculates the degree to which each polygon in an input shapefile (input) deviates from the average, or regional, direction. The input file will have a new attribute inserted in the attribute table, DEV_DIR, which will contain the calculated values. The deviation values are in degrees. The orientation of each polygon is determined based on the long-axis of the minimum bounding box fitted to the polygon. The regional direction is based on the mean direction of the polygons, weighted by long-axis length (longer polygons contribute more weight) and elongation, i.e., a function of the long and short axis lengths (greater elongation contributes more weight). Polygons with elongation values lower than the elongation threshold value (elongation_threshold), which has values between 0 and 1, will be excluded from the calculation of the regional direction.

See Also

patch_orientation, elongation_ratio

Function Signature

def deviation_from_regional_direction(self, input: Vector, elongation_threshold: float = 0.75) -> Vector: ...

diff_of_gaussians_filter

This tool can be used to perform a difference-of-Gaussians (DoG) filter on a raster image. In digital image processing, DoG is a feature enhancement algorithm that involves the subtraction of one blurred version of an image from another, less blurred version of the original. The blurred images are obtained by applying filters with Gaussian-weighted kernels of differing standard deviations to the input image (input). Blurring an image using a Gaussian-weighted kernel suppresses high-frequency spatial information and emphasizes lower-frequency variation. Subtracting one blurred image from the other preserves spatial information that lies between the range of frequencies that are preserved in the two blurred images. Thus, the difference-of-Gaussians is a band-pass filter that discards all but a specified range of spatial frequencies that are present in the original image.

The algorithm operates by differencing the results of convolving two kernels of weights with each grid cell and its neighbours in an image. The weights of the convolution kernels are determined by the 2-dimensional Gaussian (i.e. normal) curve, which gives stronger weighting to cells nearer the kernel centre. The size of the two convolution kernels are determined by setting the two standard deviation parameters (sigma1 and sigma2); the larger the standard deviation the larger the resulting filter kernel. The second standard deviation should be a larger value than the first, however if this is not the case, the tool will automatically swap the two parameters. Both standard deviations can range from 0.5-20.

The difference-of-Gaussians filter can be used to emphasize edges present in an image. Other edge-sharpening filters also operate by enhancing high-frequency detail, but because random noise also has a high spatial frequency, many of these sharpening filters tend to enhance noise, which can be an undesirable artifact. The difference-of-Gaussians filter can remove high-frequency noise while emphasizing edges. This filter can, however, reduce overall image contrast.

See Also

gaussian_filter, fast_almost_gaussian_filter, laplacian_filter, LaplacianOfGaussianFilter

Function Signature

def diff_of_gaussians_filter(self, raster: Raster, sigma1: float = 2.0, sigma2: float = 4.0) -> Raster: ...

difference

This tool will remove all the overlapping features, or parts of overlapping features, between input and overlay vector files, outputting only the features that occur in one of the two inputs but not both. The Symmetrical Difference is related to the Boolean exclusive-or (XOR) operation in set theory and is one of the common vector overlay operations in GIS. The user must specify the names of the input and overlay vector files as well as the output vector file name. The tool operates on vector points, lines, or polygon, but both the input and overlay files must contain the same VectorGeometryType.

The Symmetrical Difference can also be derived using a combination of other vector overlay operations, as either (A union B) difference (A intersect B), or (A difference B) union (B difference A).

The attributes of the two input vectors will be merged in the output attribute table. Fields that are duplicated between the inputs will share a single attribute in the output. Fields that only exist in one of the two inputs will be populated by null in the output table. Multipoint VectorGeometryTypes however will simply contain a single output feature identifier (FID) attribute. Also, note that depending on the VectorGeometryType (polylines and polygons), Measure and Z ShapeDimension data will not be transferred to the output geometries. If the input attribute table contains fields that measure the geometric properties of their associated features (e.g. length or area), these fields will not be updated to reflect changes in geometry shape and size resulting from the overlay operation.

See Also

intersect, union, clip, erase

Function Signature

def difference(self, input: Vector, overlay: Vector) -> Vector: ...

difference_from_mean_elevation

This tool can be used to calculate the difference between the elevation of each grid cell and the mean elevation of the centering local neighbourhood. This is similar to what a high-pass filter calculates for imagery data, but is intended to work with DEM data instead. This attribute measures the relative topographic position. DiffFromMeanElev utilizes an integral image approach (Crow, 1984) to ensure highly efficient filtering that is invariant with filter size.

The user must specify a digital elevation model (DEM) (dem) and the size of the neighbourhood in the x and y directions (filter_size_x and filter_size_y), measured in grid cells.

While DiffFromMeanElev calculates the DIFF at a single, user-defined scale, the max_difference_from_mean tool can be used to output the per-pixel maximum DIFF value across a range of input scales.

See Also

DevFromMeanElev, max_difference_from_mean

Function Signature

def difference_from_mean_elevation(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

dinf_flow_accum

This tool is used to generate a flow accumulation grid (i.e. contributing area) using the D-infinity algorithm (Tarboton, 1997). This algorithm is an example of a multiple-flow-direction (MFD) method because the flow entering each grid cell is routed to one or two downslope neighbours, i.e. flow divergence is permitted. The user must specify the name of the input digital elevation model or D-infinity pointer raster (input). If an input DEM is specified, the DEM should have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using the breach_depressions_least_cost or fill_depressions tool.

In addition to the input DEM/pointer raster name, the user must specify the output type (out_type). The output flow-accumulation can be 1) specific catchment area (SCA), which is the upslope contributing area divided by the contour length (taken as the grid resolution), 2) total catchment area in square metres, or 3) the number of upslope grid cells. The user must also specify whether the output flow-accumulation grid should be log-transformed (log_transform), i.e. the output, if this option is selected, will be the natural logarithm of the accumulated area. This is a transformation that is often performed to better visualize the contributing area distribution. Because contributing areas tend to be very high along valley bottoms and relatively low on hillslopes, when a flow-accumulation image is displayed, the distribution of values on hillslopes tends to be 'washed out' because the palette is stretched out to represent the highest values. Log-transformation provides a means of compensating for this phenomenon. Importantly, however, log-transformed flow-accumulation grids must not be used to estimate other secondary terrain indices, such as the wetness index or relative stream power index.

Grid cells possessing the NoData value in the input DEM/pointer raster are assigned the NoData value in the output flow-accumulation image. The output raster is of the float data type and continuous data scale.

Reference

Tarboton, D. G. (1997). A new method for the determination of flow directions and upslope areas in grid digital elevation models. Water resources research, 33(2), 309-319.

See Also

DInfPointer, D8FlowAccumulation, quinn_flow_accumulation, qin_flow_accumulation, FD8FlowAccumulation, MDInfFlowAccumulation, rho8_pointer, breach_depressions_least_cost, fill_depressions

Function Signature

def dinf_flow_accum(self, dem: Raster, out_type: str = "sca", convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False, input_is_pointer: bool = False) -> Raster: ...

dinf_mass_flux

This tool can be used to perform a mass flux calculation using DEM-based surface flow-routing techniques. For example, it could be used to model the distribution of sediment or phosphorous within a catchment. Flow-routing is based on a D-infinity flow pointer derived from an input DEM (dem). The user must also specify the names of the loading (loading), efficiency (efficiency), and absorption (absorption) rasters. Mass Flux operates very much like a flow-accumulation operation except that rather than accumulating catchment areas the algorithm routes a quantity of mass, the spatial distribution of which is specified within the loading image. The efficiency and absorption rasters represent spatial distributions of losses to the accumulation process, the difference being that the efficiency raster is a proportional loss (e.g. only 50% of material within a particular grid cell will be directed downslope) and the absorption raster is a loss specified as a quantity in the same units as the loading image. The efficiency image can range from 0 to 1, or alternatively, can be expressed as a percentage. The equation for determining the mass sent from one grid cell to a neighbouring grid cell is:

Outflowing Mass = (Loading - Absorption + Inflowing Mass) × Efficiency

This tool assumes that each of the three input rasters have the same number of rows and columns and that any NoData cells present are the same among each of the inputs.

See Also

d8_mass_flux

Function Signature

def dinf_mass_flux(self, dem: Raster, loading: Raster, efficiency: Raster, absorption: Raster) -> Raster: ...

dinf_pointer

This tool is used to generate a flow pointer grid (i.e. flow direction) using the D-infinity (Tarboton, 1997) algorithm. D-infinity is a multiple-flow-direction (MFD) method because the flow entering each grid cell is routed to one or two downslope neighbours, i.e. flow divergence is permitted. The user must specify the name of a digital elevation model (DEM; dem) that has been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using the breach_depressions_least_cost or fill_depressions tool. Flow directions are specified in the output flow-pointer grid as azimuth degrees measured from north, i.e. any value between 0 and 360 degrees is possible. A pointer value of -1 is used to designate a grid cell with no flow-pointer. This occurs when a grid cell has no downslope neighbour, i.e. a pit cell or topographic depression. Like aspect grids, D-infinity flow-pointer grids are best visualized using a circular greyscale palette.

Grid cells possessing the NoData value in the input DEM are assigned the NoData value in the output image. The output raster is of the float data type and continuous data scale.

Reference

Tarboton, D. G. (1997). A new method for the determination of flow directions and upslope areas in grid digital elevation models. Water resources research, 33(2), 309-319.

See Also

DInfFlowAccumulation, breach_depressions_least_cost, fill_depressions

Function Signature

def dinf_pointer(self, dem: Raster) -> Raster: ...

direct_decorrelation_stretch

The Direct Decorrelation Stretch (DDS) is a simple type of saturation stretch. The stretch is applied to a colour composite image and is used to improve the saturation, or colourfulness, of the image. The DDS operates by reducing the achromatic (grey) component of a pixel's colour by a scale factor (k), such that the red (r), green (g), and blue (b) components of the output colour are defined as:

rk = r - k min(r, g, b)

gk = g - k min(r, g, b)

bk = b - k min(r, g, b)

The achromatic factor (k) can range between 0 (no effect) and 1 (full saturation stretch), although typical values range from 0.3 to 0.7. A linear stretch is used afterwards to adjust overall image brightness. Liu and Moore (1996) recommend applying a colour balance stretch, such as balance_contrast_enhancement before using the DDS.

Reference

Liu, J.G., and Moore, J. (1996) Direct decorrelation stretch technique for RGB colour composition. International Journal of Remote Sensing, 17:5, 1005-1018.

See Also

create_colour_composite, balance_contrast_enhancement

Function Signature

def direct_decorrelation_stretch(self, image: Raster, achromatic_factor: float = 0.5, clip_percent: float = 1.0) -> Raster: ...

directional_relief

This tool calculates the relief for each grid cell in a digital elevation model (DEM) in a specified direction. Directional relief is an index of the degree to which a DEM grid cell is higher or lower than its surroundings. It is calculated by subtracting the elevation of a DEM grid cell from the average elevation of those cells which lie between it and the edge of the DEM in a specified compass direction. Thus, positive values indicate that a grid cell is lower than the average elevation of the grid cells in a specific direction (i.e. relatively sheltered), whereas a negative directional relief indicates that the grid cell is higher (i.e. relatively exposed). The algorithm is based on a modification of the procedure described by Lapen and Martz (1993). The modifications include: (1) the ability to specify any direction between 0-degrees and 360-degrees (azimuth), and (2) the ability to use a distance-limited search (max_dist), such that the ray-tracing procedure terminates before the DEM edge is reached for longer search paths. The algorithm works by tracing a ray from each grid cell in the direction of interest and evaluating the average elevation along the ray. Linear interpolation is used to estimate the elevation of the surface where a ray does not intersect the DEM grid precisely at one of its nodes. The user must input a DEM raster file (dem) and a hypothetical wind direction. Furthermore, the user is able to constrain the maximum search distance for the ray tracing. If no maximum search distance is specified, each ray will be traced to the edge of the DEM. The units of the output image are the same as the input DEM.

Ray-tracing is a highly computationally intensive task and therefore this tool may take considerable time to operate for larger sized DEMs. This tool is parallelized to aid with computational efficiency. NoData valued grid cells in the input image will be assigned NoData values in the output image. The output raster is of the float data type and continuous data scale. Directional relief is best displayed using the blue-white-red bipolar palette to distinguish between the positive and negative values that are present in the output.

Reference

Lapen, D. R., & Martz, L. W. (1993). The measurement of two simple topographic indices of wind sheltering-exposure from raster digital elevation models. Computers & Geosciences, 19(6), 769-779.

See Also

fetch_analysis, horizon_angle, relative_aspect

Function Signature

def directional_relief(self, dem: Raster, azimuth: float = 0.0, max_dist: float = float('inf')) -> Raster: ...

dissolve

This tool can be used to remove the interior, or shared, boundaries within a vector polygon coverage. You can either dissolve all interior boundaries or dissolve those boundaries along polygons with the same value of a user-specified attribute within the vector's attribute table. It may be desirable to use the VectorCleaning tool to correct any topological errors resulting from the slight misalignment of nodes along shared boundaries in the vector coverage before performing the dissolve operation.

See Also

clip, erase, polygonize

Function Signature

def dissolve(self, input: Vector, dissolve_field: str = "", snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

distance_to_outlet

Description

This tool calculates the distance of stream grid cells to the channel network outlet cell for each grid cell belonging to a raster stream network. The user must input a raster containing streams data (streams_raster), where stream grid cells are denoted by all positive non-zero values, and a D8 flow pointer (i.e. flow direction) raster (d8_pointer). The pointer image is used to traverse the stream network and must only be created using the D8 algorithm. Stream cells are designated in the streams image as all grid cells with values greater than zero. Thus, all non-stream or background grid cells are commonly assigned either zeros or NoData values. Background cells will be assigned the NoData value in the output image, unless the zero_background parameter is True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by Whitebox. If the pointer file contains ESRI flow direction values instead, the esri_pointer parameter must be True.

See Also

downslope_distance_to_stream, length_of_upstream_channels

Parameters

d8_pointer (Raster): The D8 pointer (flow direction) raster.

streams_raster (Raster): The raster object containing the streams data.

esri_pointer (bool): Determines whether the d8_pointer raster contains pointer data in the Esri format. Default is False.

zero_background (bool): Determines whether background cells in the output raster are assigned zero values (True) or NoData values (False). Default is False.

Returns

Raster: The output raster, containing the distance to the channel outlet for each stream cell.

Function Signature

def distance_to_outlet(self, d8_pointer: Raster, streams_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...
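
A usage sketch (the pointer and streams rasters are assumed to have been created beforehand, e.g. with the d8_pointer and extract_streams tools; file names are placeholders):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
d8_pntr = wbe.read_raster('d8_pointer.tif')
streams = wbe.read_raster('streams.tif')
dist = wbe.distance_to_outlet(d8_pntr, streams, esri_pointer=False, zero_background=False)
wbe.write_raster(dist, 'distance_to_outlet.tif')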

diversity_filter

This tool assigns each cell in the output grid the number of different values within a moving window centred on each grid cell in the input raster. The input image should contain integer values, but floating-point data are allowable and will be handled by multiplying pixel values by 1000 and rounding. Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values, e.g. 3, 5, 7, 9...

See Also

majority_filter

Function Signature

def diversity_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

downslope_distance_to_stream

This tool can be used to calculate the distance from each grid cell in a raster to the nearest stream cell, measured along the downslope flowpath. The user must specify the name of an input digital elevation model (dem) and streams raster (streams). The DEM must have been pre-processed to remove artifact topographic depressions and flat areas (see breach_depressions_least_cost). The streams raster should have been created using one of the DEM-based stream mapping methods, i.e. contributing area thresholding. Stream cells are designated in this raster as all non-zero values. The output of this tool, along with the elevation_above_stream tool, can be useful for preliminary flood plain mapping when combined with high-accuracy DEM data.

By default, this tool calculates flow paths using the D8 flow algorithm. However, the user may specify use_dinf=True to use the D-infinity algorithm instead.

See Also

elevation_above_stream, distance_to_outlet

Function Signature

def downslope_distance_to_stream(self, dem: Raster, streams: Raster, use_dinf: bool = False) -> Raster: ...
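
A usage sketch (the DEM is assumed to be depression-free and the streams raster created beforehand; file names are placeholders):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
dem = wbe.read_raster('dem_filled.tif')    # depression-free DEM
streams = wbe.read_raster('streams.tif')   # non-zero cells mark the stream network
dist = wbe.downslope_distance_to_stream(dem, streams, use_dinf=False)
wbe.write_raster(dist, 'downslope_dist_to_stream.tif')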

downslope_flowpath_length

This tool can be used to calculate the downslope flowpath length from each grid cell in a raster to an outlet cell either at the edge of the grid or at the outlet point of a watershed. The user must specify a flow pointer grid (d8_pointer) derived using the D8 flow algorithm (see the d8_pointer tool). This grid should be derived from a digital elevation model (DEM) that has been pre-processed to remove artifact topographic depressions and flat areas (breach_depressions_least_cost, fill_depressions). The user may also optionally provide watershed (watersheds) and weights (weights) images. The optional watershed image can be used to define one or more irregular-shaped watershed boundaries. Flowpath lengths are measured within each watershed in the watershed image (each defined by a unique identifying number) as the flowpath length to the watershed's outlet cell.

The optional weight image is multiplied by the flow-length through each grid cell. This can be useful when there is a need to convert the units of the output image. For example, the default unit of flowpath lengths is the same as the input image(s). Thus, if the input image has X-Y coordinates measured in metres, the output image will likely contain very large values. A weight image containing a value of 0.001 for each grid cell will effectively convert the output flowpath lengths into kilometres. The weight image can also be used to convert the flowpath distances into travel times by multiplying the flow distance through a grid cell by the average velocity.

NoData valued grid cells in any of the input images will be assigned NoData values in the output image. The output raster is of the float data type and continuous data scale.

See Also

d8_pointer, elevation_above_stream, breach_depressions_least_cost, fill_depressions, watershed

Function Signature

def downslope_flowpath_length(self, d8_pointer: Raster, watersheds: Raster, weights: Raster, esri_pntr: bool = False) -> Raster: ...

downslope_index

This tool can be used to calculate the downslope index described by Hjerdt et al. (2004). The downslope index is a measure of the slope gradient between a grid cell and some downslope location (along the flowpath passing through the upslope grid cell) that represents a specified vertical drop (i.e. a potential head drop). The index has been shown to be useful for hydrological, geomorphological, and biogeochemical applications.

The user must input a digital elevation model (DEM) raster (dem). This DEM should have been pre-processed to remove artifact topographic depressions and flat areas. The user must also specify the vertical head-potential drop (vertical_drop; referred to as d below), and the output type (output_type). The output type can be either 'tangent', 'degrees', 'radians', or 'distance'. If 'distance' is selected as the output type, the output grid actually represents the downslope flowpath length required to drop d meters from each grid cell. Linear interpolation is used when the specified drop value is encountered between two adjacent grid cells along a flowpath traverse.

Notice that this algorithm is affected by edge contamination. That is, for some grid cells, the edge of the grid will be encountered along a flowpath traverse before the specified vertical drop occurs. In these cases, the value of the downslope index is approximated by replacing d with the actual elevation drop observed along the flowpath. To avoid this problem, the entire watershed containing an area of interest should be contained in the DEM.

Grid cells containing NoData values in any of the input images are assigned the NoData value in the output raster. The output raster is of the float data type and continuous data scale.

Reference

Hjerdt, K.N., McDonnell, J.J., Seibert, J. Rodhe, A. (2004) A new topographic index to quantify downslope controls on local drainage, Water Resources Research, 40, W05602, doi:10.1029/2004WR003130.

Function Signature

def downslope_index(self, dem: Raster, vertical_drop: float, output_type: str = "tangent") -> Raster: ...

edge_contamination

This tool identifies grid cells in a DEM for which the upslope area extends beyond the raster data extent, so-called 'edge-contaminated cells'. If a significant number of edge-contaminated cells intersect with your area of interest, it is likely that any estimate of upslope area (i.e. flow accumulation) will be under-estimated.

The user must specify the input digital elevation model (dem), which must have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using either the breach_depressions_least_cost or fill_depressions tools.

Additionally, the user must specify the type of flow algorithm used for the analysis (flow_type), which must be one of 'd8', 'mfd', or 'dinf', corresponding to the D8FlowAccumulation, FD8FlowAccumulation, and DInfFlowAccumulation methods respectively.

See Also

D8FlowAccumulation, FD8FlowAccumulation, DInfFlowAccumulation

Function Signature

def edge_contamination(self, dem: Raster, flow_type: str = "mfd", z_factor: float = -1.0) -> Raster: ...

edge_density

This tool calculates the density of edges, or breaks-in-slope, within an input digital elevation model (DEM). A break-in-slope occurs between two neighbouring grid cells if the angular difference between their normal vectors is greater than a user-specified threshold value (normal_diff_threshold). edge_density calculates the proportion of edge cells within the neighbouring window, of square filter dimension filter_size, surrounding each grid cell. edge_density is therefore a measure of how complex the topographic surface is within a local neighbourhood, i.e. a measure of topographic texture. It will take a value near 0.0 for smooth sites and near 1.0 in areas of high surface roughness or complex topography.

The distribution of edge_density values is highly dependent upon the normal_diff_threshold value used in the calculation. This threshold may require experimentation to find an appropriate value and is likely dependent upon the topography and source data. Nonetheless, experience has shown that edge_density provides one of the best measures of surface texture of any of the available roughness tools.

See Also

circular_variance_of_aspect, multiscale_roughness, surface_area_ratio, ruggedness_index

Function Signature

def edge_density(self, dem: Raster, filter_size: int = 11, normal_diff_threshold: float = 5.0, z_factor: float = 1.0) -> Raster: ...

edge_preserving_mean_filter

This tool performs a type of edge-preserving mean filter operation on an input image. The filter, a type of low-pass filter, can be used to emphasize the longer-range variability in an image, effectively acting to smooth the image and to reduce noise. The algorithm calculates the average value in a moving window centred on each grid cell, including in the averaging only the set of neighbouring values for which the absolute difference with the centre value is less than a specified threshold value (threshold). It is, therefore, similar to the bilateral_filter, except all neighbours within the threshold difference are equally weighted and neighbour distance is not accounted for. Filter kernels are always square, and the filter size is specified using the filter_size parameter. This dimension should be an odd, positive integer value, e.g. 3, 5, 7, 9...

This tool works with both greyscale and red-green-blue (RGB) input images. RGB images are decomposed into intensity-hue-saturation (IHS) and the filter is applied to the intensity channel. If an RGB image is input, the threshold value must be in the range 0.0-1.0 (more likely less than 0.15), where a value of 1.0 would result in an ordinary mean filter (mean_filter). NoData values in the input image are ignored during filtering.

See Also

mean_filter, bilateral_filter, gaussian_filter, median_filter, rgb_to_ihs

Function Signature

def edge_preserving_mean_filter(self, raster: Raster, filter_size: int = 11, threshold: float = 15.0) -> Raster: ...

edge_proportion

This tool will measure the edge proportion, i.e. the proportion of grid cells in a patch that are located along the patch's boundary, for an input raster image (raster). Edge proportion is an indicator of polygon shape complexity and elongation. The output raster contains the input features assigned their edge proportion. The tool also returns text data suitable for input to a spreadsheet or database.

Objects in the input raster are designated by their unique identifiers. Identifier values must be positive, non-zero whole numbers.

See Also

shape_complexity_index_raster, linearity_index, elongation_ratio

Function Signature

def edge_proportion(self, raster: Raster) -> Tuple[Raster, str]: ...

elev_relative_to_min_max

This tool can be used to express the elevation of a grid cell in a digital elevation model (DEM) as a percentage of the relief between the DEM minimum and maximum values. As such, it provides a basic measure of relative topographic position.

See Also

elev_relative_to_watershed_min_max, elevation_above_stream, elevation_above_pit

Function Signature

def elev_relative_to_min_max(self, dem: Raster) -> Raster: ...

elev_relative_to_watershed_min_max

This tool can be used to express the elevation of a grid cell in a digital elevation model (DEM) as a percentage of the relief between the watershed minimum and maximum values. As such, it provides a basic measure of relative topographic position. The user must input a DEM (dem) and a watersheds (watersheds) raster.

See Also

elev_relative_to_min_max, elevation_above_stream, elevation_above_pit

Function Signature

def elev_relative_to_watershed_min_max(self, dem: Raster, watersheds: Raster) -> Raster: ...

elevation_above_pit

This tool calculates the elevation of each grid cell in a digital elevation model (DEM) above the nearest downslope pit cell or grid edge cell, depending on which is encountered first during the flow-path traverse. The resulting image is therefore a measure of relative landscape position. The user must input a DEM file (dem). The flow-path traverse is based on the D8 flow algorithm.

See Also

elevation_above_stream

Function Signature

def elevation_above_pit(self, dem: Raster) -> Raster: ...

elevation_above_stream

This tool can be used to calculate the elevation of each grid cell in a raster above the nearest stream cell, measured along the downslope flowpath. This terrain index, a measure of relative topographic position, is essentially equivalent to the 'height above drainage' (HAND), as described by Renno et al. (2008). The user must specify the name of an input digital elevation model (dem) and streams raster (streams). The DEM must have been pre-processed to remove artifact topographic depressions and flat areas (see breach_depressions_least_cost). The streams raster should have been created using one of the DEM-based stream mapping methods, i.e. contributing area thresholding. Stream cells are designated in this raster as all non-zero values. The output of this tool, along with the downslope_distance_to_stream tool, can be useful for preliminary flood plain mapping when combined with high-accuracy DEM data.

The difference between elevation_above_stream and elevation_above_stream_euclidean is that the former calculates distances along drainage flow-paths while the latter calculates straight-line distances to stream channels.

Reference

Renno, C. D., Nobre, A. D., Cuartas, L. A., Soares, J. V., Hodnett, M. G., Tomasella, J., & Waterloo, M. J. (2008). HAND, a new terrain descriptor using SRTM-DEM: Mapping terra-firme rainforest environments in Amazonia. Remote Sensing of Environment, 112(9), 3469-3481.

See Also

elevation_above_stream_euclidean, downslope_distance_to_stream, elevation_above_pit, breach_depressions_least_cost

Function Signature

def elevation_above_stream(self, dem: Raster, streams: Raster) -> Raster: ...
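
A sketch of a simple HAND calculation (the DEM is assumed to be depression-free and the streams raster created beforehand; file names are placeholders):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
dem = wbe.read_raster('dem_filled.tif')
streams = wbe.read_raster('streams.tif')
hand = wbe.elevation_above_stream(dem, streams)   # height above the nearest downslope stream cell
wbe.write_raster(hand, 'hand.tif')                # e.g. for preliminary flood-plain mapping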

elevation_above_stream_euclidean

This tool can be used to calculate the elevation of each grid cell in a raster above the nearest stream cell, measured along the straight-line distance. This terrain index, a measure of relative topographic position, is related to the 'height above drainage' (HAND), as described by Renno et al. (2008). HAND is generally estimated with distances measured along drainage flow-paths, which can be calculated using the elevation_above_stream tool. The user must specify the name of an input digital elevation model (dem) and streams raster (streams). Stream cells are designated in this raster as all non-zero values. The output of this tool, along with the downslope_distance_to_stream tool, can be useful for preliminary flood plain mapping when combined with high-accuracy DEM data.

The difference between elevation_above_stream and elevation_above_stream_euclidean is that the former calculates distances along drainage flow-paths while the latter calculates straight-line distances to stream channels.

Reference

Renno, C. D., Nobre, A. D., Cuartas, L. A., Soares, J. V., Hodnett, M. G., Tomasella, J., & Waterloo, M. J. (2008). HAND, a new terrain descriptor using SRTM-DEM: Mapping terra-firme rainforest environments in Amazonia. Remote Sensing of Environment, 112(9), 3469-3481.

See Also

elevation_above_stream, downslope_distance_to_stream, elevation_above_pit

Function Signature

def elevation_above_stream_euclidean(self, dem: Raster, streams: Raster) -> Raster: ...

elevation_percentile

Elevation percentile (EP) is a measure of local topographic position (LTP). It expresses the vertical position of a digital elevation model (DEM) grid cell (z_0) as the percentile of the elevation distribution within the filter window, such that:

EP = count_(i ∈ C)(z_i > z_0) × (100 / n_C)

where z_0 is the elevation of the window's centre grid cell, z_i is the elevation of cell i contained within the neighbouring set C, and n_C is the number of grid cells contained within the window.

EP is unsigned and expressed as a percentage, bound between 0% and 100%. Quantile-based estimates (e.g., the median and interquartile range) are often used in nonparametric statistics to provide data variability estimates without assuming the distribution is normal. Thus, EP is largely unaffected by irregularly shaped elevation frequency distributions or by outliers in the DEM, resulting in a highly robust metric of LTP. In fact, elevation distributions within small to medium sized neighbourhoods often exhibit skewed, multimodal, and non-Gaussian distributions, where the occurrence of elevation errors can often result in distribution outliers. Thus, based on these statistical characteristics, EP is considered one of the most robust representations of LTP.

The algorithm implemented by this tool uses the relatively efficient running-histogram filtering algorithm of Huang et al. (1979). Because most DEMs contain floating point data, elevation values must be rounded to be binned. The sig_digits parameter is used to determine the level of precision preserved during this binning process. The algorithm is parallelized to further aid with computational efficiency.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

References

Newman, D. R., Lindsay, J. B., and Cockburn, J. M. H. (2018). Evaluating metrics of local topographic position for multiscale geomorphometric analysis. Geomorphology, 312, 40-50.

Huang, T., Yang, G.J.T.G.Y. and Tang, G., 1979. A fast two-dimensional median filtering algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing, 27(1), pp.13-18.

See Also

DevFromMeanElev, DiffFromMeanElev

Function Signature

def elevation_percentile(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sig_digits: int = 2) -> Raster: ...
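
A usage sketch (the window size and file names below are illustrative choices only):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
dem = wbe.read_raster('dem.tif')
# 101 x 101 cell window; elevations binned to 2 significant decimal digits
ep = wbe.elevation_percentile(dem, filter_size_x=101, filter_size_y=101, sig_digits=2)
wbe.write_raster(ep, 'elev_percentile.tif')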

eliminate_coincident_points

This tool can be used to remove any coincident, or nearly coincident, points from a vector points file. The user must specify the name of the input file, which must be of a POINTS VectorGeometryType, the output file name, and the tolerance distance. All points that are within the specified tolerance distance will be eliminated from the output file. A tolerance distance of 0.0 indicates that points must be exactly coincident to be removed.

See Also

LidarRemoveDuplicates

Function Signature

def eliminate_coincident_points(self, input: Vector, tolerance_dist: float) -> Vector: ...

elongation_ratio

This tool can be used to calculate the elongation ratio for vector polygons. The elongation ratio values calculated for each vector polygon feature will be placed in the accompanying database file (.dbf) as an elongation field (ELONGATION).

The elongation ratio (E) is:

E = 1 - S / L

Where S is the short-axis length, and L is the long-axis length. Axes lengths are determined by estimating the minimum bounding box.

The elongation ratio provides similar information to the Linearity Index. The ratio is not an adequate measure of overall polygon narrowness, because a highly sinuous but narrow polygon will have a low linearity (elongation) owing to the compact nature of such polygons.

Function Signature

def elongation_ratio(self, input: Vector) -> Vector: ...

embankment_mapping

This tool can be used to map and/or remove road embankments from an input fine-resolution digital elevation model (dem). Fine-resolution LiDAR DEMs can represent surface features such as road and railway embankments with high fidelity. However, transportation embankments are problematic for several environmental modelling applications, including soil and vegetation distribution mapping, where the pre-embankment topography is the controlling factor. The algorithm utilizes repositioned (search_dist) transportation network cells, derived from rasterizing a transportation vector (roads_vector), as seed points in a region-growing operation. The embankment region grows based on derived morphometric parameters, including road surface width (min_road_width), embankment width (typical_embankment_width and embankment_max_width), embankment height (typical_embankment_max_height), and absolute slope (spillout_slope). The tool can be run in two modes. By default the tool will simply map embankment cells, producing a Boolean output raster. If, however, remove_embankments=True is specified, the tool will additionally output a DEM for which the mapped embankment grid cells have been excluded and new surfaces have been interpolated based on the surrounding elevation values (see below).

Hillshade from original DEM:

Hillshade from embankment-removed DEM:

References

Van Nieuwenhuizen, N, Lindsay, JB, DeVries, B. 2021. Automated mapping of transportation embankments in fine-resolution LiDAR DEMs. Remote Sensing. 13(7), 1308; https://doi.org/10.3390/rs13071308

See Also

remove_off_terrain_objects, smooth_vegetation_residual

Function Signature

def embankment_mapping(self, dem: Raster, roads_vector: Vector, search_dist: float = 2.5, min_road_width: float = 6.0, typical_embankment_width: float = 30.0, typical_embankment_max_height: float = 2.0, embankment_max_width: float = 60.0, max_upwards_increment: float = 0.05, spillout_slope: float = 4.0, remove_embankments: bool = False) -> Tuple[Raster, Union[Raster, None]]: ...

emboss_filter

This tool can be used to perform one of eight 3x3 emboss filters on a raster image. Like the sobel_filter and prewitt_filter, the emboss_filter is often applied in edge-detection applications. While these other two common edge-detection filters approximate the slope magnitude of the local neighbourhood surrounding each grid cell, the emboss_filter can be used to estimate the directional slope. The kernel weights for each of the eight available filters are as follows:

North (n)

 0 -1  0
 0  0  0
 0  1  0

Northeast (ne)

 0  0 -1
 0  0  0
 1  0  0

East (e)

 0  0  0
 1  0 -1
 0  0  0

Southeast (se)

 1  0  0
 0  0  0
 0  0 -1

South (s)

 0  1  0
 0  0  0
 0 -1  0

Southwest (sw)

 0  0  1
 0  0  0
-1  0  0

West (w)

 0  0  0
-1  0  1
 0  0  0

Northwest (nw)

-1  0  0
 0  0  0
 0  0  1

The user must specify the direction (direction); options include 'n', 's', 'e', 'w', 'ne', 'se', 'nw', and 'sw'. The user may also optionally clip the output image distribution tails by a specified amount (clip_amount, e.g. 1%).

See Also

sobel_filter, prewitt_filter

Function Signature

def emboss_filter(self, raster: Raster, direction: str = "n", clip_amount: float = 0.0) -> Raster: ...
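
A usage sketch (file names are placeholders):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
image = wbe.read_raster('image.tif')
# emboss towards the northeast, clipping 1% from each tail of the output distribution
embossed = wbe.emboss_filter(image, direction="ne", clip_amount=1.0)
wbe.write_raster(embossed, 'embossed.tif')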

erase

This tool will remove all the features, or parts of features, that overlap with the features of the erase vector file. The erasing operation is one of the most common vector overlay operations in GIS and effectively imposes the boundary of the erase layer on a set of input vector features, or target features.

See Also

clip

Function Signature

def erase(self, input: Vector, erase_layer: Vector) -> Vector: ...

erase_polygon_from_lidar

This tool can be used to remove, or erase, all of the LiDAR points in a LAS file (input) contained within one or more vector polygon features. The user must specify the name of the input polygons file (polygons), which must be a vector of a Polygon base shape type. The polygons file may contain multiple polygon features, and polygon hole parts will be respected during the erase operation, i.e. LiDAR points within polygon holes will remain in the output LAS file.

Use the clip_lidar_to_polygon tool to perform the complementary operation of isolating the points in a LAS file that are contained within a set of polygons.

See Also

clip_lidar_to_polygon, filter_lidar, clip, clip_raster_to_polygon

Function Signature

def erase_polygon_from_lidar(self, input: Lidar, polygons: Vector) -> Lidar: ...

erase_polygon_from_raster

This tool can be used to set the values of an input raster (raster) to a NoData background value using a vector erasing polygon (polygons). The input erase polygon file must be a vector of a Polygon base shape type. The erase file may contain multiple polygon features. Polygon hole parts will be respected, i.e. polygon holes will not be removed from the output raster. Raster grid cells that fall inside a polygon in the erase file will be assigned the NoData background value in the output file.

See Also

clip_raster_to_polygon

Function Signature

def erase_polygon_from_raster(self, raster: Raster, polygons: Vector) -> Raster: ...

euclidean_allocation

This tool assigns grid cells in the output image the value of the nearest target cell in the input image, measured by the Euclidean distance (i.e. straight-line distance). Thus, euclidean_allocation essentially creates the Voronoi diagram for a set of target cells. Target cells are all non-zero, non-NoData grid cells in the input image. Distances are calculated using the same efficient algorithm (Shih and Wu, 2004) as the euclidean_distance tool.

Reference

Shih FY and Wu Y-T (2004), Fast Euclidean distance transformation in two scans using a 3 x 3 neighborhood, Computer Vision and Image Understanding, 93: 195-205.

See Also

euclidean_distance, voronoi_diagram, cost_allocation

Function Signature

def euclidean_allocation(self, input: Raster) -> Raster: ...

euclidean_distance

This tool will estimate the Euclidean distance (i.e. straight-line distance) between each grid cell and the nearest 'target cell' in the input image. Target cells are all non-zero, non-NoData grid cells. Distance in the output image is measured in the same units as the horizontal units of the input image.

Algorithm Description

The algorithm is based on the highly efficient distance transform of Shih and Wu (2004). It makes four passes of the image; the first pass initializes the output image; the second and third passes calculate the minimum squared Euclidean distance by examining the 3 x 3 neighbourhood surrounding each cell; the last pass takes the square root of cell values, transforming them into true Euclidean distances, and deals with NoData values that may be present. All NoData value grid cells in the input image will contain NoData values in the output image. As such, NoData is not a suitable background value for non-target cells. Background areas should be designated with zero values.

Reference

Shih FY and Wu Y-T (2004), Fast Euclidean distance transformation in two scans using a 3 x 3 neighborhood, Computer Vision and Image Understanding, 93: 195-205.

See Also

euclidean_allocation, cost_distance

Function Signature

def euclidean_distance(self, input: Raster) -> Raster: ...
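
A usage sketch combining the distance and allocation operations (file names are placeholders):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
targets = wbe.read_raster('targets.tif')   # non-zero cells are targets; background must be 0, not NoData
dist = wbe.euclidean_distance(targets)
alloc = wbe.euclidean_allocation(targets)  # nearest-target (Voronoi) allocation for the same targets
wbe.write_raster(dist, 'distance.tif')
wbe.write_raster(alloc, 'allocation.tif')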

export_table_to_csv

This tool can be used to export a vector's attribute table to a comma-separated values (CSV) file. CSV files store tabular data (numbers and text) in plain-text form such that each row corresponds to a record and each column to a field. Fields are typically separated by commas within records. The user must specify the input vector (and associated attribute file), the name of the output CSV file, and whether or not to include the field names as a header row in the output CSV file.

See Also

merge_table_with_csv

Function Signature

def export_table_to_csv(self, input: Vector, output_csv_file: str, headers: bool = True) -> None: ...
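
A minimal sketch (file names are placeholders):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
points = wbe.read_vector('sample_sites.shp')
wbe.export_table_to_csv(points, 'sample_sites.csv', headers=True)   # writes the CSV; returns nothing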

exposure_towards_wind_flux

This tool creates a new raster in which each grid cell is assigned the exposure of the land-surface to a hypothetical wind flux. It can be conceptualized as the angle between a plane orthogonal to the wind and a plane that represents the local topography at a grid cell (Böhner and Antonić, 2009). The user must input a digital elevation model (dem), as well as the dominant wind azimuth (azimuth) and a maximum search distance (max_dist) used to calculate the horizon angle. Notice that the specified azimuth represents a regional average wind direction.

Exposure towards the sloped wind flux essentially combines the relative terrain aspect and the maximum upwind slope (i.e. horizon angle). This terrain attribute accounts for land-surface orientation, relative to the wind, and shadowing effects of distant topographic features but does not account for deflection of the wind by topography. This tool should not be used on very extensive areas over which Earth's curvature must be taken into account. DEMs in projected coordinate systems are preferred.

Algorithm Description:

Exposure is measured based on the equation presented in Antonic and Legovic (1999):

cos(E) = cos(S) sin(H) + sin(S) cos(H) cos(Az - A)

Where, E is angle between a plane defining the local terrain and a plane orthogonal to the wind flux, S is the terrain slope, A is the terrain aspect, Az is the azimuth of the wind flux, and H is the horizon angle of the wind flux, which is zero when only the horizontal component of the wind flux is accounted for.

Exposure images are best displayed using a greyscale or bipolar palette to distinguish between the positive and negative values that are present in the output.

References

Antonić, O., & Legović, T. 1999. Estimating the direction of an unknown air pollution source using a digital elevation model and a sample of deposition. Ecological modelling, 124(1), 85-95.

Böhner, J., & Antonić, O. 2009. Land-surface parameters specific to topo-climatology. Developments in Soil Science, 33, 195-226.

See Also

relative_aspect

Function Signature

def exposure_towards_wind_flux(self, dem: Raster, azimuth: float = 0.0, max_dist: float = float('inf'), z_factor: float = 1.0) -> Raster: ...

extend_vector_lines

This tool can be used to extend vector lines by a specified distance. The user must specify the input vector (input), the distance to extend features by (distance), and whether to extend both ends, line starts, or line ends (extend_direction). The input vector must be of a POLYLINE base shape type and should be in a projected coordinate system.

Function Signature

def extend_vector_lines(self, input: Vector, distance: float, extend_direction: str = "both") -> Vector: ...

extract_by_attribute

This tool extracts features from an input vector into an output file based on attribute properties. The user must specify the input vector (input) and the filter statement (statement). The conditional statement is a single-line logical condition containing one or more attribute variables from the file's attribute table that evaluates to TRUE/FALSE. In addition to the common comparison and logical operators, i.e. < > <= >= == (EQUAL TO) != (NOT EQUAL TO) || (OR) && (AND), conditional statements may contain any valid mathematical operation and the null value. The following functions and constants may also be used within statements:

Identifier | Argument Amount | Argument Types | Description
min | >= 1 | Numeric | Returns the minimum of the arguments
max | >= 1 | Numeric | Returns the maximum of the arguments
len | 1 | String/Tuple | Returns the character length of a string, or the amount of elements in a tuple (not recursively)
floor | 1 | Numeric | Returns the largest integer less than or equal to a number
round | 1 | Numeric | Returns the nearest integer to a number. Rounds half-way cases away from 0.0
ceil | 1 | Numeric | Returns the smallest integer greater than or equal to a number
if | 3 | Boolean, Any, Any | If the first argument is true, returns the second argument, otherwise, returns the third
contains | 2 | Tuple, any non-tuple | Returns true if the second argument exists in the first tuple argument
contains_any | 2 | Tuple, Tuple of any non-tuple | Returns true if one of the values in the second tuple argument exists in the first tuple argument
typeof | 1 | Any | Returns "string", "float", "int", "boolean", "tuple", or "empty" depending on the type of the argument
math::is_nan | 1 | Numeric | Returns true if the argument is the floating-point value NaN, false if it is another floating-point value, and throws an error if it is not a number
math::is_finite | 1 | Numeric | Returns true if the argument is a finite floating-point number, false otherwise
math::is_infinite | 1 | Numeric | Returns true if the argument is an infinite floating-point number, false otherwise
math::is_normal | 1 | Numeric | Returns true if the argument is a floating-point number that is neither zero, infinite, subnormal, nor NaN, false otherwise
math::ln | 1 | Numeric | Returns the natural logarithm of the number
math::log | 2 | Numeric, Numeric | Returns the logarithm of the number with respect to an arbitrary base
math::log2 | 1 | Numeric | Returns the base 2 logarithm of the number
math::log10 | 1 | Numeric | Returns the base 10 logarithm of the number
math::exp | 1 | Numeric | Returns e^(number), (the exponential function)
math::exp2 | 1 | Numeric | Returns 2^(number)
math::pow | 2 | Numeric, Numeric | Raises a number to the power of the other number
math::cos | 1 | Numeric | Computes the cosine of a number (in radians)
math::acos | 1 | Numeric | Computes the arccosine of a number. The return value is in radians in the range [0, pi] or NaN if the number is outside the range [-1, 1]
math::cosh | 1 | Numeric | Hyperbolic cosine function
math::acosh | 1 | Numeric | Inverse hyperbolic cosine function
math::sin | 1 | Numeric | Computes the sine of a number (in radians)
math::asin | 1 | Numeric | Computes the arcsine of a number. The return value is in radians in the range [-pi/2, pi/2] or NaN if the number is outside the range [-1, 1]
math::sinh | 1 | Numeric | Hyperbolic sine function
math::asinh | 1 | Numeric | Inverse hyperbolic sine function
math::tan | 1 | Numeric | Computes the tangent of a number (in radians)
math::atan | 1 | Numeric | Computes the arctangent of a number. The return value is in radians in the range [-pi/2, pi/2]
math::atan2 | 2 | Numeric, Numeric | Computes the four quadrant arctangent in radians
math::tanh | 1 | Numeric | Hyperbolic tangent function
math::atanh | 1 | Numeric | Inverse hyperbolic tangent function
math::sqrt | 1 | Numeric | Returns the square root of a number. Returns NaN for a negative number
math::cbrt | 1 | Numeric | Returns the cube root of a number
math::hypot | 2 | Numeric | Calculates the length of the hypotenuse of a right-angle triangle given legs of length given by the two arguments
math::abs | 1 | Numeric | Returns the absolute value of a number, returning an integer if the argument was an integer, and a float otherwise
str::regex_matches | 2 | String, String | Returns true if the first argument matches the regex in the second argument (requires the regex_support feature flag)
str::regex_replace | 3 | String, String, String | Returns the first argument with all matches of the regex in the second argument replaced by the third argument (requires the regex_support feature flag)
str::to_lowercase | 1 | String | Returns the lower-case version of the string
str::to_uppercase | 1 | String | Returns the upper-case version of the string
str::trim | 1 | String | Strips whitespace from the start and the end of the string
str::from | >= 0 | Any | Returns the passed value as a string
bitand | 2 | Int | Computes the bitwise and of the given integers
bitor | 2 | Int | Computes the bitwise or of the given integers
bitxor | 2 | Int | Computes the bitwise xor of the given integers
bitnot | 1 | Int | Computes the bitwise not of the given integer
shl | 2 | Int | Computes the given integer bitwise shifted left by the other given integer
shr | 2 | Int | Computes the given integer bitwise shifted right by the other given integer
random | 0 | Empty | Returns a random float between 0 and 1 (requires the rand feature flag)
pi | 0 | Empty | Returns the value of the PI constant

The following are examples of valid conditional statements:

HEIGHT >= 300.0

CROP == "corn"

(ELEV >= 525.0) && (HGT_AB_GR <= 5.0)

math::ln(CARBON) > 1.0

VALUE == null

Function Signature

def extract_by_attribute(self, input: Vector, statement: str) -> Vector: ...
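
In a script, the filter statement is simply passed as a string. A sketch using the example statements above (the file name is a placeholder; ELEV and HGT_AB_GR are attribute fields assumed to exist in the table):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
parcels = wbe.read_vector('parcels.shp')
# keep features at or above 525 m elevation with a height above ground of 5 m or less
selected = wbe.extract_by_attribute(parcels, "(ELEV >= 525.0) && (HGT_AB_GR <= 5.0)")
wbe.write_vector(selected, 'selected_parcels.shp')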

extract_nodes

This tool converts vector lines or polygons into vertex points. The user must specify the name of the input vector, which must be of a polyline or polygon base shape type, and the name of the output point-type vector.

Function Signature

def extract_nodes(self, input: Vector) -> Vector: ...

extract_raster_values_at_points

This tool can be used to extract the values of one or more rasters (rasters) at the sites of a set of vector points. The data are output to the attribute table of the input points (points) vector, and the point values are also returned as text data, suitable for printing or saving to file. Attribute fields will be added to the table of the points file, with field names VALUE1, VALUE2, VALUE3, etc., each corresponding to the order of input rasters.

If you need to plot a chart of values from a raster stack at a set of points, the image_stack_profile tool may be more suitable for this application.

See Also

image_stack_profile, find_lowest_or_highest_points

Function Signature

def extract_raster_values_at_points(self, rasters: List[Raster], points: Vector) -> Tuple[Vector, str]: ...
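
Because the tool returns both the updated points vector and the text output, the call can be unpacked as a tuple (a sketch with placeholder file names):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
dem = wbe.read_raster('dem.tif')
slope_rast = wbe.read_raster('slope.tif')
sites = wbe.read_vector('sample_sites.shp')
sites_out, text_data = wbe.extract_raster_values_at_points([dem, slope_rast], sites)
print(text_data)   # VALUE1 corresponds to dem, VALUE2 to slope
wbe.write_vector(sites_out, 'sample_sites_with_values.shp')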

extract_streams

Description

This tool can be used to extract, or map, the likely stream cells from an input flow-accumulation image (flow_accumulation). The algorithm applies a threshold to the input flow accumulation image such that streams are considered to be all grid cells with accumulation values greater than the specified threshold (threshold). As such, this threshold represents the minimum area (area is used here as a surrogate for discharge) required to initiate and maintain a channel. Smaller threshold values result in more extensive stream networks and vice versa. Unfortunately, there is very little guidance regarding an appropriate method for determining the channel initiation area threshold in practice. As such, it is frequently determined either by examining map or imagery data, using field work, or by experimentation until a suitable or desirable channel network is identified. Notice that the threshold value will be unique for each landscape and dataset (including source and grid resolution), further complicating its a priori determination. There is also evidence that in some landscapes the threshold is a combined upslope area-slope function. Generally, a lower threshold is appropriate in humid climates and a higher threshold is appropriate in areas underlain by more resistant bedrock. Climate and bedrock resistance are two factors related to drainage density, i.e. the extent to which a landscape is dissected by drainage channels.

The background value of the output raster will be the NoData value unless zero_background is set to True.

See Also

extract_valleys

Parameters

flow_accumulation (Raster): The input flow accumulation Raster object.

threshold (float): The minimum accumulation value required to be part of a stream channel. Default is 0.0, but should be set higher.

zero_background (bool): Whether the output raster uses 0.0 for non-channel cells (True) or NoData (False). Default is False.

Returns

Raster: The output streams raster.

Function Signature

def extract_streams(self, flow_accumulation: Raster, threshold: float = 0.0, zero_background: bool = False) -> Raster: ...
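
The threshold is usually found by experimentation. A sketch of a typical workflow might look like the following (fd8_flow_accum is used here with a cell-count output; the DEM is assumed to be depression-free and the file names and threshold value are illustrative only):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
dem = wbe.read_raster('dem_filled.tif')
flow_accum = wbe.fd8_flow_accum(dem, out_type="cells")
# cells draining at least ~1000 upslope cells are mapped as streams; adjust to taste
streams = wbe.extract_streams(flow_accum, threshold=1000.0, zero_background=False)
wbe.write_raster(streams, 'streams.tif')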

extract_valleys

This tool can be used to extract channel networks from an input digital elevation models (dem) using one of three techniques that are based on local topography alone.

The Lindsay (2006) 'lower-quartile' method (variant='LQ') algorithm is a type of 'valley recognition' method. Other channel mapping methods, such as the Johnston and Rosenfeld (1975) algorithm, experience problems because channel profiles are not always 'v'-shaped, nor are they always apparent in small 3 x 3 windows. The lower-quartile method was developed as an alternative and more flexible valley recognition channel mapping technique. The lower-quartile method operates by running a filter over the DEM that calculates the percentile value of the centre cell with respect to the distribution of elevations within the filter window. The roving window is circular, the diameter of which should reflect the topographic variation of the area (e.g. the channel width or average hillslope length). If this variant is selected, the user must specify the filter_size parameter, in pixels, and this value should be an odd number (e.g. 3, 5, 7, etc.). The appropriateness of the selected window diameter will depend on the grid resolution relative to the scale of topographic features. Cells that are within the lower quartile of the distribution of elevations of their neighbourhood are flagged. Thus, the algorithm identifies grid cells that are in relatively low topographic positions at a local scale. This approach to channel mapping is only appropriate in fluvial landscapes. In regions containing numerous lakes and wetlands, the algorithm will pick out the edges of features.

The Johnston and Rosenfeld (1975) algorithm (variant='JandR') is a type of 'valley recognition' method and operates as follows: channel cells are flagged in a 3 x 3 window if the north and south neighbours are higher than the centre grid cell or if the east and west neighbours meet this same criterion. The group of cells that are flagged after one pass of the roving window constituted the drainage network. This method is best applied to DEMs that are relatively smooth and do not exhibit high levels of short-range roughness. As such, it may be desirable to use a smoothing filter before applying this tool. The feature_preserving_smoothing is a good option for removing DEM roughness while preserving the topographic information contain in breaks-in-slope (i.e. edges).

The Peucker and Douglas (1975) algorithm (variant='PandD') is one of the simplest and earliest algorithms for topography-based network extraction. Their 'valley recognition' method operates by passing a 2 x 2 roving window over a DEM and flagging the highest grid cell in each group of four. Once the window has passed over the entire DEM, channel grid cells are left unflagged. This method is also best applied to DEMs that are relatively smooth and do not exhibit high levels of short-range roughness. Pre-processing the DEM with the feature_preserving_smoothing tool may also be useful when applying this method.

Each of these methods of extracting valley networks result in line networks that can be wider than a single grid cell. As such, it is often desirable to thin the resulting network using a line-thinning algorithm. The option to perform line-thinning is provided by the tool as a post-processing step (line_thin=True).

References

Johnston, E. G., & Rosenfeld, A. (1975). Digital detection of pits, peaks, ridges, and ravines. IEEE Transactions on Systems, Man, and Cybernetics, (4), 472-480.

Lindsay, J. B. (2006). Sensitivity of channel mapping techniques to uncertainty in digital elevation data. International Journal of Geographical Information Science, 20(6), 669-692.

Peucker, T. K., & Douglas, D. H. (1975). Detection of surface-specific points by local parallel processing of discrete terrain elevation data. Computer Graphics and image processing, 4(4), 375-387.

See Also

feature_preserving_smoothing

Function Signature

def extract_valleys(self, dem: Raster, variant: str = "lq", line_thin: bool = False, filter_size: int = 5) -> Raster: ...

farthest_channel_head

Description

This tool calculates the upstream distance to the farthest stream head for each grid cell belonging to a raster stream network. The user must input a raster containing streams data (streams), where stream grid cells are denoted by all positive non-zero values, and a D8 flow pointer (i.e. flow direction) raster (d8_pointer). The pointer image is used to traverse the stream network and must only be created using the D8 algorithm. Stream cells are designated in the streams image as all values greater than zero. Thus, all non-stream or background grid cells are commonly assigned either zeros or NoData values. Background cells will be assigned the NoData value in the output image, unless zero_background=True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the user should specify esri_pointer=True.

See Also

length_of_upstream_channels, find_main_stem

Parameters

d8_pointer (Raster): The D8 pointer (flow direction) raster.

streams_raster (Raster): The raster object containing the streams data.

esri_pointer (bool): Determines whether the d8_pointer raster contains pointer data in the Esri format. Default is False.

zero_background (bool): Determines whether background cells in the output raster are assigned zero values (True) or NoData values (False). Default is False.

Returns

Raster: The output raster, containing the upstream distance to the farthest channel head for each stream cell.

Function Signature

def farthest_channel_head(self, d8_pointer: Raster, streams_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

fast_almost_gaussian_filter

This tool approximates a Gaussian blur by applying several passes of an averaging (mean) filter, following the method of Kovesi (2010). The implementation is somewhat modified from Dr. Kovesi's original MATLAB code in that it works with both greyscale and RGB images (decomposing to HSI and using the intensity data) and it handles the case of rasters that contain NoData values. This adds complexity relative to the original paper's assertion of 20 additions and 5 multiplications per pixel.

Also note that, for small values of sigma (< 1.8), you should probably just use the regular gaussian_filter tool.

Reference

P. Kovesi 2010 Fast Almost-Gaussian Filtering, Digital Image Computing: Techniques and Applications (DICTA), 2010 International Conference on.

Function Signature

def fast_almost_gaussian_filter(self, raster: Raster, sigma: float = 1.8) -> Raster: ...

fd8_flow_accum

This tool is used to generate a flow accumulation grid (i.e. contributing area) using the FD8 algorithm (Freeman, 1991), sometimes referred to as FMFD. This algorithm is an example of a multiple-flow-direction (MFD) method because the flow entering each grid cell is routed to each downslope neighbour, i.e. flow divergence is permitted. The user must specify the input digital elevation model (dem). The DEM must have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using either the breach_depressions_least_cost or fill_depressions tools. A value must also be specified for the exponent parameter (exponent), a number that controls the degree of dispersion in the resulting flow-accumulation grid. A lower value yields greater apparent flow dispersion across divergent hillslopes. Some experimentation suggests that a value of 1.1 is appropriate (Freeman, 1991), although this is almost certainly landscape-dependent.

In addition to the input DEM, the user must specify the output type (out_type). The output flow-accumulation can be 1) cells (i.e. the number of inflowing grid cells), 2) catchment area (i.e. the upslope area), or 3) specific contributing area (i.e. the catchment area divided by the flow width). The default value is specific contributing area ('sca'). The user must also specify whether the output flow-accumulation grid should be log-transformed (log_transform), i.e. the output, if this option is selected, will be the natural logarithm of the accumulated flow value. This is a transformation that is often performed to better visualize the contributing area distribution. Because contributing areas tend to be very high along valley bottoms and relatively low on hillslopes, when a flow-accumulation image is displayed, the distribution of values on hillslopes tends to be 'washed out' because the palette is stretched out to represent the highest values. Log-transformation provides a means of compensating for this phenomenon. Importantly, however, log-transformed flow-accumulation grids must not be used to estimate other secondary terrain indices, such as the wetness index, or relative stream power index.

The non-dispersive threshold (convergence_threshold) is a flow-accumulation value (measured in upslope grid cells, which is directly proportional to area) above which flow dispersion is no longer permitted. Grid cells with flow-accumulation values above this threshold will have their flow routed in a manner that is similar to the D8 single-flow-direction algorithm, directing all flow towards the steepest downslope neighbour. This is usually done under the assumption that flow dispersion, whilst appropriate on hillslope areas, is not realistic once flow becomes channelized.

Reference

Freeman, T. G. (1991). Calculating catchment area with divergent flow based on a regular grid. Computers and Geosciences, 17(3), 413-422.

See Also

D8FlowAccumulation, quinn_flow_accumulation, qin_flow_accumulation, DInfFlowAccumulation, MDInfFlowAccumulation, rho8_pointer

Function Signature

def fd8_flow_accum(self, dem: Raster, out_type: str = "sca", exponent: float = 1.1, convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False) -> Raster: ...
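
A usage sketch (the DEM is assumed to be depression-free; file names are placeholders):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
dem = wbe.read_raster('dem_filled.tif')
# specific contributing area, log-transformed for display purposes only
sca_log = wbe.fd8_flow_accum(dem, out_type="sca", exponent=1.1, log_transform=True)
wbe.write_raster(sca_log, 'fd8_sca_log.tif')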

fd8_pointer

This tool is used to generate a flow pointer grid (i.e. flow direction) using the FD8 (Freeman, 1991) algorithm. FD8 is a multiple-flow-direction (MFD) method because the flow entering each grid cell is routed to one or more downslope neighbours, i.e. flow divergence is permitted. The user must specify a digital elevation model (dem) that has been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using the breach_depressions_least_cost or fill_depressions tools.

By default, D8 flow pointers use the following clockwise, base-2 numeric index convention:

64  128  1
32    0  2
16    8  4

In the case of the FD8 algorithm, some portion of the flow entering a grid cell will be sent to each downslope neighbour. Thus, the FD8 flow-pointer value is the sum of each of the individual pointers for all downslope neighbours. For example, if a grid cell has downslope neighbours to the northeast, east, and south, the corresponding FD8 flow-pointer value will be 1 + 2 + 8 = 11. Using the naming convention above, this is the only combination of flow-pointers that will result in the combined value of 11. Using the base-2 naming convention allows for the storage of complex combinations of flow-pointers using a single numeric value, which is the reason for using this somewhat odd convention.

Reference

Freeman, T. G. (1991). Calculating catchment area with divergent flow based on a regular grid. Computers and Geosciences, 17(3), 413-422.

See Also

FD8FlowAccumulation, d8_pointer, DInfPointer, breach_depressions_least_cost, fill_depressions

Function Signature

def fd8_pointer(self, dem: Raster) -> Raster: ...

feature_preserving_smoothing

Description

This tool implements a highly modified form of the DEM de-noising algorithm described by Sun et al. (2007). It is very effective at removing surface roughness from digital elevation models (DEMs), without significantly altering breaks-in-slope. As such, this tool should be used for smoothing DEMs rather than either smoothing with low-pass filters (e.g. mean, median, Gaussian filters) or grid size coarsening by resampling. The algorithm works by 1) calculating the surface normal 3D vector of each grid cell in the DEM, 2) smoothing the normal vector field using a filtering scheme that applies more weight to neighbours with lower angular difference in surface normal vectors, and 3) uses the smoothed normal vector field to update the elevations in the input DEM.

Sun et al.'s (2007) original method was intended to work on input point clouds and fitted triangular irregular networks (TINs). The algorithm has been modified to work with input raster DEMs instead. In so doing, this algorithm calculates surface normal vectors from the planes fitted to 3 x 3 neighbourhoods surrounding each grid cell, rather than the triangular facet. The normal vector field smoothing and elevation updating procedures are also based on raster filtering operations. These modifications make this tool more efficient than Sun's original method, but will also result in a slightly different output than what would be achieved with Sun's method.

The user must specify the values of three key parameters, including the filter size (filter_size), the normal difference threshold (normal_diff_threshold), and the number of iterations (iterations). Lindsay et al. (2019) found that the degree of smoothing was less affected by the filter size than by either the normal difference threshold or the number of iterations. A filter size of 11, the default value, tends to work well in many cases. To increase the level of smoothing applied to the DEM, consider increasing the normal difference threshold, i.e. the maximum allowable angular difference in normal vectors between the centre cell of a filter window and a neighbouring cell. This parameter determines which neighbouring values are included in a filtering operation: higher values result in a greater number of neighbouring cells being included, and therefore smoother surfaces. Similarly, increasing the number of iterations from the default value of 3 to upwards of 5-10 will result in significantly greater smoothing.

[Figure: DEM surface before and after smoothing with feature_preserving_smoothing]

For a video tutorial on how to use the feature_preserving_smoothing tool, please see the video available on the Whitebox Geospatial YouTube channel.
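
Code Example

The example below shows one possible way of applying the tool to a LiDAR DEM; the file names, and the increased normal-difference threshold and iteration values, are illustrative only.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('lidar_DEM.tif')

# Increase normal_diff_threshold and iterations for stronger smoothing
smoothed = wbe.feature_preserving_smoothing(dem, filter_size=11, normal_diff_threshold=15.0, iterations=5)
wbe.write_raster(smoothed, 'DEM_smoothed.tif')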

Reference

Lindsay JB, Francioni A, Cockburn JMH. 2019. LiDAR DEM smoothing and the preservation of drainage features. Remote Sensing, 11(16), 1926; DOI: 10.3390/rs11161926.

Sun, X., Rosin, P., Martin, R., & Langbein, F. (2007). Fast and effective feature-preserving mesh denoising. IEEE Transactions on Visualization & Computer Graphics, (5), 925-938.

Parameters

dem (Raster): The input digital elevation model (DEM)

filter_size (int): The filter size used for smoothing. Default is 11.

normal_diff_threshold (float): The maximum allowable difference in the angle of the normals between two grid cells on the same facet. Default is 8.0.

iterations (int): The number of iterations used during smoothing. Default is 3.

max_elevation_diff (float): The maximum vertical distance by which a cell's elevation may be changed during smoothing. Default is infinity (i.e. no limit).

z_factor (float): Used to convert elevation units so that they match the horizontal units. Unless the two units differ, this should be set to 1.0. Default is 1.0.

Returns

Raster: return value

Function Signature

def feature_preserving_smoothing(self, dem: Raster, filter_size: int = 11, normal_diff_threshold: float = 8.0, iterations: int = 3, max_elevation_diff: float = float('inf'), z_factor: float = 1.0) -> Raster: ...

fetch_analysis

This tool creates a new raster in which each grid cell is assigned the distance, in meters, to the nearest topographic obstacle in a specified direction. It is a modification of the algorithm described by Lapen and Martz (1993). Unlike the original algorithm, Fetch Analysis is capable of analyzing fetch in any direction from 0-360 degrees. The user must input a digital elevation model (DEM) raster file, a hypothetical wind direction, and a value for the height increment parameter. The algorithm searches each grid cell in a path following the specified wind direction until the following condition is met:

Ztest >= Zcore + (D × I)

where Zcore is the elevation of the grid cell at which fetch is being determined, Ztest is the elevation of the grid cell being tested as a topographic obstacle, D is the distance between the two grid cells in meters, and I is the height increment in m/m. Lapen and Martz (1993) suggest values for I in the range of 0.025 m/m to 0.1 m/m based on their study of snow re-distribution in low-relief agricultural landscapes of the Canadian Prairies. If the directional search does not identify an obstacle grid cell before the edge of the DEM is reached, the distance between the DEM edge and Zcore is entered. Edge distances are assigned negative values to differentiate between these artificially truncated fetch values and those for which a valid topographic obstacle was identified. Notice that linear interpolation is used to estimate the elevation of the surface where a ray (i.e. the search path) does not intersect the DEM grid precisely at one of its nodes.

Ray-tracing is a highly computationally intensive task and therefore this tool may take considerable time to operate for larger sized DEMs. This tool is parallelized to aid with computational efficiency. NoData valued grid cells in the input image will be assigned NoData values in the output image. Fetch Analysis images are best displayed using the blue-white-red bipolar palette to distinguish between the positive and negative values that are present in the output.
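
Code Example

The following example calculates fetch for a hypothetical westerly wind (azimuth of 265 degrees); the file names and parameter values are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('DEM.tif')

fetch = wbe.fetch_analysis(dem, azimuth=265.0, height_increment=0.05)
wbe.write_raster(fetch, 'fetch_265.tif')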

Reference

Lapen, D. R., & Martz, L. W. (1993). The measurement of two simple topographic indices of wind sheltering-exposure from raster digital elevation models. Computers & Geosciences, 19(6), 769-779.

See Also

directional_relief, horizon_angle, relative_aspect

Function Signature

def fetch_analysis(self, dem: Raster, azimuth: float = 0.0, height_increment: float = 0.05) -> Raster: ...

fill_burn

Burns streams into a DEM using the FillBurn (Saunders, 1999) method which produces a hydro-enforced DEM. This tool uses the algorithm described in:

Lindsay JB. 2016. The practice of DEM stream burning revisited. Earth Surface Processes and Landforms, 41(5): 658-668. DOI: 10.1002/esp.3888

And:

Saunders, W. 1999. Preparation of DEMs for use in environmental modeling analysis, in: ESRI User Conference. pp. 24-30.

Function Signature

def fill_burn(self, dem: Raster, streams: Vector) -> Raster: ...

fill_depressions

This tool can be used to fill all of the depressions in a digital elevation model (DEM) and to remove the flat areas. This is a common pre-processing step required by many flow-path analysis tools to ensure continuous flow from each grid cell to an outlet located along the grid edge. The fill_depressions algorithm operates by first identifying single-cell pits, that is, interior grid cells with no lower neighbouring cells. Each pit cell is then visited from highest to lowest and a priority region-growing operation is initiated. The area of monotonically increasing elevation, starting from the pit cell and growing based on flood order, is identified. Once a cell that has not been previously visited and that possesses a lower elevation than its discovering neighbour cell is identified, the discovering neighbour is labelled as an outlet (spill point) and the outlet elevation is noted. The algorithm then back-fills the labelled region, raising the elevations in the output DEM to that of the outlet. This process is repeated for each pit cell, noting that nested pit cells are often solved by prior pits.

Once the depressions have been filled, the flat regions of filled pits are optionally treated (fix_flats) by applying a small slope gradient away from outlets (note, more than one outlet cell may exist for each depression). The user may optionally specify the size of the elevation increment used to solve flats (flat_increment), although it is best not to specify this optional value and to let the algorithm determine the most suitable value itself. The flat-fixing method applies a small gradient away from outlets using another priority region-growing operation (i.e. based on a priority queue), where priorities are set by the elevations in the input DEM (dem). This in effect ensures a gradient away from outlet cells while also following the natural pre-conditioned topography internal to depression areas. For example, if a large filled area occurs upstream of a damming road-embankment, the filled DEM will possess flow directions that are similar to those of the un-flooded valley, with flow following the valley bottom. In fact, the above case is better handled using the breach_depressions_least_cost tool, which would simply cut through the road embankment at the likely site of a culvert. However, the flat-fixing method of fill_depressions does mean that this common occurrence in LiDAR DEMs is less problematic.

The breach_depressions_least_cost tool, while slightly less computationally efficient than other hydrological pre-processing methods, often provides a lower-impact solution to topographic depressions and should be preferred in most applications. In comparison, the depression-filling method often provides a less satisfactory, higher-impact solution. It is therefore advisable that users try the breach_depressions_least_cost tool to remove depressions from their DEMs before using fill_depressions. Nonetheless, there are applications for which full depression filling using the fill_depressions tool may be preferred.

Note that this tool will not fill in NoData regions within the DEM. It is advisable to remove such regions using the fill_missing_data tool prior to application.
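
Code Example

A minimal example of filling depressions in a DEM; the file names are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('DEM.tif')

# Fill depressions and apply the flat-fixing gradient
filled = wbe.fill_depressions(dem, fix_flats=True)
wbe.write_raster(filled, 'DEM_filled.tif')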

See Also

breach_depressions_least_cost, sink, depth_in_sink, fill_missing_data

Function Signature

def fill_depressions(self, dem: Raster, fix_flats: bool = True, flat_increment: float = float('nan'), max_depth: float = float('inf')) -> Raster: ...

fill_depressions_planchon_and_darboux

This tool can be used to fill all of the depressions in a digital elevation model (DEM) and to remove the flat areas using the Planchon and Darboux (2002) method. This is a common pre-processing step required by many flow-path analysis tools to ensure continuous flow from each grid cell to an outlet located along the grid edge. This tool is currently not the most efficient depression-removal algorithm available in WhiteboxTools; fill_depressions and breach_depressions_least_cost are both more efficient and often produce better, lower-impact results.

The user may optionally specify the size of the elevation increment used to solve flats (flat_increment), although it is best not to specify this optional value and to let the algorithm determine the most suitable value itself.

Reference

Planchon, O. and Darboux, F., 2002. A fast, simple and versatile algorithm to fill the depressions of digital elevation models. Catena, 46(2-3), pp.159-176.

See Also

fill_depressions, breach_depressions_least_cost

Function Signature

def fill_depressions_planchon_and_darboux(self, dem: Raster, fix_flats: bool = True, flat_increment: float = float('nan')) -> Raster: ...

fill_depressions_wang_and_liu

This tool can be used to fill all of the depressions in a digital elevation model (DEM) and to remove the flat areas. This is a common pre-processing step required by many flow-path analysis tools to ensure continuous flow from each grid cell to an outlet located along the grid edge. The fill_depressions_wang_and_liu algorithm is based on the computationally efficient approach of examining each cell based on its spill elevation, starting from the edge cells and visiting cells in order of lowest spill elevation using a priority queue. As such, it is based on the algorithm first proposed by Wang and Liu (2006). However, it is currently not the most efficient depression-removal algorithm available in WhiteboxTools; fill_depressions and breach_depressions_least_cost are both more efficient and often produce better, lower-impact results.

If the input DEM has gaps, or missing-data holes, that contain NoData values, it is better to use the fill_missing_data tool to repair these gaps. That tool will interpolate values across the gaps and produce a more natural-looking surface than the flat areas that are produced by depression filling. Importantly, this tool's implementation assumes that there are no 'donut hole' NoData gaps within the area of valid data. Any NoData areas along the edge of the grid will simply be ignored and will remain NoData areas in the output image.

The user may optionally specify the size of the elevation increment used to solve flats (flat_increment), although it is best not to specify this optional value and to let the algorithm determine the most suitable value itself.

Reference

Wang, L. and Liu, H. 2006. An efficient method for identifying and filling surface depressions in digital elevation models for hydrologic analysis and modelling. International Journal of Geographical Information Science, 20(2): 193-213.

See Also

fill_depressions, breach_depressions_least_cost, fill_missing_data

Function Signature

def fill_depressions_wang_and_liu(self, dem: Raster, fix_flats: bool = True, flat_increment: float = float('nan')) -> Raster: ...

fill_missing_data

This tool can be used to fill in small gaps in a raster or digital elevation model (DEM). The gaps, or holes, must have recognized NoData values. If gaps do not currently have this characteristic, use the set_nodata_value tool and ensure that the data are stored using a raster format that supports NoData values. All valid, non-NoData values in the input raster will be assigned the same value in the output image.

The algorithm uses an inverse-distance weighted (IDW) scheme based on the valid values on the edge of NoData gaps to estimate gap values. The user must specify the filter size (filter_size), which determines the size of gap that is filled, and the IDW weight (weight).

The filter size, specified in grid cells, is used to determine how far the algorithm will search for valid, non-NoData values. Therefore, setting a larger filter size allows for the filling of larger gaps in the input raster.

The exclude_edge_nodata parameter can be used to exclude NoData values that are connected to the edges of the raster. It is usually the case that irregularly shaped DEMs have large regions of NoData values along the containing raster edges. This parameter can be used to exclude these regions from the gap-filling operation, leaving only interior gaps for filling.
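
Code Example

A short example of gap-filling a DEM while excluding the NoData collar along the raster edges; the file names are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('lidar_DEM.tif')

filled = wbe.fill_missing_data(dem, filter_size=11, weight=2.0, exclude_edge_nodata=True)
wbe.write_raster(filled, 'DEM_no_gaps.tif')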

See Also

set_nodata_value

Function Signature

def fill_missing_data(self, dem: Raster, filter_size: int = 11, weight: float = 2.0, exclude_edge_nodata: bool = False) -> Raster: ...

fill_pits

This tool can be used to remove pits from a digital elevation model (DEM). Pits are single grid cells with no downslope neighbours. They are important because they impede overland flow-paths. This tool will remove any pits in the input DEM that can be resolved by raising the elevation of the pit such that flow will continue past the pit cell to one of the downslope neighbours. Notice that this tool can be a useful pre-processing technique before running one of the more robust depression breaching (breach_depressions_least_cost) or filling (fill_depressions) techniques, which are designed to remove larger depression features.

See Also

breach_depressions_least_cost, fill_depressions, breach_single_cell_pits

Function Signature

def fill_pits(self, dem: Raster) -> Raster: ...

filter_lidar_classes

This tool can be used to remove points within a LAS LiDAR file that possess certain specified class values. The user must provide the input LiDAR dataset (input) and the class values to be excluded (exclusion_classes). Class values are specified by their numerical values, such that:

Classification Value   Meaning
0    Created, never classified
1    Unclassified
2    Ground
3    Low Vegetation
4    Medium Vegetation
5    High Vegetation
6    Building
7    Low Point (noise)
8    Reserved
9    Water
10   Rail
11   Road Surface
12   Reserved
13   Wire – Guard (Shield)
14   Wire – Conductor (Phase)
15   Transmission Tower
16   Wire-structure Connector (e.g. Insulator)
17   Bridge Deck
18   High Noise

Thus, to filter out low and high noise points from a point cloud, specify exclusion_classes=[7, 18]. To exclude a range of classes, list each value, e.g. exclusion_classes=[3, 4, 5, 7, 18]. Notice that usage of this tool assumes that the LAS file has undergone a comprehensive point classification, which not all point clouds have had. Use the lidar_info tool to determine the distribution of the various class values in your file.
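
Code Example

The following sketch removes low-noise and high-noise points from a LAS tile; the file names are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

lidar = wbe.read_lidar('tile.las')

# Exclude class 7 (low noise) and class 18 (high noise)
filtered = wbe.filter_lidar_classes(lidar, exclusion_classes=[7, 18])
wbe.write_lidar(filtered, 'tile_denoised.las')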

See Also

lidar_info

Function Signature

def filter_lidar_classes(self, input: Lidar, exclusion_classes: List[int]) -> Lidar: ...

filter_lidar_scan_angles

Function Signature

def filter_lidar_scan_angles(self, in_lidar: Lidar, threshold: int) -> Lidar: ...

filter_raster_features_by_area

This tool takes an input raster (input) containing integer-labelled features, such as the output of the clump tool, and removes all features that are smaller than a user-specified size (threshold), measured in grid cells. The user may specify whether removed features are replaced with zero values or NoData using the zero_background parameter.
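
Code Example

A brief example of removing small features from a labelled raster; the input file is assumed to contain integer-labelled features (e.g. the output of clump) and the file names are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

features = wbe.read_raster('clumps.tif')

# Remove features smaller than 25 grid cells, setting them to zero
filtered = wbe.filter_raster_features_by_area(features, threshold=25, zero_background=True)
wbe.write_raster(filtered, 'filtered_features.tif')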

See Also

clump

Function Signature

def filter_raster_features_by_area(self, input: Raster, threshold: int, zero_background: bool = False) -> Raster: ...

find_flightline_edge_points

Function Signature

def find_flightline_edge_points(self, in_lidar: Lidar) -> Lidar: ...

find_lowest_or_highest_points

This tool locates the lowest and/or highest cells in a raster and outputs these locations to a vector points file. The user must provide the input raster (raster) and has the option (output_type) to locate either the lowest value, highest value, or both values. The output vector's attribute table will contain fields for the points' XY coordinates and their values.

See Also

extract_raster_values_at_points

Function Signature

def find_lowest_or_highest_points(self, raster: Raster, output_type: str = "lowest") -> Vector: ...

find_main_stem

This tool can be used to identify the main channel in a stream network. The user must input a D8 pointer (flow direction) raster (d8_pointer), and a streams raster (streams_raster). The pointer raster is used to traverse the stream network and should only be created using the d8_pointer tool. By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools:

64   128   1
32    0    2
16    8    4

If the pointer file contains ESRI flow direction values instead, you must set esri_pointer=True.

The streams raster should have been created using one of the DEM-based stream mapping methods, i.e. contributing area thresholding. Stream grid cells are designated in the streams image as all positive, non-zero values. All non-stream cells will be assigned the NoData value in the output image, unless the user sets zero_background=True.

The algorithm operates by traversing each stream and identifying the longest stream-path draining to each outlet. When a confluence is encountered, the traverse follows the branch with the larger distance-to-head.
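
Code Example

The following example assumes that a D8 pointer and a streams raster have already been created from a pre-processed DEM; the file names are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

d8_pntr = wbe.read_raster('d8_pointer.tif')
streams = wbe.read_raster('streams.tif')

main_stem = wbe.find_main_stem(d8_pntr, streams, esri_pointer=False)
wbe.write_raster(main_stem, 'main_stem.tif')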

See Also

d8_pointer

Function Signature

def find_main_stem(self, d8_pointer: Raster, streams_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

find_noflow_cells

This tool can be used to find cells with undefined flow, i.e. no valid flow direction, based on the D8 flow direction algorithm (d8_pointer). These cells are therefore either at the bottom of a topographic depression or in the interior of a flat area. In a digital elevation model (DEM) that has been pre-processed to remove all depressions and flat areas (e.g. using breach_depressions_least_cost), this condition will only occur along the edges of the grid; otherwise, no-flow grid cells may also be situated in the interior. The user must specify the input DEM (dem).

See Also

d8_pointer, breach_depressions_least_cost

Function Signature

def find_noflow_cells(self, dem: Raster) -> Raster: ...

find_parallel_flow

This tool can be used to find cells in a stream network grid that possess parallel flow directions based on an input D8 flow-pointer grid (d8_pointer). Because streams rarely flow in parallel for significant distances, these areas are likely errors resulting from the biased assignment of flow direction based on the D8 method.

See Also

d8_pointer

Function Signature

def find_parallel_flow(self, d8_pntr: Raster, streams: Raster) -> Raster: ...

find_patch_edge_cells

This tool will identify all grid cells situated along the edges of patches or class features within an input raster (raster). Edge cells in the output raster will have the patch identifier value assigned in the corresponding grid cell. All non-edge cells will be assigned zero in the output raster. Patches (or classes) are designated by positive, non-zero values in the input image. Zero-valued and NoData-valued grid cells are interpreted as background cells by the tool.

See Also

edge_proportion

Function Signature

def find_patch_edge_cells(self, raster: Raster) -> Raster: ...

find_ridges

This tool can be used to identify ridge cells in a digital elevation model (DEM). Ridge cells are those that have lower neighbours either to the north and south or the east and west. Line thinning can optionally be used to create single-cell wide ridge networks by specifying the line_thin parameter.

Function Signature

def find_ridges(self, dem: Raster, line_thin: bool = True) -> Raster: ...

flatten_lakes

This tool can be used to set the elevations contained in a set of input vector lake polygons (lakes) to a consistent value within an input (dem) digital elevation model (DEM). Lake flattening is a common pre-processing step for DEMs intended for use in hydrological applications. This algorithm determines lake elevation automatically based on the minimum perimeter elevation for each lake polygon. The minimum perimeter elevation is assumed to be the lake outlet elevation and is assigned to the entire interior region of lake polygons, excluding island geometries. Note, this tool will not provide satisfactory results if the input vector polygons contain wide river features rather than true lakes. When this is the case, the tool will lower the entire river to the elevation of its mouth, leading to the creation of an artificial gorge.

See Also

fill_depressions

Function Signature

def flatten_lakes(self, dem: Raster, lakes: Vector) -> Raster: ...

flightline_overlap

This tool can be used to map areas of overlapping flightlines in an input LiDAR (LAS) dataset (input_lidar). The output raster will contain the number of different flightlines that are contained within each grid cell. The user must specify the desired cell size (resolution). The flightline associated with a LiDAR point is assumed to be contained within the point's Point Source ID property. Thus, the tool essentially counts the number of different Point Source ID values among the points contained within each grid cell. If the Point Source ID property is not set, or has been lost, users may wish to apply the recover_flightline_info tool prior to running flightline_overlap.

It is important to set the resolution parameter appropriately, as setting this value too high will yield the mis-characterization of non-overlap areas, and setting the resolution too low will result in fewer than expected overlap areas. An appropriate resolution value may require experimentation; however, a value that is 2-3 times the nominal point spacing has been previously recommended. The nominal point spacing can be determined using the lidar_info tool.

Note that this tool is intended to be applied to LiDAR tile data containing points that have been merged from multiple overlapping flightlines. It is commonly the case that airborne LiDAR data from each of the flightlines from a survey are merged and then tiled into 1 km² tiles, which are the target dataset for this tool.

Like many of the LiDAR related tools, the input and output file parameters are optional. If left unspecified, the tool will locate all valid LiDAR files within the current Whitebox working directory and use these for calculation (specifying the output raster file name based on the associated input LiDAR file). This can be a helpful way to run the tool on a batch of user inputs within a specific directory.
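
Code Example

A minimal example of mapping flightline overlap for a single LiDAR tile; the file names and resolution value are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

lidar = wbe.read_lidar('tile.las')

# A resolution of 2-3x the nominal point spacing is a reasonable starting point
overlap = wbe.flightline_overlap(lidar, resolution=2.0)
wbe.write_raster(overlap, 'flightline_overlap.tif')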

See Also

classify_overlap_points, recover_flightline_info, lidar_info

Function Signature

def flightline_overlap(self, input_lidar: Lidar, resolution: float = 1.0) -> Raster: ...

flip_image

This tool can be used to flip, or reflect, an image (input) either vertically, horizontally, or both. The axis of reflection is specified using the direction parameter. The input image is not reflected in place; rather, a new, reflected raster is returned.

Function Signature

def flip_image(self, raster: Raster, direction: str = "v") -> Raster: ...

flood_order

This tool takes an input digital elevation model (DEM) and creates an output raster where every grid cell contains the flood order of that cell within the DEM. The flood order is the sequence in which grid cells are encountered during a search, starting from the raster grid edges and the lowest grid cell and moving inward at increasing elevations. This is in fact similar to how the highly efficient Wang and Liu (2006) depression-filling algorithm and the Breach Depressions (Fast) tool operate. The output flood order raster contains the sequential order, from the lowest edge cell to the highest cell in the DEM.

Like the fill_depressions tool, flood_order will read the entire DEM into memory. This may make the algorithm ill suited to processing massive DEMs except where the user's computer has substantial memory (RAM) resources.

Reference

Wang, L., and Liu, H. (2006). An efficient method for identifying and filling surface depressions in digital elevation models for hydrologic analysis and modelling. International Journal of Geographical Information Science, 20(2), 193-213.

See Also

fill_depressions

Function Signature

def flood_order(self, dem: Raster) -> Raster: ...

flow_accum_full_workflow

Resolves all of the depressions in a DEM, outputting a breached DEM, an aspect-aligned non-divergent flow pointer, and a flow accumulation raster.

Function Signature

def flow_accum_full_workflow(self, dem: Raster, out_type: str = "sca", log_transform: bool = False, clip: bool = False, esri_pntr: bool = False) -> Tuple[Raster, Raster, Raster]: ...

flow_length_diff

flow_length_diff calculates the local maximum absolute difference in downslope flow-path length, which is useful in mapping drainage divides and ridges.

See Also

max_branch_length

Function Signature

def flow_length_diff(self, d8_pointer: Raster, esri_pointer: bool = False, log_transform: bool = False) -> Raster: ...

gamma_correction

This tool performs a gamma colour correction transform on an input image (input), such that each input pixel value (zin) is mapped to the corresponding output value (zout) as:

zout = zin^gamma

The user must specify the value of the gamma parameter. The input image may be of either a greyscale or RGB colour composite data type.

Function Signature

def gamma_correction(self, raster: Raster, gamma_value: float = 0.5) -> Raster: ...

gaussian_contrast_stretch

This tool performs a Gaussian stretch on a raster image. The observed histogram of the input image is fitted to a Gaussian histogram, i.e. normal distribution. A histogram matching technique is used to map the values from the input image onto the output Gaussian distribution. The user must input the number of tones (num_tones) used.

This tool is related to the more general histogram_matching tool, which can be used to fit any frequency distribution to an input image, and other contrast enhancement tools such as histogram_equalization, min_max_contrast_stretch, percentage_contrast_stretch, sigmoidal_contrast_stretch, and standard_deviation_contrast_stretch.

See Also

piecewise_contrast_stretch, histogram_equalization, min_max_contrast_stretch, percentage_contrast_stretch, sigmoidal_contrast_stretch, standard_deviation_contrast_stretch, histogram_matching

Function Signature

def gaussian_contrast_stretch(self, raster: Raster, num_tones: int = 256) -> Raster: ...

gaussian_curvature

This tool calculates the Gaussian curvature from a digital elevation model (DEM). Gaussian curvature is the product of maximal and minimal curvatures, and retains values in each point of the topographic surface after its bending without breaking, stretching, and compressing (Florinsky, 2017). Gaussian curvature is measured in units of m^-2.

The user must input a DEM (dem). The Z conversion factor (z_factor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log_transform). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

tangential_curvature, profile_curvature, plan_curvature, mean_curvature, minimal_curvature, maximal_curvature

Function Signature

def gaussian_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

gaussian_filter

This tool can be used to perform a Gaussian filter on a raster image. A Gaussian filter can be used to emphasize the longer-range variability in an image, effectively acting to smooth the image. This can be useful for reducing the noise in an image. The algorithm operates by convolving a kernel of weights with each grid cell and its neighbours in an image. The weights of the convolution kernel are determined by the 2-dimensional Gaussian (i.e. normal) curve, which gives stronger weighting to cells nearer the kernel centre. It is this characteristic that makes the Gaussian filter a more attractive alternative for image smoothing and noise reduction than the mean_filter. The size of the filter is determined by setting the standard deviation parameter (sigma), which is in units of grid cells; the larger the standard deviation the larger the resulting filter kernel. The standard deviation can be any number in the range 0.5-20.

gaussian_filter works with both greyscale and red-green-blue (RGB) colour images. RGB images are decomposed into intensity-hue-saturation (IHS) and the filter is applied to the intensity channel. NoData values in the input image are ignored during processing.

Like many low-pass filters, Gaussian filtering can significantly blur well-defined edges in the input image. The edge_preserving_mean_filter and bilateral_filter offer more robust feature preservation during image smoothing. gaussian_filter is relatively slow compared to the fast_almost_gaussian_filter tool, which offers a fast-running approximation to a Gaussian filter for larger kernel sizes.

See Also

fast_almost_gaussian_filter, mean_filter, median_filter, rgb_to_ihs

Function Signature

def gaussian_filter(self, raster: Raster, sigma: float = 0.75) -> Raster: ...

geomorphons

This tool can be used to perform a geomorphons landform classification based on an input digital elevation model (dem). The geomorphons concept is based on line-of-sight analysis for the eight topographic profiles in the cardinal directions surrounding each grid cell in the input DEM. The relative sizes of the zenith angle of a profile's maximum elevation angle (i.e. horizon angle) and the nadir angle of a profile's minimum elevation angle are then used to generate a ternary (base-3) digit: 0 when the nadir angle is less than the zenith angle, 1 when the two angles differ by less than a user-defined flatness threshold (flatness_threshold), and 2 when the nadir angle is greater than the zenith angle. A ternary number is then derived from the digits assigned to each of the eight profiles, with digits sequenced counter-clockwise from east. This ternary number forms the geomorphons code assigned to the grid cell. There are 3^8 = 6561 possible codes, although many of these codes are equivalent geomorphons through rotations and reflections. Some of the remaining geomorphons also rarely if ever occur in natural topography. Jasiewicz and Stepinski (2013) identified 10 common landform types by reclassifying related geomorphons codes. The user may choose to output these common forms (output_forms) rather than the raw ternary code. These landforms include:

Value   Landform Type
1       Flat
2       Peak (summit)
3       Ridge
4       Shoulder
5       Spur (convex)
6       Slope
7       Hollow (concave)
8       Footslope
9       Valley
10      Pit (depression)

One of the main advantages of the geomorphons method is that, being based on minimum/maximum elevation angles, the scale used to estimate the landform type at a site adapts to the surrounding terrain. In principle, choosing a large value of search distance (search_distance) should result in identification of a landform element regardless of its scale.

An experimental feature has been added to correct for global inclination. Global inclination biases the flatness threshold angle because it is measured relative to the z-axis, especially in locally flat areas. Setting analyze_residuals=True "flattens" the input by converting elevation values to the residuals of a 2-D linear model.
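
Code Example

The example below classifies landforms using the 10 common forms output; the file names and search distance are illustrative only.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('DEM.tif')

landforms = wbe.geomorphons(dem, search_distance=50, flatness_threshold=1.0, output_forms=True)
wbe.write_raster(landforms, 'geomorphons.tif')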

Reference

Jasiewicz, J., and Stepinski, T. F. (2013). Geomorphons — a pattern recognition approach to classification and mapping of landforms. Geomorphology, 182, 147-156.

See Also

PennockLandformClass

Function Signature

def geomorphons(self, dem: Raster, search_distance: int = 1, flatness_threshold: float = 1.0, flatness_distance: int = 0, skip_distance: int = 0, output_forms: bool = True, analyze_residuals: bool = False) -> Raster: ...

hack_stream_order

This tool can be used to assign the Hack stream order to each link in a stream network. According to this common stream numbering system, the main stream is assigned an order of one. All tributaries to the main stream (i.e. the trunk) are assigned an order of two; tributaries to second-order links are assigned an order of three, and so on. At each bifurcation (i.e. network junction), the trunk, or main stream, is identified as the branch with the furthest upstream distance.

Stream order is often used in hydro-geomorphic and ecological studies to quantify the relative size and importance of a stream segment to the overall river system. Unlike some other stream ordering systems, e.g. Horton-Strahler stream order (strahler_stream_order) and Shreve's stream magnitude (shreve_stream_magnitude), Hack's stream ordering method increases from the catchment outlet towards the channel heads. This has the main advantage that the catchment outlet is likely to be accurately located while the channel network extent may be less accurately mapped.

The user must input a streams raster image (streams_raster) and a D8 pointer image (d8_pntr). Stream cells are designated in the streams image as all positive, nonzero values. Thus all non-stream or background grid cells are commonly assigned either zeros or NoData values. The pointer image is used to traverse the stream network and should only be created using the D8 algorithm (d8_pointer). Background cells will be assigned the NoData value in the output image, unless zero_background=True is specified, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the user should specify esri_pntr=True.
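
Code Example

The following example assumes that a D8 pointer and a streams raster have already been derived from a pre-processed DEM; the file names are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

d8_pntr = wbe.read_raster('d8_pointer.tif')
streams = wbe.read_raster('streams.tif')

hack_order = wbe.hack_stream_order(d8_pntr, streams, esri_pntr=False, zero_background=False)
wbe.write_raster(hack_order, 'hack_order.tif')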

Reference

Hack, J. T. (1957). Studies of longitudinal stream profiles in Virginia and Maryland (Vol. 294). US Government Printing Office.

See Also

horton_stream_order, strahler_stream_order, shreve_stream_magnitude, topological_stream_order

Function Signature

def hack_stream_order(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

heat_map

This tool is used to generate a raster heat map, or kernel density estimation surface raster from a set of vector points (input). Heat mapping is a visualization and modelling technique used to create the continuous density surface associated with the occurrences of a point phenomenon. Heat maps can therefore be used to identify point clusters by mapping the concentration of event occurrence. For example, heat maps have been used extensively to map the spatial distributions of crime events (i.e. crime mapping) or disease cases.

By default, the tool maps the density of raw occurrence events; however, the user may optionally specify an associated weights field (field_name) from the point file's attribute table. When a weights field is specified, these values are simply multiplied by each of the individual components of the density estimate. Weights must be numeric.

The bandwidth parameter (bandwidth) determines the radius of the kernel used in calculation of the density surface. There are guidelines that statisticians use in determining an appropriate bandwidth for a particular population and data set, but often this parameter is determined through experimentation. The bandwidth of the kernel is a free parameter that exhibits a strong influence on the resulting estimate.

The user must specify the kernel function type (kernel_function). Options include 'uniform', 'triangular', 'epanechnikov', 'quartic', 'triweight', 'tricube', 'gaussian', 'cosine', 'logistic', 'sigmoid', and 'silverman'; 'quartic' is the default kernel type. Descriptions of each kernel function can be found in the reference below.

The characteristics of the output raster (resolution and extent) are determined by one of two optional parameters, cell_size and base_raster. If the user optionally specifies the output grid cell size parameter (cell_size), then the coordinates of the output raster extent are determined by the input vector (i.e. the bounding box) and the specified cell size determines the number of rows and columns. If the user instead specifies the optional base raster parameter (base_raster), the output raster's coordinates (i.e. north, south, east, west), row and column count, and, therefore, resolution will be the same as the base file.
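
Code Example

A sketch of a typical call; the point file name and the 'WEIGHT' attribute field are hypothetical, and the bandwidth and cell size values are illustrative only.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

points = wbe.read_vector('events.shp')

# Kernel density surface using a 250 m bandwidth and 10 m output resolution
density = wbe.heat_map(points, field_name='WEIGHT', bandwidth=250.0, cell_size=10.0, kernel_function='quartic')
wbe.write_raster(density, 'heat_map.tif')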

Reference

Geomatics (2017) QGIS Heatmap Using Kernel Density Estimation Explained, online resource: https://www.geodose.com/2017/11/qgis-heatmap-using-kernel-density.html visited 02/06/2022.

Function Signature

def heat_map(self, points: Vector, field_name: str, bandwidth: float = 0.0, cell_size: float = 0.0, base_raster: Raster = None, kernel_function: str = "quartic") -> Raster: ...

height_above_ground

This tool normalizes an input LiDAR point cloud (input) such that point z-values in the output LAS file (output) are converted from elevations to heights above the ground, specifically the height above the nearest ground-classified point. The input LAS file must have ground-classified points, otherwise the tool will return an error. The lidar_tophat_transform tool can be used to perform the normalization if a ground classification is lacking.

See Also

lidar_tophat_transform

Function Signature

def height_above_ground(self, input: Lidar) -> Lidar: ...

hexagonal_grid_from_raster_base

This tool can be used to create a hexagonal vector grid. The extent of the hexagonal grid is based on the extent of an input raster base file (base). The user must also specify the hexagonal cell width (width) and whether the hexagonal orientation (orientation) is horizontal or vertical. To use a vector base image instead of a raster, use the hexagonal_grid_from_vector_base tool.

See Also

hexagonal_grid_from_vector_base

Function Signature

def hexagonal_grid_from_raster_base(self, base: Raster, width: float, orientation: str = "h") -> Vector: ...

hexagonal_grid_from_vector_base

This tool can be used to create a hexagonal vector grid. The extent of the hexagonal grid is based on the extent of an input vector base file (base). The user must also specify the hexagonal cell width (width) and whether the hexagonal orientation (orientation) is horizontal or vertical. To use a raster base image instead of a vector, use the hexagonal_grid_from_raster_base tool.

See Also

hexagonal_grid_from_raster_base

Function Signature

def hexagonal_grid_from_vector_base(self, base: Vector, width: float, orientation: str = "h") -> Vector: ...

high_pass_filter

This tool performs a high-pass filter on a raster image. High-pass filters can be used to emphasize the short-range variability in an image. The algorithm operates essentially by subtracting the value at the grid cell at the centre of the window from the average value in the surrounding neighbourhood (i.e. window).

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

high_pass_median_filter, mean_filter

Function Signature

def high_pass_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

high_pass_median_filter

This tool performs a high-pass median filter on a raster image. High-pass filters can be used to emphasize the short-range variability in an image. The algorithm operates essentially by subtracting the value at the grid cell at the centre of the window from the median value in the surrounding neighbourhood (i.e. window).

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

high_pass_filter, median_filter

Function Signature

def high_pass_median_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sig_digits: int = 2) -> Raster: ...

highest_position

This tool identifies the stack position (index) of the maximum value within a raster stack on a cell-by-cell basis. For example, if five raster images (inputs) are input to the tool, the output raster (output) would show which of the five input rasters contained the highest value for each grid cell. The index value in the output raster is the zero-order number of the raster stack, i.e. if the highest value in the stack is contained in the first image, the output value would be 0; if the highest stack value were the second image, the output value would be 1, and so on. If any of the cell values within the stack is NoData, the output raster will contain the NoData value for the corresponding grid cell. The index value is related to the order of the input images.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

lowest_position, pick_from_list

Function Signature

def highest_position(self, input_rasters: List[Raster]) -> Raster: ...

hillshade

This tool performs a hillshade operation (also called shaded relief) on an input digital elevation model (DEM). The user must input a DEM. Other parameters that must be specified include the illumination source azimuth (azimuth), or sun direction (0-360 degrees), the illumination source altitude (altitude; i.e. the elevation of the sun above the horizon, measured as an angle from 0 to 90 degrees) and the Z conversion factor (z_factor). The Z conversion factor is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. If the DEM is in the geographic coordinate system (latitude and longitude), the following equation is used:

z_factor = 1.0 / (111320.0 × cos(mid_lat))

where mid_lat is the latitude of the centre of the raster, in radians.

The hillshade value (HS) of a DEM grid cell is calculate as:

HS = tan(s) / [1 - tan(s)^2]^0.5 × [sin(Alt) / tan(s) - cos(Alt) × sin(Az - a)]

where s and a are the local slope gradient and aspect (orientation) respectively and Alt and Az are the illumination source altitude and azimuth respectively. Slope and aspect are calculated using Horn's (1981) 3rd-order finite difference method.
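
Code Example

A minimal hillshading example; the file names are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('DEM.tif')

hs = wbe.hillshade(dem, azimuth=315.0, altitude=30.0, z_factor=1.0)
wbe.write_raster(hs, 'hillshade.tif')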

Reference

Gallant, J. C., and J. P. Wilson, 2000, Primary topographic attributes, in Terrain Analysis: Principles and Applications, edited by J. P. Wilson and J. C. Gallant pp. 51-86, John Wiley, Hoboken, N.J.

See Also

hypsometrically_tinted_hillshade, multidirectional_hillshade, aspect, slope

Function Signature

def hillshade(self, dem: Raster, azimuth: float = 315.0, altitude: float = 30.0, z_factor: float = 1.0) -> Raster: ...

hillslopes

This tool identifies the hillslopes associated with a user-specified stream network. Hillslopes include the catchment areas draining to the left and right sides of each stream link in the network, as well as the catchment areas draining to each channel head. Hillslopes are conceptually similar to sub-basins, except that sub-basins do not distinguish between the left-bank and right-bank catchment areas of stream links. The user must input a D8 flow pointer raster (d8_pntr), created using the d8_pointer tool, and a rasterized streams layer (streams), both derived from the same hydrologically pre-processed DEM. By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools; if the pointer file contains ESRI flow direction values instead, set esri_pntr=True.

Reference

Lindsay JB. 2016. The practice of DEM stream burning revisited. Earth Surface Processes and Landforms, 41(5): 658–668. DOI: 10.1002/esp.3888

See Also

raster_streams_to_vector, rasterize_streams

Function Signature

def hillslopes(self, d8_pntr: Raster, streams: Raster, esri_pntr: bool = False) -> Raster: ...

histogram_equalization

This tool alters the cumulative distribution function (CDF) of a raster image to match, as closely as possible, the CDF of a uniform distribution. Histogram equalization works by first calculating the histogram of the input image. This input histogram is then converted into a CDF. Each grid cell value in the input image is then mapped to the corresponding value in the uniform distribution's CDF that has an equivalent (or as close as possible) cumulative probability value. Histogram equalization provides a very effective means of performing image contrast adjustment in an efficient manner with little need for human input.

The user must specify the name of the input image to perform histogram equalization on. The user must also specify the number of tones, corresponding to the number of histogram bins used in the analysis.

histogram_equalization is related to the histogram_matching_two_images tool, which is used when an image's CDF is to be matched to a reference CDF derived from a reference image. histogram_matching and gaussian_contrast_stretch are similarly related tools frequently used for image contrast adjustment, where the reference CDFs are uniform and Gaussian (normal) respectively.

Notes:

  • The algorithm can introduce gaps in the histograms (steps in the CDF). This is to be expected because the histogram is being distorted. This is more prevalent for integer-level images.
  • Histogram equalization is not appropriate for images containing categorical (class) data.

See Also

piecewise_contrast_stretch, histogram_matching, histogram_matching_two_images, gaussian_contrast_stretch

Function Signature

def histogram_equalization(self, raster: Raster, num_tones: int = 256) -> Raster: ...

histogram_matching

This tool alters the cumulative distribution function (CDF) of a raster image to match, as closely as possible, the CDF of a reference histogram. Histogram matching works by first calculating the histogram of the input image. This input histogram and reference histograms are each then converted into CDFs. Each grid cell value in the input image is then mapped to the corresponding value in the reference CDF that has an equivalent (or as close as possible) cumulative probability value. Histogram matching provides the most flexible means of performing image contrast adjustment.

The reference histogram must be supplied to the function as a list of value-frequency pairs (histogram), where the first value of each pair is the x value (i.e. the value that will be assigned to the grid cells in the output image) and the second value is the frequency or probability. If the supplied pairs describe a cumulative distribution rather than a frequency distribution, set histo_is_cumulative=True. This type of histogram can be created using the wide range of distribution tools available in most spreadsheet programs (e.g. Excel or LibreOffice Calc) and then passed to the function as a Python list.

histogram_matching is related to the histogram_matching_two_images tool, which can be used when a reference CDF can be derived from a reference image. histogram_equalization and gaussian_contrast_stretch are similarly related tools frequently used for image contrast adjustment, where the reference CDFs are uniform and Gaussian (normal) respectively.

Notes:

  • The algorithm can introduce gaps in the histograms (steps in the CDF). This is to be expected because the histogram is being distorted. This is more prevalent for integer-level images.
  • Histogram matching is not appropriate for images containing categorical (class) data.
  • This tool is not intended for images containing RGB data. If this is the case, the colour channels should be split using the split_colour_composite tool.
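
Code Example

The following sketch supplies a small, hypothetical reference histogram as a list of value-frequency pairs; the file names and histogram values are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

image = wbe.read_raster('image.tif')

# Each inner list is a [value, frequency] pair
ref_histogram = [
    [0.0, 0.05],
    [64.0, 0.20],
    [128.0, 0.50],
    [192.0, 0.20],
    [255.0, 0.05]
]

matched = wbe.histogram_matching(image, ref_histogram, histo_is_cumulative=False)
wbe.write_raster(matched, 'image_matched.tif')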

See Also

histogram_matching_two_images, histogram_equalization, gaussian_contrast_stretch, split_colour_composite

Function Signature

def histogram_matching(self, image: Raster, histogram: List[List[float]], histo_is_cumulative: bool = False) -> Raster: ...

histogram_matching_two_images

This tool alters the cumulative distribution function (CDF) of a raster image to match, as closely as possible, the CDF of a reference image. Histogram matching works by first calculating the histograms of the input image (i.e. the image to be adjusted) and the reference image. These histograms are then converted into CDFs. Each grid cell value in the input image is then mapped to the corresponding value in the reference CDF that has an equivalent (or as close as possible) cumulative probability value. A common application of this is to match the images from two sensors with slightly different responses, or images from the same sensor where the sensor's response is known to change over time. The sizes of the two images (rows and columns) do not need to be the same, nor do they need to be geographically overlapping.

histogram_matching_two_images is related to the histogram_matching tool, which can be used when a reference CDF is used directly rather than deriving it from a reference image. histogram_equalization and gaussian_contrast_stretch are similarly related tools, where the reference CDFs are uniform and Gaussian (normal) respectively.

The algorithm may introduce gaps in the histograms (steps in the CDF). This is to be expected because the histograms are being distorted. This is more prevalent for integer-level images. Histogram matching is not appropriate for images containing categorical (class) data. It is also not intended for images containing RGB data, in which case the colour channels should be split using the split_colour_composite tool.
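
Code Example

A short example of adjusting one image to match the histogram of a reference image; the file names are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

image1 = wbe.read_raster('scene_to_adjust.tif')
image2 = wbe.read_raster('reference_scene.tif')

adjusted = wbe.histogram_matching_two_images(image1, image2)
wbe.write_raster(adjusted, 'scene_adjusted.tif')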

See Also

histogram_matching, histogram_equalization, gaussian_contrast_stretch, split_colour_composite

Function Signature

def histogram_matching_two_images(self, image1: Raster, image2: Raster) -> Raster: ...

hole_proportion

This tool calculates the proportion of the total area of a polygon's holes (i.e. islands) relative to the area of the polygon's hull. It can be a useful measure of shape complexity, or how discontinuous a patch is. The user must specify the input vector file; the output data will be contained within the vector's attribute table as a new field (HOLE_PROP).

See Also

ShapeComplexityIndex, elongation_ratio, perimeter_area_ratio

Function Signature

def hole_proportion(self, input: Vector) -> Vector: ...

horizon_angle

This tool calculates the horizon angle (Sx), i.e. the maximum slope along a specified azimuth (0-360 degrees) for each grid cell in an input digital elevation model (DEM). Horizon angle is sometimes referred to as the maximum upwind slope in wind exposure/sheltering studies. Positive values can be considered sheltered with respect to the azimuth and negative values are exposed. Thus, Sx is a measure of exposure to a wind from a specific direction. The algorithm works by tracing a ray from each grid cell in the direction of interest and evaluating the slope for each location in which the DEM grid is intersected by the ray. Linear interpolation is used to estimate the elevation of the surface where a ray does not intersect the DEM grid precisely at one of its nodes.

The user is able to constrain the maximum search distance (max_dist) for the ray tracing by entering a valid maximum search distance value (in the same units as the X-Y coordinates of the input raster DEM). If the maximum search distance is left blank, each ray will be traced to the edge of the DEM, which will add to the computational time.

Maximum upwind slope should not be calculated for very extensive areas over which the Earth's curvature must be taken into account. Also, this index does not take into account the deflection of wind by topography. However, averaging the horizon angle over a window of directions can yield a more robust measure of exposure, compensating for the deflection of wind from its regional average by the topography. For example, if you are interested in measuring the exposure of a landscape to a northerly wind, you could perform the following calculation:

Sx(N) = [Sx(345)+Sx(350)+Sx(355)+Sx(0)+Sx(5)+Sx(10)+Sx(15)] / 7.0

Ray-tracing is a highly computationally intensive task and therefore this tool may take considerable time to operate for larger sized DEMs. Maximum upwind slope is best displayed using a Grey scale palette that is inverted.

Horizon angle is best visualized using a white-to-black palette and rescaled from approximately -10 to 70.
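
Code Example

The example below averages the horizon angle over a small window of azimuths centred on north, following the Sx(N) calculation above; the file names and the maximum search distance are illustrative only.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('DEM.tif')

azimuths = [345.0, 350.0, 355.0, 0.0, 5.0, 10.0, 15.0]
sx_sum = wbe.horizon_angle(dem, azimuth=azimuths[0], max_dist=1000.0)
for az in azimuths[1:]:
    sx_sum = sx_sum + wbe.horizon_angle(dem, azimuth=az, max_dist=1000.0)

# Average exposure to a northerly wind
sx_north = sx_sum / 7.0
wbe.write_raster(sx_north, 'Sx_north.tif')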

See Also

time_in_daylight

Function Signature

def horizon_angle(self, dem: Raster, azimuth: float = 0.0, max_dist: float = float('inf')) -> Raster: ...

horton_ratios

This function can be used to calculate Horton's so-called laws of drainage network composition for an input stream network. The user must specify an input DEM (which has been suitably hydrologically pre-processed to remove any topographic depressions) and a raster stream network. The function will output a 4-element tuple containing the bifurcation ratio (Rb), the length ratio (Rl), the area ratio (Ra), and the slope ratio (Rs). These indices are related to drainage network geometry and are used in some geomorphological analyses. The calculation of the ratios is based on the method described by Knighton (1998) Fluvial Forms and Processes: A New Perspective.

Code Example

from whitebox_workflows import WbEnvironment

# Set up the WbW environment
wbe = WbEnvironment()
wbe.verbose = True
wbe.working_directory = '/path/to/data'

# Read the inputs
dem = wbe.read_raster('DEM.tif')
streams = wbe.read_raster('streams.tif')

# Calculate the Horton ratios
(bifurcation_ratio, length_ratio, area_ratio, slope_ratio) = wbe.horton_ratios(dem, streams)

# Outputs
print(f"Bifurcation ratio (Rb): {bifurcation_ratio:.3f}")
print(f"Length ratio (Rl): {length_ratio:.3f}")
print(f"Area ratio (Ra): {area_ratio:.3f}")
print(f"Slope ratio (Rs): {slope_ratio:.3f}")

See Also

horton_stream_order

Function Signature

def horton_ratios(self, dem: Raster, streams_raster: Raster) -> Tuple[float, float, float, float]: ...

horton_stream_order

This tool can be used to assign the Horton stream order to each link in a stream network. Stream ordering is often used in hydro-geomorphic and ecological studies to quantify the relative size and importance of a stream segment to the overall river system. There are several competing stream ordering schemes. According to this common stream numbering system, headwater stream links are assigned an order of one. Stream order only increases downstream when two links of equal order join, otherwise the downstream link is assigned the larger of the two link orders.

Strahler order and Horton order are similar approaches to assigning stream network hierarchy. Horton stream order essentially starts with the Strahler order scheme, but subsequently replaces each of the assigned stream order values along the main trunk of the network with the order value of the outlet. In the Strahler ordering scheme, by contrast, the main channel is not treated differently from the other tributaries.

The user must specify an input streams raster image (streams_raster) and a D8 pointer image (d8_pntr). Stream cells are designated in the streams image as all positive, nonzero values. Thus all non-stream or background grid cells are commonly assigned either zeros or NoData values. The pointer image is used to traverse the stream network and should only be created using the D8 algorithm (d8_pointer). Background cells will be assigned the NoData value in the output image, unless the user specifies zero_background=True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the user must set esri_pntr=True.
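
Code Example

A minimal sketch of a typical call, assuming a D8 pointer and streams raster have already been prepared; the file names and the wbe.write_raster call are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

d8_pntr = wbe.read_raster('d8_pointer.tif')
streams = wbe.read_raster('streams.tif')

# Assign Horton stream orders; background cells are set to zero
horton = wbe.horton_stream_order(d8_pntr, streams, esri_pntr=False, zero_background=True)
wbe.write_raster(horton, 'horton_order.tif')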

Reference

Horton, R. E. (1945). Erosional development of streams and their drainage basins; hydrophysical approach to quantitative morphology. Geological Society of America Bulletin, 56(3), 275-370.

See Also

hack_stream_order, shreve_stream_magnitude, strahler_stream_order, topological_stream_order

Function Signature

def horton_stream_order(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

hypsometric_analysis

This tool can be used to derive the hypsometric curve, or area-altitude curve, of one or more input digital elevation models (DEMs) ('inputs'). A hypsometric curve is a histogram or cumulative distribution function of elevations in a geographical area.

See Also

SlopeVsElevationPlot

Function Signature

def hypsometric_analysis(self, dem_rasters: List[Raster], output_html_file: str, watershed_rasters: List[Raster] = None) -> None: ...

hypsometrically_tinted_hillshade

This tool creates a hypsometrically tinted shaded relief (Swiss hillshading) image from an input digital elevation model (DEM). The tool combines a colourized version of the DEM with varying illumination provided by a hillshade image, to produce a composite relief model that can be used to visualize topography for more effective interpretation of landscapes. The output of the tool is a 24-bit red-green-blue (RGB) colour image.

The user must input a DEM. Other parameters include the illumination source altitude (solar_altitude; i.e. the elevation of the sun above the horizon, measured as an angle from 0 to 90 degrees), the hillshade weight (hillshade_weight; 0-1), image brightness (brightness; 0-1), and atmospheric effects (atmospheric_effects; 0-1). The hillshade weight can be used to increase or subdue the relative prevalence of the hillshading effect in the output image. The image brightness parameter is used to create an overall brighter or darker version of the terrain rendering; note, however, that very high values may over-saturate the well-illuminated portions of the terrain. The atmospheric effects parameter can be used to introduce a haze or atmosphere effect to the output image. It is intended to reproduce the effect of viewing mountain valley bottoms through a thicker and denser atmosphere. Values greater than zero will introduce a slightly blue tint, particularly at lower altitudes, blur the hillshade edges slightly, and create a random haze-like speckle in lower areas. The user must also specify the Z conversion factor (z_factor). The Z conversion factor is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. If the DEM is in the geographic coordinate system (latitude and longitude), the following equation is used:

z_factor = 1.0 / (111320.0 * cos(mid_lat))

where mid_lat is the latitude of the centre of the raster, in radians.
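
Code Example

A minimal sketch of a typical call; the file names, parameter values, and the wbe.write_raster call are placeholders for illustration.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('DEM.tif')

# Swiss-style hillshading with a modest atmospheric haze effect
tinted = wbe.hypsometrically_tinted_hillshade(
    dem,
    solar_altitude=45.0,
    hillshade_weight=0.5,
    brightness=0.5,
    atmospheric_effects=0.2,
    palette='atlas',
    z_factor=1.0
)
wbe.write_raster(tinted, 'tinted_hillshade.tif')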

See Also

hillshade, multidirectional_hillshade, aspect, slope

Function Signature

def hypsometrically_tinted_hillshade(self, dem: Raster, solar_altitude: float = 45.0, hillshade_weight: float = 0.5, brightness: float = 0.5, atmospheric_effects: float = 0.0, palette: str = "atlas", reverse_palette: bool = False, full_360_mode: bool = False, z_factor: float = 1.0) -> Raster: ...

idw_interpolation

This tool interpolates a raster surface from a set of vector points (points) using an inverse-distance weighting (IDW) scheme. IDW interpolation can be based on either a fixed number of nearest points or a fixed neighbourhood size. This tool is currently configured to perform the latter only, using a FixedRadiusSearch structure. Using a fixed number of neighbours would require use of a KD-tree structure. I've been testing one Rust KD-tree library, but its performance does not appear to be satisfactory compared to the FixedRadiusSearch, so I will need to explore other options here.

Another change that will need to be implemented is the use of a nodal function. The original Whitebox GAT tool allows for the use of either a constant or a quadratic nodal function; this tool currently only allows the former.

Function Signature

def idw_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, weight: float = 2.0, radius: float = 0.0, min_points: int = 0, cell_size: float = 0.0, base_raster: Raster = None) -> Raster: ...

ihs_to_rgb

This tool transforms three intensity, hue, and saturation (IHS; sometimes HSI or HIS) raster images into three equivalent multispectral images corresponding with the red, green, and blue channels of an RGB composite. Intensity refers to the brightness of a color, hue is related to the dominant wavelength of light and is perceived as color, and saturation is the purity of the color (Koutsias et al., 2000). There are numerous algorithms for performing a red-green-blue (RGB) to IHS transformation. This tool uses the transformation described by Haydn (1982). Note that, based on this transformation, the input IHS values must follow the ranges:

0 < I < 1

0 < H < 2PI

0 < S < 1

The output red, green, and blue images will have values ranging from 0 to 255. The user must specify the names of the intensity, hue, and saturation images (intensity, hue, saturation). These images will generally be created using the rgb_to_ihs tool. The user must also specify the names of the output red, green, and blue images (red, green, blue). Image enhancements, such as contrast stretching, are often performed on the individual IHS components, which are then inverse transformed back into RGB components using this tool. The output RGB components can then be used to create an improved colour composite image.
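
Code Example

A minimal sketch showing the inverse transformation; the input file names (e.g. components previously created with rgb_to_ihs) and the wbe.write_raster calls are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

intensity = wbe.read_raster('intensity.tif')
hue = wbe.read_raster('hue.tif')
saturation = wbe.read_raster('saturation.tif')

# Convert the IHS components back into RGB components
red, green, blue = wbe.ihs_to_rgb(intensity, hue, saturation)

wbe.write_raster(red, 'red.tif')
wbe.write_raster(green, 'green.tif')
wbe.write_raster(blue, 'blue.tif')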

References

Haydn, R., Dalke, G.W. and Henkel, J. (1982) Application of the IHS color transform to the processing of multisensor data and image enhancement. Proc. of the International Symposium on Remote Sensing of Arid and Semiarid Lands, Cairo, 599-616.

Koutsias, N., Karteris, M., and Chuvico, E. (2000). The use of intensity-hue-saturation transformation of Landsat-5 Thematic Mapper data for burned land mapping. Photogrammetric Engineering and Remote Sensing, 66(7), 829-840.

See Also

rgb_to_ihs, balance_contrast_enhancement, direct_decorrelation_stretch

Function Signature

def ihs_to_rgb(self, intensity: Raster, hue: Raster, saturation: Raster) -> Tuple[Raster, Raster, Raster]: ...

image_autocorrelation

Spatial autocorrelation describes the extent to which a variable is either dispersed or clustered through space. In the case of a raster image, spatial autocorrelation refers to the similarity in the values of nearby grid cells. This tool measures the spatial autocorrelation of a raster image using the global Moran's I statistic. Moran's I varies from -1 to 1, where I = -1 indicates a dispersed, checkerboard type pattern and I = 1 indicates a clustered (smooth) surface. I = 0 occurs for a random distribution of values. image_autocorrelation computes Moran's I for the first lag only, meaning that it only takes into account the variability among the immediate neighbors of each grid cell.

The user must specify the names of one or more input raster images. In addition, the user must specify the contiguity type (contiguity; Rook's, King's, or Bishop's), which describes which neighboring grid cells are examined for the analysis. The following figure describes the available cases:

Rook's contiguity

0 1 0
1 X 1
0 1 0

King's contiguity

1 1 1
1 X 1
1 1 1

Bishop's contiguity

1 0 1
0 X 0
1 0 1

The tool outputs an HTML report (output) which, for each input image (input), reports the Moran's I value and the variance, z-score, and p-value (significance) under normal and randomization sampling assumptions.

Use the image_correlation tool instead when there is need to determine the correlation among multiple raster inputs.

NoData values in the input image are ignored during the analysis.
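
Code Example

A minimal sketch of a typical call; the band file names and report name are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

band1 = wbe.read_raster('band1.tif')
band2 = wbe.read_raster('band2.tif')

# Compute global Moran's I for each input using Rook's contiguity
wbe.image_autocorrelation([band1, band2], 'autocorrelation_report.html', contiguity_type='rook')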

See Also

image_correlation, image_correlation_neighbourhood_analysis

Function Signature

def image_autocorrelation(self, rasters: List[Raster], output_html_file: str, contiguity_type: str = "bishop") -> None: ...

image_correlation

This tool can be used to estimate the Pearson product-moment correlation coefficient (r) between two or more input images (inputs). The r-value is a measure of the linear association in the variation of the input variables (images, in this case). The coefficient ranges from -1.0, indicating a perfect negative linear association, to 1.0, indicating a perfect positive linear association. An r-value of 0.0 indicates no correlation between the test variables.

Note that this index is a measure of the linear association; two variables may be strongly related by a non-linear association (e.g. a power function curve) which will lead to an apparent weak association based on the Pearson coefficient. In fact, non-linear associations are very common among spatial variables, e.g. terrain indices such as slope and contributing area. In such cases, it is advisable that the input images are transformed prior to the estimation of the Pearson coefficient, or that an alternative, non-parametric statistic be used, e.g. the Spearman rank correlation coefficient.

The user must specify the names of two or more input images (inputs). All input images must share the same grid, as the coefficient requires a comparison of a pair of images on a grid-cell-by-grid-cell basis. If more than two image names are selected, the correlation coefficient will be calculated for each pair of images and reported in the HTML output report (output) as a correlation matrix. Caution must be exercised when attempting to estimate the significance of a correlation coefficient derived from image data. The very high N-value (essentially the number of pixels in the image pair) means that even small correlation coefficients can be found to be statistically significant, despite being practically insignificant.

NoData values in either of the two input images are ignored during the calculation of the correlation between images.

See Also

image_correlation_neighbourhood_analysis, image_regression, image_autocorrelation

Function Signature

def image_correlation(self, rasters: List[Raster], output_html_file: str) -> None: ...

image_correlation_neighbourhood_analysis

This tool can be used to perform neighbourhood-based (i.e. using roving search windows applied to each grid cell) correlation analysis on two input rasters (raster1 and raster2). The tool outputs a correlation value raster and a significance (p-value) raster. Additionally, the user must specify the size of the search window (filter_size) and the correlation statistic (correlation_stat). Options for the correlation statistic include pearson, kendall, and spearman. Notice that Pearson's r is the most computationally efficient of the three correlation metrics but is unsuitable when the input distributions are non-linearly associated, in which case either Spearman's rho or Kendall's tau-b correlation is more suited. Both Spearman and Kendall correlations evaluate monotonic associations without assuming linearity in the relation. Kendall's tau-b is by far the most computationally expensive of the three statistics and may not be suitable for larger search windows.
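
Code Example

A minimal sketch of a local (windowed) correlation between two terrain rasters; the file names and parameter values are placeholders, and the wbe.write_raster calls assume the usual output helper.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

slope = wbe.read_raster('slope.tif')
wetness = wbe.read_raster('wetness_index.tif')

# Windowed Spearman correlation; returns (correlation, p-value) rasters
corr, p_value = wbe.image_correlation_neighbourhood_analysis(
    slope, wetness, filter_size=21, correlation_stat='spearman'
)
wbe.write_raster(corr, 'local_correlation.tif')
wbe.write_raster(p_value, 'local_significance.tif')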

See Also

image_correlation, image_regression

Function Signature

def image_correlation_neighbourhood_analysis(self, raster1: Raster, raster2: Raster, filter_size: int = 11, correlation_stat: str = "pearson") -> Tuple[Raster, Raster]: ...

image_regression

This tool performs a bivariate linear regression analysis on two input raster images. The first image (i1) is considered to be the independent variable while the second image (i2) is considered to be the dependent variable in the analysis. Both input images must share the same grid, as the coefficient requires a comparison of a pair of images on a grid-cell-by-grid-cell basis. The tool will output an HTML report (output) summarizing the regression model, an Analysis of Variance (ANOVA), and the significance of the regression coefficients. The regression residuals can optionally be output as a new raster image (out_residuals) and the user can also optionally specify to standardize the residuals (standardize).

Note that the analysis performs a linear regression; two variables may be strongly related by a non-linear association (e.g. a power function curve) which will lead to an apparently weak fitting regression model. In fact, non-linear relations are very common among spatial variables, e.g. terrain indices such as slope and contributing area. In such cases, it is advisable that the input images are transformed prior to the analysis.

NoData values in either of the two input images are ignored during the calculation of the correlation between images.

Code Example
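
A minimal sketch of a typical call; the file names are placeholders and the wbe.write_raster call assumes the usual output helper.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

independent = wbe.read_raster('slope.tif')
dependent = wbe.read_raster('soil_loss.tif')

# Regress the dependent raster on the independent raster and
# retrieve the (optionally standardized) residuals as a raster
residuals = wbe.image_regression(
    independent,
    dependent,
    'regression_report.html',
    standardize_residuals=True,
    output_scattergram=True,
    num_samples=1000
)
wbe.write_raster(residuals, 'residuals.tif')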

See Also

image_correlation, image_correlation_neighbourhood_analysis

Function Signature

def image_regression(self, independent_variable: Raster, dependent_variable: Raster, output_html_file: str, standardize_residuals: bool = False, output_scattergram: bool = False, num_samples: int = 1000) -> Raster: ...

image_stack_profile

This tool can be used to plot an image stack profile (i.e. a signature) for a set of points (points) and a multispectral image stack (inputs). The tool outputs an interactive SVG line graph embedded in an HTML document. If the input points vector contains multiple points, each input point will be associated with a single line in the output plot. The order of vertices in each signature line is determined by the order of images specified in the inputs parameter. At least two input images are required to run this operation. Note that this tool does not require multispectral images as inputs; other types of data may also be used as the image stack. Also note that the input images should be single-band, continuous greytone rasters. RGB colour images are not good candidates for this tool.

If you require the raster values to be saved in the vector points file's attribute table, or if you need the raster values to be output as text, you may use the extract_raster_values_at_points tool instead.

See Also

extract_raster_values_at_points

Function Signature

def image_stack_profile(self, images: List[Raster], points: Vector, output_html_file: str) -> None: ...

impoundment_size_index

This tool can be used to calculate the impoundment size index (ISI) from a digital elevation model (DEM). The ISI is a land-surface parameter related to the size of the impoundment that would result from inserting a dam of a user-specified maximum length (max_dam_length) into each DEM grid cell. The user indicates which of the possible outputs to generate, which include the mean flooded depth (output_mean), the maximum flooded depth (output_max), the flooded volume (output_volume), the flooded area (output_area), and the dam height (output_height).

Please note that this tool performs an extremely complex and computationally intensive flow-accumulation operation. As such, it may take a substantial amount of processing time and may encounter issues (including memory issues) when applied to very large DEMs. It is not necessary to pre-process the input DEM (dem) to remove topographic depressions and flat areas. The internal flow-accumulation operation will not be confounded by the presence of these features.
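
Code Example

A minimal sketch of a typical call; the file names, dam length, and the wbe.write_raster calls are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('DEM.tif')

# Request only the depth and dam-height outputs; the unrequested
# tuple members (volume, area) will be returned as None
mean_depth, max_depth, volume, area, dam_height = wbe.impoundment_size_index(
    dem,
    max_dam_length=110.0,
    output_mean=True,
    output_max=True,
    output_height=True
)
wbe.write_raster(mean_depth, 'mean_flood_depth.tif')
wbe.write_raster(dam_height, 'dam_height.tif')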

Reference

Lindsay, JB (2015) Modelling the spatial pattern of potential impoundment size from DEMs. Online resource: Whitebox Blog

See Also

insert_dams, stochastic_depression_analysis

Function Signature

def impoundment_size_index(self, dem: Raster, max_dam_length: float, output_mean: bool = False, output_max: bool = False, output_volume: bool = False, output_area: bool = False, output_height: bool = False) -> Tuple[Union[Raster, None], Union[Raster, None], Union[Raster, None], Union[Raster, None], Union[Raster, None]]: ...

individual_tree_detection

This tool can be used to identify points in a LiDAR point cloud that are associated with the tops of individual trees. The tool takes a LiDAR point cloud as an input (input_lidar) and it is best if the input file has been normalized using the lidar_tophat_transform function, such that points record height above the ground surface. Note that the input_lidar parameter is optional and if left unspecified the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This 'batch mode' operation is common among many of the LiDAR processing tools. Output vectors are saved to disc automatically for each processed LiDAR file when operating in batch mode and the function returns None. When an individual input_lidar Lidar object is specified, the tool will return a Vector object, containing the tree top points.

The tool will evaluate the points within a local neighbourhood around each point in the input point cloud and determine if it is the highest point within the neighbourhood. If a point is the highest local point, it will be entered into the output vector file. The neighbourhood size can vary, with higher canopy positions generally associated with larger neighbourhoods. The user specifies the min_search_radius and min_height parameters, which default to 1 m and 0 m respectively. If the min_height parameter is greater than zero, all points that are less than this value above the ground (assuming the input point cloud measures this height parameter) are ignored, which can be a useful mechanism for removing shorter trees and other vegetation from the analysis. If the user specifies the max_search_radius and max_height parameters, the search radius will be determined by linear interpolation based on point height and the min/max search radius and height parameter values. Points that are above the max_height parameter will be processed with search neighbourhoods sized max_search_radius. If the max radius and height parameters are unspecified, they are set to the same values as the minimum radius and height parameters, i.e., the neighbourhood size does not increase with canopy height.

If the point cloud contains point classifications, it may be useful to exclude all non-vegetation points. To do this simply set the only_use_veg parameter to True. This parameter should only be set to True when you know that the input file contains point classifications, otherwise the tool may generate an empty output vector file.
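
Code Example

A minimal sketch of processing a single height-normalized tile; the file names and parameter values are placeholders, and the wbe.read_lidar and wbe.write_vector calls assume the usual input/output helpers.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

# A point cloud normalized to height above ground, e.g. with lidar_tophat_transform
lidar = wbe.read_lidar('normalized_tile.laz')

tree_tops = wbe.individual_tree_detection(
    lidar,
    min_search_radius=1.5,
    min_height=2.0,
    max_search_radius=5.0,
    max_height=20.0,
    only_use_veg=False
)
wbe.write_vector(tree_tops, 'tree_tops.shp')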

See Also

lidar_tophat_transform

Function Signature

def individual_tree_detection(self, input_lidar: Lidar, min_search_radius: float = 1.0, min_height: float = 0.0, max_search_radius: Optional[float] = None, max_height: Optional[float] = None, only_use_veg = False) -> Optional[Vector]: ...

insert_dams

This tool can be used to insert dams at one or more user-specified points (dam_points), and of a maximum length (dam_length), within an input digital elevation model (DEM) (dem). This tool can be thought of as providing the impoundment feature that is calculated internally during a run of the impoundment size index (ISI) tool, for a set of points of interest.

Reference

Lindsay, JB (2015) Modelling the spatial pattern of potential impoundment size from DEMs. Online resource: Whitebox Blog

See Also

impoundment_size_index, stochastic_depression_analysis

Function Signature

def insert_dams(self, dem: Raster, dam_points: Vector, dam_length: float) -> Raster: ...

integral_image_transform

This tool transforms an input raster image into an integral image, or summed area table. Integral images are the two-dimensional equivalent to a cumulative distribution function. Each pixel contains the sum of all pixels contained within the enclosing rectangle above and to the left of a pixel. Images with a very large number of grid cells will likely experience numerical overflow errors when converted to an integral image. Integral images are used in a wide variety of computer vision and digital image processing applications, including texture mapping. They allow for the efficient calculation of very large filters and are the basis of several of WhiteboxTools's image filters.
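
Code Example

The summed-area recurrence underlying this transform is simple. The following illustrative sketch (plain Python, not the tool's internal implementation) shows how each cell is accumulated from its neighbours above and to the left:

def integral_image(values):
    # values: a 2D list of numbers (rows x columns)
    rows, cols = len(values), len(values[0])
    integral = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            above = integral[r - 1][c] if r > 0 else 0.0
            left = integral[r][c - 1] if c > 0 else 0.0
            above_left = integral[r - 1][c - 1] if r > 0 and c > 0 else 0.0
            # I(r, c) = x(r, c) + I(r-1, c) + I(r, c-1) - I(r-1, c-1)
            integral[r][c] = values[r][c] + above + left - above_left
    return integral

# The sum of any rectangular window can then be recovered from four
# look-ups in the integral image, which is what makes large filters cheap.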

Reference

Crow, F. C. (1984, January). Summed-area tables for texture mapping. In ACM SIGGRAPH computer graphics (Vol. 18, No. 3, pp. 207-212). ACM.

Function Signature

def integral_image_transform(self, raster: Raster) -> Raster: ...

intersect

The result of the intersect vector overlay operation includes all the feature parts that occur in both input layers, excluding all other parts. It is analogous to the AND logical operator and multiplication in arithmetic. This tool is one of the common vector overlay operations in GIS. The user must specify the names of the input and overlay vector files as well as the output vector file name. The tool operates on vector points, lines, or polygons, but both the input and overlay files must contain the same VectorGeometryType.

The intersect tool is similar to the clip tool. The difference is that the overlay vector layer in a clip operation must always be polygons, regardless of whether the input layer consists of points or polylines.

The attributes of the two input vectors will be merged in the output attribute table. Note that duplicate fields should not exist between the input layers, as they will share a single attribute in the output (assigned from the first layer). Multipoint VectorGeometryTypes will simply contain a single output feature identifier (FID) attribute. Also, note that depending on the VectorGeometryType (polylines and polygons), Measure and Z ShapeDimension data will not be transferred to the output geometries. If the input attribute table contains fields that measure the geometric properties of their associated features (e.g. length or area), these fields will not be updated to reflect changes in geometry shape and size resulting from the overlay operation.
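
Code Example

A minimal sketch intersecting two polygon layers; the file names are placeholders, and the wbe.read_vector and wbe.write_vector calls assume the usual input/output helpers.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

parcels = wbe.read_vector('parcels.shp')
flood_zone = wbe.read_vector('flood_zone.shp')

# Keep only the parcel parts that also fall within the flood zone
overlap = wbe.intersect(parcels, flood_zone)
wbe.write_vector(overlap, 'parcels_in_flood_zone.shp')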

See Also

difference, union, symmetrical_difference, clip, erase

Function Signature

def intersect(self, input: Vector, overlay: Vector, snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

isobasins

This tool can be used to divide a landscape into a group of nearly equal-sized watersheds, known as isobasins. The user must specify an input digital elevation model (dem) and the isobasin target area (target_size), specified in units of grid cells. The DEM must have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using either the breach_depressions_least_cost or fill_depressions tool. Several temporary rasters are created and stored in memory during the execution of this tool.

The tool can optionally (connections) output a CSV table that contains the upstream/downstream connections among isobasins. That is, this table will identify the downstream basin of each isobasin, or will list N/A in the event that there is no downstream basin, i.e. if it drains to an edge. Additionally, the CSV file will contain information about the number of grid cells in each isobasin and the isobasin outlet's row and column number and flow direction. The output CSV file name is specified using the csv_file parameter.

See Also

watershed, basins, breach_depressions_least_cost, fill_depressions

Function Signature

def isobasins(self, dem: Raster, target_size: float, connections: bool = False, csv_file: str = "" ) -> Raster: ...

jenson_snap_pour_points

This tool can be used to move the locations of an input vector of pour points (pour_pts), i.e. outlet points used in a watershedding operation, so that they coincide with the nearest cell of an input streams raster (streams) lying within a user-specified maximum snap distance (snap_dist, expressed in the x-y units of the input data). Digitized or field-located outlet points rarely fall exactly on the raster stream network, and an outlet that is offset from the stream cells will yield a watershed that fails to capture the intended upstream drainage area. Snapping the pour points onto the stream network before running a watershedding operation avoids this problem. The tool returns a new vector containing the snapped pour points.

See Also

watershed, snap_pour_points

Function Signature

def jenson_snap_pour_points(self, pour_pts: Vector, streams: Raster, snap_dist: float = 0.0) -> Vector: ...

join_tables

This tool can be used to join (i.e. merge) a vector's attribute table with a second table. The user must specify the name of the vector file (and associated attribute file) as well as the primary key within the table. The primary key (primary_key_field) is the field within the table being appended to that serves as the identifier. Additionally, the user must specify the name of a second vector from which the data appended into the first table will be derived. The foreign key (foreign_key_field), the identifying field within the second table that corresponds with the data contained within the primary key in the first table, must also be specified. Both the primary and foreign keys should be either strings (text) or integer values. Fields containing decimal values are not good candidates for keys. Lastly, the name of the field within the second file to include in the merge operation can also be input (import_field). If the import_field is not specified, all fields in the attribute table of the second file that are not the foreign key nor FID will be imported to the first table.

Merging works for one-to-one and many-to-one database relations. A one-to-one relation exists when each record in the attribute table corresponds to one record in the second table and each primary key is unique. Since each record in the attribute table is associated with a geospatial feature in the vector, an example of a one-to-one relation may be where the second file contains AREA and PERIMETER fields for each polygon feature in the vector. This is the most basic type of relation. A many-to-one relation would exist when each record in the first attribute table corresponds to one record in the second file and the primary key is NOT unique. Consider as an example a vector and attribute table associated with a world map of countries. Each country has one or more polygon features in the shapefile, e.g. Canada has its mainland and many hundreds of large islands. You may want to append a table containing data about the population and area of each country. In this case, the COUNTRY columns in the attribute table and the second file serve as the primary and foreign keys respectively. While there may be many duplicate primary keys (all of those Canadian polygons), each will correspond to only one foreign key containing the population and area data. This is a many-to-one relation. The join_tables tool does not support one-to-many nor many-to-many relations.

See Also

merge_table_with_csv, reinitialize_attribute_table, export_table_to_csv

Function Signature

def join_tables(self, primary_vector: Vector, primary_key_field: str, foreign_vector: Vector, foreign_key_field: str, import_field: str = "") -> None: ...

k_means_clustering

This tool can be used to perform a k-means clustering operation on two or more input images (input_rasters), typically several bands of multi-spectral satellite imagery. The tool creates two outputs: the classified image (the returned raster) and a classification HTML report (output_html_file). The user must specify the number of classes (num_clusters), which should be known a priori, and the strategy for initializing class clusters (initialization_mode). The initialization strategies include "diagonal" (clusters are initially located randomly along the multi-dimensional diagonal of spectral space) and "random" (clusters are initially located randomly throughout spectral space). The algorithm will continue updating cluster centre locations with each iteration of the process until either the user-specified maximum number of iterations (max_iterations) is reached, or until a stability criterion (percent_changed_threshold) is achieved. The stability criterion is the percent of the total number of pixels in the image whose class values change between consecutive iterations. Lastly, the user must specify the minimum allowable number of pixels in a cluster (min_class_size).

Note, each of the input images must have the same number of rows and columns and the same spatial extent because the analysis is performed on a pixel-by-pixel basis. NoData values in any of the input images will result in the removal of the corresponding pixel from the analysis.
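
Code Example

A minimal sketch clustering several co-registered bands; the file names and parameter values are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

# Several co-registered multispectral bands
bands = [wbe.read_raster(f'band{i}.tif') for i in range(1, 5)]

classified = wbe.k_means_clustering(
    bands,
    output_html_file='kmeans_report.html',
    num_clusters=8,
    max_iterations=25,
    percent_changed_threshold=1.0,
    initialization_mode='dia',
    min_class_size=10
)
wbe.write_raster(classified, 'kmeans_classes.tif')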

See Also

modified_k_means_clustering

Function Signature

def k_means_clustering(self, input_rasters: List[Raster], output_html_file: str = "", num_clusters: int = 5, max_iterations: int = 10, percent_changed_threshold: float = 2.0, initialization_mode: str = "dia", min_class_size: int = 10) -> Raster: ...

k_nearest_mean_filter

This tool performs a k-nearest mean filter on a raster image. A mean filter can be used to emphasize the longer-range variability in an image, effectively acting to smooth or blur the image. This can be useful for reducing the noise in an image. The algorithm operates by calculating the average of a specified number (k) values in a moving window centred on each grid cell. The k values used in the average are those cells in the window with the nearest intensity values to that of the centre cell. As such, this is a type of edge-preserving smoothing filter. The bilateral_filter and edge_preserving_mean_filter are examples of more sophisticated edge-preserving smoothing filters.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

NoData values in the input image are ignored during filtering.

See Also

mean_filter, bilateral_filter, edge_preserving_mean_filter

Function Signature

def k_nearest_mean_filter(self, raster: Raster, filter_size_x: int = 3, filter_size_y: int = 3, k: int = 5) -> Raster: ...

kappa_index

This tool calculates the Kappa index of agreement (KIA), or Cohen's Kappa, for two categorical input raster images (input1 and input2). The KIA is a measure of inter-rater reliability (i.e. classification accuracy) and is widely applied in many fields, notably remote sensing. For example, the KIA is often used as a means of assessing the accuracy of an image classification analysis. The KIA can be interpreted as the percentage improvement that the underlying classification has over and above a random classifier (i.e. random assignment to categories). The user must specify the output HTML file (output). The input images must be of a categorical data type, i.e. contain classes. As a measure of classification accuracy, the KIA is more robust than the overall percent agreement because it takes into account the agreement occurring by chance. A KIA of 0 would indicate that the classifier is no better than random class assignment. In addition to the KIA, this tool will also output the producer's and user's accuracy, the overall accuracy, and the error matrix.

See Also

cross_tabulation

Function Signature

def kappa_index(self, class_raster: Raster, reference_raster: Raster, output_html_file: str = "") -> None: ...

ks_normality_test

This tool will perform a Kolmogorov-Smirnov (K-S) test for normality to evaluate whether the frequency distribution of values within a raster image is drawn from a Gaussian (normal) distribution. The user must specify the name of the raster image. The test can optionally be performed on the entire image or on a random sub-sample of pixel values of a user-specified size. In evaluating the significance of the test, it is important to keep in mind that given a sufficiently large sample, extremely small and non-notable differences can be found to be statistically significant. Furthermore, statistical significance says nothing about the practical significance of a difference.

See Also

two_sample_ks_test

Function Signature

def ks_normality_test(self, raster: Raster, output_html_file: str, num_samples: int) -> None: ...

laplacian_filter

This tool can be used to perform a Laplacian filter on a raster image. A Laplacian filter can be used to emphasize the edges in an image. As such, this filter type is commonly used in edge-detection applications. The algorithm operates by convolving a kernel of weights with each grid cell and its neighbours in an image. Four 3x3 sized filters and one 5x5 filter are available for selection. The weights of the kernels are as follows:

3x3(1)

 0 -1  0
-1  4 -1
 0 -1  0

3x3(2)

 0 -1  0
-1  5 -1
 0 -1  0

3x3(3)

-1 -1 -1
-1  8 -1
-1 -1 -1

3x3(4)

 1 -2  1
-2  4 -2
 1 -2  1

5x5(1)

 0  0 -1  0  0
 0 -1 -2 -1  0
-1 -2 17 -2 -1
 0 -1 -2 -1  0
 0  0 -1  0  0

5x5(2)

 0  0 -1  0  0
 0 -1 -2 -1  0
-1 -2 16 -2 -1
 0 -1 -2 -1  0
 0  0 -1  0  0
The user must specify the variant, one of '3x3(1)', '3x3(2)', '3x3(3)', '3x3(4)', '5x5(1)', and '5x5(2)'. The user may also optionally clip the output image distribution tails by a specified amount (e.g. 1%).
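
Code Example

A minimal sketch applying the 5x5(1) kernel; the file names and clip amount are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

image = wbe.read_raster('band1.tif')

# Emphasize edges with the 5x5(1) kernel, clipping 1% from the distribution tails
edges = wbe.laplacian_filter(image, variant='5x5(1)', clip_amount=1.0)
wbe.write_raster(edges, 'laplacian_edges.tif')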

See Also

prewitt_filter, sobel_filter

Function Signature

def laplacian_filter(self, raster: Raster, variant: str = "3x3(1)", clip_amount: float = 0.0) -> Raster: ...

laplacian_of_gaussians_filter

The Laplacian-of-Gaussian (LoG) is a spatial filter used for edge enhancement and is closely related to the difference-of-Gaussians filter (DiffOfGaussianFilter). The formulation of the LoG filter algorithm is based on the equation provided in the Hypermedia Image Processing Reference (HIPR) 2. The LoG operator calculates the second spatial derivative of an image. In areas where image intensity is constant, the LoG response will be zero. Near areas of change in intensity the LoG will be positive on the darker side, and negative on the lighter side. This means that at a sharp edge, or boundary, between two regions of uniform but different intensities, the LoG response will be:

  • zero at a long distance from the edge,
  • positive just to one side of the edge,
  • negative just to the other side of the edge,
  • zero at some point in between, on the edge itself.

The user may optionally choose to reflect the data along image edges. NoData values in the input image are similarly valued in the output. The output raster is of the float data type and continuous data scale.

Reference

Fisher, R. 2004. Hypertext Image Processing Resources 2 (HIPR2). Available online: http://homepages.inf.ed.ac.uk/rbf/HIPR2/roberts.htm

See Also

DiffOfGaussianFilter

Function Signature

def laplacian_of_gaussians_filter(self, raster: Raster, sigma: float = 0.75) -> Raster: ...

las_to_ascii

This tool can be used to convert one or more LAS files, containing LiDAR data, into ASCII files. The user must specify the name(s) of the input LAS file(s) (inputs). Each input file will have a correspondingly named output file with a .csv file extension. CSV files are comma separated value files and contain tabular data with each column corresponding to a field in the table and each row a point value. Fields are separated by commas in the ASCII formatted file. The output point data, each on a separate line, will take the format:

X,Y,Z,INTENSITY,CLASS,RETURN,NUM_RETURN,SCAN_ANGLE

If the LAS file has a point format that contains RGB data, the final three columns will contain the RED, GREEN, and BLUE values respectively. Use the ascii_to_las tool to convert a text file containing LiDAR point data into a LAS file.

See Also

ascii_to_las

Function Signature

def las_to_ascii(self, input_lidar: Optional[Lidar]) -> None: ...

las_to_shapefile

This tool converts one or more LAS files into a POINT vector. When the input parameter is not specified, the tool converts all LAS files contained within the working directory. The attribute table of the output Shapefile will contain fields for the z-value, intensity, point class, return number, and number of returns.

This tool can be used in place of the LasToMultipointShapefile tool when the number of points is relatively low and when the desire is to represent more than simply the x,y,z position of points. Notice, however, that because each point in the input LAS file will be represented as a separate record in the output Shapefile, the output file will be many times larger than the equivalent output of the LasToMultipointShapefile tool. There is also a practical limit on the total number of records that can be held in a single Shapefile, and large LAS files approach this limit. In these cases, the LasToMultipointShapefile tool should be preferred instead.

See Also

LasToMultipointShapefile

Function Signature

def las_to_shapefile(self, input_lidar: Optional[Lidar], output_multipoint: bool = False) -> Vector: ...

layer_footprint_raster

This tool creates a vector polygon footprint of the area covered by an input raster grid (input). It will create a vector rectangle corresponding to the bounding box of the input raster.

If the input data are irregularly shaped (i.e. there is a boundary of NoData cells), the resulting vector will still correspond to the full grid extent, ignoring the irregular boundary. If this is not the desired effect, you may consider using the minimum_bounding_envelope tool instead.

See Also

layer_footprint_vector, minimum_bounding_envelope

Function Signature

def layer_footprint_raster(self, input: Raster) -> Vector: ...

layer_footprint_vector

This tool creates a vector polygon footprint of the area covered by a vector layer. It will create a vector rectangle corresponding to the bounding box. The user must specify the name of the input file (input).

If the input data are irregularly shaped, the resulting vector will still correspond to the full bounding-box extent, ignoring the irregular boundary. If this is not the desired effect, you should use the minimum_bounding_envelope tool instead.

See Also

layer_footprint_raster, minimum_bounding_envelope

Function Signature

def layer_footprint_vector(self, input: Vector) -> Vector: ...

lee_filter

The Lee Sigma filter is a low-pass filter used to smooth the input image (raster). The user must specify the dimensions of the filter (filter_size_x and filter_size_y) as well as the sigma (sigma) and M (m_value) parameters.

Reference

Lee, J. S. (1983). Digital image smoothing and the sigma filter. Computer vision, graphics, and image processing, 24(2), 255-269.

See Also

mean_filter, gaussian_filter

Function Signature

def lee_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sigma: float = 10.0, m_value: float = 5.0) -> Raster: ...

length_of_upstream_channels

This tool calculates, for each stream grid cell in an input streams raster (streams_raster), the total length of channels upstream. The user must specify a raster containing streams data (streams_raster), where stream grid cells are denoted by all positive non-zero values, and a D8 flow pointer (i.e. flow direction) raster (d8_pointer). The pointer image is used to traverse the stream network and must only be created using the D8 algorithm. All non-stream or background grid cells are commonly assigned either zeros or NoData values. Background cells will be assigned the NoData value in the output image, unless the user specifies zero_background=True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, set esri_pntr=True.

See Also

farthest_channel_head, find_main_stem

Function Signature

def length_of_upstream_channels(self, d8_pointer: Raster, streams_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

license_type

Returns the license type, WbW or WbW-Pro

lidar_block_maximum

This function superimposes a raster grid overtop of an input LiDAR point cloud (input_lidar) of a user-specified resolution (cell_size) and identifies the highest point in each block. The output raster therefore approximates a digital surface model (DSM), representing the elevation of the ground surface in open areas and the elevations of off-terrain objects (OTOs), such as buildings and vegetation. While this function will be faster, it is recommended that you use lidar_digital_surface_model instead if you are trying to create a DSM, as that method will generally produce better results.

Like many of the LiDAR functions, the input LiDAR point cloud (input_lidar) is optional. If an input LiDAR file is not specified, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be very useful when you need to process a large number of LiDAR files contained within a directory. This batch processing mode enables the function to run in a more optimized parallel manner. When run in this batch mode, no output LiDAR object will be created. Instead the function will create an output file (saved to disc) with the same name as each input LiDAR file, but with the .tif extension. This can provide a very efficient means for processing extremely large LiDAR data sets.

See Also

lidar_block_minimum, lidar_digital_surface_model, filterfilter_lidar_by_percentile_lidar

Function Signature

def lidar_block_maximum(self, input_lidar: Optional[Lidar], cell_size: float = 1.0) -> Raster: ...

lidar_block_minimum

This function superimposes a raster grid overtop of an input LiDAR point cloud (input_lidar) of a user-specified resolution (cell_size) and identifies the lowest point in each block. The output raster therefore approximates a bare-earth digital elevation model (DEM), or a digital terrain model (DTM), although it is likely to contain several off-terrain objects (OTOs), such as buildings. Under heavier forest cover, the minimum surface will also very likely contain some blocks that are not coincident with the ground surface, but rather represent the elevation of the lower portions of tree trunks and low vegetation.

Like many of the LiDAR functions, the input LiDAR point cloud (input_lidar) is optional. If an input LiDAR file is not specified, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be very useful when you need to process a large number of LiDAR files contained within a directory. This batch processing mode enables the function to run in a more optimized parallel manner. When run in this batch mode, no output LiDAR object will be created. Instead the function will create an output file (saved to disc) with the same name as each input LiDAR file, but with the .tif extension. This can provide a very efficient means for processing extremely large LiDAR data sets.

See Also

lidar_block_maximum, filterfilter_lidar_by_percentile_lidar

Function Signature

def lidar_block_minimum(self, input_lidar: Optional[Lidar], cell_size: float = 1.0) -> Raster: ...

lidar_classify_subset

This tool classifies points within a user-specified LiDAR point cloud (base) that correspond with points in a subset cloud (subset). The subset point cloud may have been derived by filtering the original point cloud. The user must specify the names of the two input LAS files (i.e. the full and subset clouds) and the class value (subset_class) to assign the matching points. This class value will be assigned to points in the base cloud, overwriting their input class values in the output LAS file (output). Class values should be numerical (integer valued) and should follow the LAS specifications below:

Classification Value    Meaning
0     Created, never classified
1     Unclassified
2     Ground
3     Low Vegetation
4     Medium Vegetation
5     High Vegetation
6     Building
7     Low Point (noise)
8     Reserved
9     Water
10    Rail
11    Road Surface
12    Reserved
13    Wire – Guard (Shield)
14    Wire – Conductor (Phase)
15    Transmission Tower
16    Wire-structure Connector (e.g. Insulator)
17    Bridge Deck
18    High noise

The user may optionally specify a class value to be assigned to non-subset (i.e. non-matching) points (nonsubset_class) in the base file. If this parameter is not specified, output non-subset points will have the same class value as the base file.

Function Signature

def lidar_classify_subset(self, base_lidar: Lidar, subset_lidar: Lidar, subset_class_value: int, nonsubset_class_value: int) -> Lidar: ...

lidar_colourize

This tool can be used to add red-green-blue (RGB) colour values to the points contained within an input LAS file (in_lidar), based on the pixel values of an overlapping input colour image (in_image). Ideally, the image has been acquired at the same time as the LiDAR point cloud. If this is not the case, one may expect that transient objects (e.g. cars) in both input data sets will be incorrectly coloured. The input image should overlap in extent with the LiDAR data set and the two data sets should share the same projection. You may use the lidar_tile_footprint tool to determine the spatial extent of the LAS file.

See Also

colourize_based_on_class, colourize_based_on_point_returns, lidar_tile_footprint

Function Signature

def lidar_colourize(self, in_lidar: Lidar, in_image: Raster) -> Lidar: ...

lidar_construct_vector_tin

This tool creates a vector triangular irregular network (TIN) for a set of LiDAR points (input_lidar) using a 2D Delaunay triangulation algorithm. LiDAR points may be excluded from the triangulation operation based on a number of criteria, including the point return number (returns_included), the point classification value (excluded_classes), and a minimum (min_elev) or maximum (max_elev) elevation.

For vector points, use the construct_vector_tin tool instead.

See Also

construct_vector_tin

Function Signature

def lidar_construct_vector_tin(self, input_lidar: Optional[Lidar], returns_included: str = "all", excluded_classes: List[int] = None, min_elev: float = float('-inf'), max_elev: float = float('inf'), max_triangle_edge_length: float = float('inf')) -> Vector: ...

lidar_digital_surface_model

This tool creates a digital surface model (DSM) from a LiDAR point cloud. A DSM reflects the elevation of the tops of all off-terrain objects (i.e. non-ground features) contained within the data set. For example, a DSM will model the canopy top as well as building roofs. This is in stark contrast to a bare-earth digital elevation model (DEM), which models the ground surface without off-terrain objects present. Bare-earth DEMs can be derived from LiDAR data by interpolating last-return points using one of the other LiDAR interpolators (e.g. lidar_tin_gridding). The algorithm used for interpolation in this tool is based on gridding a triangulation (TIN) fit to top-level points in the input LiDAR point cloud. All points in the input LiDAR data set that are below other neighbouring points, within a specified search radius (radius), and that have a large inter-point slope, are filtered out. Thus, this tool will remove the ground surface beneath as well as any intermediate points within a forest canopy, leaving only the canopy top surface to be interpolated. Similarly, building wall points and any ground points beneath roof overhangs will also be removed prior to interpolation. Note that because the ground points beneath overhead wires and utility lines are filtered out by this operation, these features tend to appear as 'walls' in the output DSM. If these points are classified in the input LiDAR file, you may wish to filter them out before using this tool (filter_lidar_classes).

Compared with a DSM produced by interpolating first-return points only using the lidar_tin_gridding tool, a DSM created with lidar_digital_surface_model has far less variability in areas of tree cover, more effectively capturing the canopy top. Building rooftops also tend to be more extensive and straighter, because this method eliminates ground returns beneath roof overhangs before the triangulation operation. Note that time_in_daylight is a more effective way of hillshading DSMs than the traditional hillshade method.

The user must specify the grid resolution of the output raster (resolution), and optionally, the name of the input LiDAR file (input) and output raster (output). Note that if an input LiDAR file (input) is not specified by the user, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be very useful when you need to interpolate a DSM for a large number of LiDAR files. Not only does this batch processing mode enable the tool to run in a more optimized parallel manner, but it will also allow the tool to include a small buffer of points extending into adjacent tiles when interpolating an individual file. This can significantly reduce edge-effects when the output tiles are later mosaicked together. When run in this batch mode, the output file (output) also need not be specified; the tool will instead create an output file with the same name as each input LiDAR file, but with the .tif extension. This can provide a very efficient means for processing extremely large LiDAR data sets.

Users may also exclude points from the interpolation if they fall below or above the minimum (minz) or maximum (maxz) thresholds respectively. This can be a useful means of excluding anomalously high or low points. Note that points that are classified as low points (LAS class 7) or high noise (LAS class 18) are automatically excluded from the interpolation operation.

Triangulation will generally completely fill the convex hull containing the input point data. This can sometimes result in very long and narrow triangles at the edges of the data or connecting vertices on either side of void areas. In LiDAR data, these void areas are often associated with larger waterbodies, and triangulation can result in very unnatural interpolated patterns within these areas. To avoid this problem, the user may specify the maximum allowable triangle edge length (max_triangle_edge_length), and all grid cells within triangular facets with edges larger than this threshold are simply assigned the NoData value in the output DSM. These NoData areas can later be better dealt with using the fill_missing_data tool after interpolation.
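
Code Example

A minimal sketch of both the single-tile and batch modes of operation; the file names, cell size, and the wbe.read_lidar and wbe.write_raster calls are placeholders.

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'

# Single-tile mode: interpolate a 0.5 m DSM from one file
lidar = wbe.read_lidar('tile_001.laz')
dsm = wbe.lidar_digital_surface_model(
    lidar,
    cell_size=0.5,
    search_radius=0.5,
    max_triangle_edge_length=50.0
)
wbe.write_raster(dsm, 'tile_001_dsm.tif')

# Batch mode: pass None to process every LAS/LAZ/ZLIDAR file in the
# working directory, writing a .tif alongside each input file
# wbe.lidar_digital_surface_model(None, cell_size=0.5)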

See Also

lidar_tin_gridding, filter_lidar_classes, fill_missing_data, time_in_daylight

Function Signature

def lidar_digital_surface_model(self, input_lidar: Optional[Lidar], cell_size: float = 1.0, search_radius: float = 0.5, min_elev: float = float('-inf'), max_elev: float = float('inf'), max_triangle_edge_length: float = float('inf')) -> Raster: ...

lidar_elevation_slice

This tool can be used to either extract or classify the elevation values (z) of LiDAR points within a specified elevation range (slice). In addition to the input LiDAR data (input), the user must specify the lower (minz) and upper (maxz) bounds of the elevation range. By default, the tool will only output points within the elevation slice, filtering out all points lying outside of this range. If the classify parameter is set to True, the tool will instead assign a class value (in_class_value) to the classification bit of points within the slice and another class value (out_class_value) to those points falling outside the range.

See Also

lidar_remove_outliers, lidar_classify_subset

Function Signature

def lidar_elevation_slice(self, input: Lidar, minz: float = float('-inf'), maxz: float = float('inf'), classify: bool = False, in_class_value: int = 2, out_class_value: int = 1) -> Lidar: ...

lidar_ground_point_filter

This tool can be used to perform a slope-based classification, or filtering (i.e. removal), of non-ground points within a LiDAR point cloud. Inter-point slopes are compared between pairs of points contained within local neighbourhoods of size search_radius. Neighbourhoods with fewer than the user-specified minimum number of points (min_neighbours) are extended until the minimum point number is equaled or exceeded. Points that are above neighbouring points by the minimum height threshold (height_threshold) and have an inter-point slope greater than the user-specified threshold (slope_threshold) are considered non-ground points and are either excluded from the output point cloud or, optionally (classify), assigned the unclassified (value 1) class value.

Slope-based ground-point classification methods suffer from the challenge of using a constant slope threshold under varying terrain slopes. Some researchers have developed schemes for varying the slope threshold based on underlying terrain slopes. lidar_ground_point_filter instead allows the user to optionally (slope_norm) normalize the underlying terrain (i.e. flatten the terrain) using a white top-hat transform. A constant slope threshold may then be used without contributing to poorer performance under steep topography. Note that this option, while useful in rugged terrain, is computationally intensive. If the point cloud is of relatively flat terrain, this option may be excluded.

While this tool is appropriately applied to LiDAR point-clouds, the remove_off_terrain_objects tool can be used to remove off-terrain objects from rasterized LiDAR digital elevation models (DEMs).

Reference

Vosselman, G. (2000). Slope based filtering of laser altimetry data. International Archives of Photogrammetry and Remote Sensing, 33(B3/2; PART 3), 935-942.

See Also

improved_ground_point_filter, remove_off_terrain_objects

Function Signature

def lidar_ground_point_filter(self, input_lidar: Optional[Lidar], search_radius: float = 2.0, min_neighbours: int = 0, slope_threshold: float = 45.0, height_threshold: float = 1.0, classify: bool = False, slope_norm: bool = True, height_above_ground: bool = False) -> Lidar: ...

lidar_hex_bin

The practice of binning point data to form a type of 2D histogram, density plot, or what is sometimes called a heatmap, is quite useful as an alternative for the cartographic display of very dense point sets. This is particularly the case when the points experience significant overlap at the displayed scale. The lidar_point_density tool can be used to perform binning based on a regular grid (raster output). This tool, by comparison, bases the binning on a hexagonal grid.

The tool is similar to the CreateHexagonalVectorGrid tool; however, it instead creates an output hexagonal grid in which each hexagonal cell possesses a COUNT attribute specifying the number of points from an input points file (LAS file) that are contained within the cell. The tool will also calculate the minimum and maximum elevations and intensity values and output these data to the attribute table.

In addition to the names of the input points file and the output Shapefile, the user must also specify the desired hexagon width (width), which is the distance between opposing sides of each hexagon. The size (s) of each side of the hexagon can then be calculated as s = w / [2 x cos(PI / 6)]. The area of each hexagon (A) is A = 3s(w / 2). The user must also specify the orientation of the grid (orientation), with options of horizontal (pointy side up) and vertical (flat side up).

See Also

vector_hex_binning, lidar_point_density, CreateHexagonalVectorGrid

Function Signature

def lidar_hex_bin(self, input_lidar: Lidar, width: float, orientation: str = "h") -> Vector: ...
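
The hexagon geometry above can be verified with a few lines of Python; the binning call below is a sketch that assumes a WbEnvironment instance and a placeholder input file name:

import math
import whitebox_workflows

w = 10.0                                   # hexagon width (distance between opposing sides)
s = w / (2.0 * math.cos(math.pi / 6.0))    # side length, s = w / [2 cos(pi/6)]
a = 3.0 * s * (w / 2.0)                    # hexagon area, A = 3s(w/2)
print(f'side = {s:.3f}, area = {a:.3f}')

wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('tile.las')         # assumed file name
hex_bins = wbe.lidar_hex_bin(lidar, width=w, orientation='h')
wbe.write_vector(hex_bins, 'hex_bins.shp')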

lidar_hillshade

Function Signature

def lidar_hillshade(self, input: Lidar, search_radius: float = -1.0, azimuth: float = 315.0, altitude: float = 30.0) -> Lidar: ...

lidar_histogram

This tool can be used to plot a histogram of data derived from a LiDAR file. The user must specify the name of the input LAS file (input), the name of the output HTML file (output), the parameter (parameter) to be plotted, and the amount (in percent) to clip the upper and lower tails of the frequency distribution (clip). The LiDAR parameters that can be plotted using lidar_histogram include the point elevations, intensity values, scan angles, and class values.

Use the lidar_point_stats tool instead to examine the spatial distribution of LiDAR points.

See Also

lidar_point_stats

Function Signature

def lidar_histogram(self, input_lidar: Lidar, output_html_file: str, parameter: str = "elevation", clip_percent: float = 1.0) -> None: ...
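
For example, a sketch of plotting an intensity histogram (the file names, and the 'intensity' parameter string inferred from the description above, are assumptions):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('tile.las')
# Writes and displays an HTML histogram of intensity values, clipping the distribution tails by 1%
wbe.lidar_histogram(lidar, 'intensity_histogram.html', parameter='intensity', clip_percent=1.0)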

lidar_idw_interpolation

This tool interpolates LiDAR files using an inverse-distance weighting (IDW) scheme. The user must specify the value of the IDW weight parameter (weight). The output grid can be based on any of the stored LiDAR point parameters (parameter), including elevation (in which case the output grid is a digital elevation model, DEM), intensity, class, return number, number of returns, scan angle, RGB (colour) values, and user data values. Similarly, the user may specify which point return values (returns) to include in the interpolation, including all points, last returns (including single return points), and first returns (including single return points).

The user must specify the grid resolution of the output raster (resolution), and optionally, the name of the input LiDAR file (input) and output raster (output). Note that if an input LiDAR file (input) is not specified by the user, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be very useful when you need to interpolate a DEM for a large number of LiDAR files. Not only does this batch processing mode enable the tool to run in a more optimized parallel manner, but it will also allow the tool to include a small buffer of points extending into adjacent tiles when interpolating an individual file. This can significantly reduce edge-effects when the output tiles are later mosaicked together. When run in this batch mode, the output file (output) also need not be specified; the tool will instead create an output file with the same name as each input LiDAR file, but with the .tif extension. This can provide a very efficient means for processing extremely large LiDAR data sets.

Users may exclude points from the interpolation based on point classification values, which follow the LAS classification scheme. Excluded classes are specified using the excluded_classes parameter. For example, to exclude all vegetation and building classified points from the interpolation, use excluded_classes=[3, 4, 5, 6]. Users may also exclude points from the interpolation if they fall below or above the minimum (min_elev) or maximum (max_elev) elevation thresholds respectively. This can be a useful means of excluding anomalously high or low points. Note that points that are classified as low points (LAS class 7) or high noise (LAS class 18) are automatically excluded from the interpolation operation.

The tool will search for the nearest input LiDAR point to each grid cell centre, up to a maximum search distance (radius). If a grid cell does not have a LiDAR point within this search distance, it will be assigned the NoData value in the output raster. In LiDAR data, these void areas are often associated with larger waterbodies. These NoData areas can later be better dealt with using the fill_missing_data tool after interpolation.

See Also

lidar_tin_gridding, lidar_nearest_neighbour_gridding, lidar_sibson_interpolation

Function Signature

def lidar_idw_interpolation(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation", returns_included: str = "all", cell_size: float = 1.0, idw_weight: float = 1.0, search_radius: float = 2.5, excluded_classes: List[int] = None, min_elev: float = float('-inf'), max_elev: float = float('inf')) -> Raster: ...
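
A minimal sketch of a single-file interpolation follows (file names are placeholders; the 'last' returns string is taken from the description above but is an assumption here). For batch mode, the description indicates the LiDAR input may simply be omitted after setting the working directory.

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('tile.las')
dem = wbe.lidar_idw_interpolation(
    lidar,
    interpolation_parameter='elevation',
    returns_included='last',
    cell_size=1.0,
    idw_weight=2.0,
    search_radius=2.5,
    excluded_classes=[3, 4, 5, 6]   # ignore vegetation and building points
)
wbe.write_raster(dem, 'dem_idw.tif')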

lidar_info

This tool can be used to print basic information about the data contained within a LAS file, used to store LiDAR data. The reported information includes data on the header, point return frequency, and classification, as well as information about the variable length records (VLRs) and geokeys. If the output_html_file is specified, the function will write the output information as an HTML file that will be automatically displayed. If this parameter is unspecified, the function will instead return a string containing the information.

Function Signature

def lidar_info(self, input_lidar: Lidar, output_html_file: str = None, show_point_density: bool = True, show_vlrs: bool = True, show_geokeys: bool = True) -> str: ...

lidar_join

This tool can be used to merge multiple LiDAR LAS files into a single output LAS file. Due to their large size, LiDAR data sets are often tiled into smaller, non-overlapping tiles. Sometimes it is more convenient to combine multiple tiles together for data processing and lidar_join can be used for this purpose.

See Also

lidar_tile

Function Signature

def lidar_join(self, inputs: List[Lidar]) -> Lidar: ...
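
For example, a sketch of merging several tiles (file names are assumptions):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
tiles = [wbe.read_lidar(f) for f in ('tile_1.las', 'tile_2.las', 'tile_3.las')]
merged = wbe.lidar_join(tiles)
wbe.write_lidar(merged, 'merged.las')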

lidar_kappa

This tool performs a kappa index of agreement (KIA) analysis on the classification values of two LiDAR (LAS) files. The output report HTML file should be displayed automatically but can also be displayed afterwards in any web browser. As a measure of overall classification accuracy, the KIA is more robust than the percent agreement calculation because it takes into account the agreement occurring by random chance. In addition to the KIA, the tool will output the producer's and user's accuracy, the overall accuracy, and the error matrix. The KIA is often used as a means of assessing the accuracy of an image classification analysis; however, the lidar_kappa tool performs the analysis on a point-to-point basis, comparing the class values of the points in one input LAS file with the corresponding nearest points in the second input LAS file.

The user must also specify the name and resolution of an output raster file, which is used to show the spatial distribution of class accuracy. Each grid cell contains the overall accuracy, i.e. the points correctly classified divided by the total number of points contained within the cell, expressed as a percentage.

Function Signature

def lidar_kappa(self, input_lidar1: Lidar, input_lidar2: Lidar, output_html_file: str, cell_size: float = 1.0, output_class_accuracy: bool = False) -> Raster: ...

lidar_nearest_neighbour_gridding

This tool grids LiDAR files using a nearest-neighbour (NN) scheme, that is, each grid cell in the output image is assigned the parameter value of the point nearest the grid cell centre. This method should not be confused with the similarly named natural-neighbour interpolation (a.k.a. Sibson's method). Nearest-neighbour gridding is generally regarded as a poor way of interpolating surfaces from low-density point sets and results in the creation of a Voronoi diagram. However, this method has several advantages when applied to LiDAR data. NN gridding is one of the fastest methods for generating raster surfaces from large LiDAR data sets. NN gridding is also one of the few interpolation methods, along with triangulation, that will preserve vertical breaks-in-slope, such as occur at the edges of buildings. This characteristic can be important when using some post-processing methods, such as the remove_off_terrain_objects tool. Furthermore, because most LiDAR data sets have remarkably high point densities compared with other types of geographic data, this approach often produces a satisfactory result; this is particularly true when the point density is high enough that there are multiple points in the majority of grid cells.

The output grid can be based on any of the stored LiDAR point parameters (parameter), including elevation (in which case the output grid is a digital elevation model, DEM), intensity, class, return number, number of returns, scan angle, RGB (colour) values, time, and user data values. Similarly, the user may specify which point return values (returns) to include in the interpolation, including all points, last returns (including single return points), and first returns (including single return points).

The user must specify the grid resolution of the output raster (resolution), and optionally, the name of the input LiDAR file (input) and output raster (output). Note that if an input LiDAR file (input) is not specified by the user, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be very useful when you need to interpolate a DEM for a large number of LiDAR files. Not only does this batch processing mode enable the tool to run in a more optimized parallel manner, but it will also allow the tool to include a small buffer of points extending into adjacent tiles when interpolating an individual file. This can significantly reduce edge-effects when the output tiles are later mosaicked together. When run in this batch mode, the output file (output) also need not be specified; the tool will instead create an output file with the same name as each input LiDAR file, but with the .tif extension. This can provide a very efficient means for processing extremely large LiDAR data sets.

Users may exclude points from the interpolation based on point classification values, which follow the LAS classification scheme. Excluded classes are specified using the excluded_classes parameter. For example, to exclude all vegetation and building classified points from the interpolation, use excluded_classes=[3, 4, 5, 6]. Users may also exclude points from the interpolation if they fall below or above the minimum (min_elev) or maximum (max_elev) elevation thresholds respectively. This can be a useful means of excluding anomalously high or low points. Note that points that are classified as low points (LAS class 7) or high noise (LAS class 18) are automatically excluded from the interpolation operation.

The tool will search for the nearest input LiDAR point to each grid cell centre, up to a maximum search distance (radius). If a grid cell does not have a LiDAR point within this search distance, it will be assigned the NoData value in the output raster. In LiDAR data, these void areas are often associated with larger waterbodies. These NoData areas can later be better dealt with using the fill_missing_data tool after interpolation.

See Also

lidar_tin_gridding, lidar_idw_interpolation, remove_off_terrain_objects, fill_missing_data

Function Signature

def lidar_nearest_neighbour_gridding(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation", returns_included: str = "all", cell_size: float = 1.0, search_radius: float = 2.5, excluded_classes: List[int] = None, min_elev: float = float('-inf'), max_elev: float = float('inf')) -> Raster: ...

lidar_point_density

Function Signature

def lidar_point_density(self, input_lidar: Optional[Lidar], returns_included: str = "all", cell_size: float = 1.0, search_radius: float = 2.5, excluded_classes: List[int] = None, min_elev: float = float('-inf'), max_elev: float = float('inf')) -> Raster: ...

lidar_point_stats

This tool creates several rasters summarizing the distribution of LiDAR points in a LAS data file. The user must specify the name of an input LAS file (input) and the output raster grid resolution (resolution). Additionally, the user must specify one or more of the possible output rasters to create using the various available flags, which include:

Flag                    Meaning
num_points              Number of points (returns) in each grid cell
num_pulses              Number of pulses in each grid cell
avg_points_per_pulse    Average number of points per pulse in each grid cell
z_range                 Elevation range within each grid cell
intensity_range         Intensity range within each grid cell
predominant_class       Predominant class value within each grid cell

If no output raster flags are specified, all of the output rasters will be created. All output rasters will have the same base name as the input LAS file but will have a suffix that reflects the statistic type (e.g. _num_pnts, _num_pulses, _avg_points_per_pulse, etc.). Output files will be in the GeoTIFF (*.tif) file format.

When the input/output parameters are not specified, the tool works on all LAS files contained within the working directory.

Notes:

  1. The num_pulses output is actually the number of pulses with at least one return; specifically it is the sum of the early returns (first and only) in a grid cell. In areas of low reflectance, such as over water surfaces, the system may have emitted a significantly higher pulse rate but far fewer returns are observed.
  2. The memory requirement of this tool is high, particularly if the grid resolution is fine and the spatial extent is large.

See Also

lidar_block_minimum, lidar_block_maximum

Function Signature

def lidar_point_stats(self, input_lidar: Optional[Lidar], cell_size: float = 1.0, num_points: bool = False, num_pulses: bool = False, avg_points_per_pulse: bool = False, z_range: bool = False, intensity_range: bool = False, predominant_class: bool = False) -> None: ...
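
As a sketch, assuming the working directory contains the LAS tiles of interest (batch mode, with no input point cloud specified; the directory path is a placeholder):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/las_tiles'   # assumed path
# With input_lidar=None, all LAS files in the working directory are processed;
# output GeoTIFFs are written alongside them with descriptive suffixes.
wbe.lidar_point_stats(None, cell_size=2.0, num_points=True, z_range=True)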

lidar_radial_basis_function_interpolation

Function Signature

def lidar_radial_basis_function_interpolation(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation", returns_included: str = "all", cell_size: float = 1.0, num_points: int = 15, excluded_classes: List[int] = None, min_elev: float = float('-inf'), max_elev: float = float('inf'), func_type: str = "thinplatespline", poly_order: str = "none", weight: float = 0.1) -> Raster: ...

lidar_ransac_planes

This tool uses the random sample consensus (RANSAC) method to identify points within a LiDAR point cloud that belong to planar surfaces. RANSAC is a common method used in the field of computer vision to identify a subset of inlier points in a noisy data set containing abundant outlier points. Because LiDAR point clouds often contain vegetation points that do not form planar surfaces, this tool can be used to largely strip vegetation points from the point cloud, leaving behind the ground returns, buildings, and other points belonging to planar surfaces. If the classify flag is used, non-planar points will not be removed but rather will be assigned a different class (1) than the planar points (0).

The algorithm selects a random sample of a specified size (num_samples) from the points within the neighbourhood (radius) surrounding each LiDAR point. The sample is then used to parameterize a planar best-fit model. The distance between each neighbouring point and the plane is then evaluated; inliers are those neighbouring points within a user-specified distance threshold (threshold). Models with at least a minimum number of inlier points (model_size) are then accepted. This process of model selection is iterated a user-specified number of times (num_iter).

One of the challenges with identifying planar surfaces in LiDAR point clouds is that these data are usually collected along scan lines. Therefore, each scan line can potentially yield a vertical planar surface, which is one reason that some vegetation points remain after applying the RANSAC plane-fitting method. To cope with this problem, the tool allows the user to specify a maximum planar slope (max_slope) parameter. Planes that have slopes greater than this threshold are rejected by the algorithm. Note, however, that this has the side-effect of also removing building walls.

References

Fischler MA and Bolles RC. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381–395.

See Also

lidar_segmentation, lidar_ground_point_filter

Function Signature

def lidar_ransac_planes(self, in_lidar: Lidar, search_radius: float = 2.0, num_iterations: int = 50, num_samples: int = 10, inlier_threshold: float = 0.15, acceptable_model_size: int = 30, max_planar_slope: float = 75.0, classify: bool = False, only_last_returns: bool = False) -> Lidar: ...
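
A usage sketch (the input file name is a placeholder), classifying rather than removing the non-planar points:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('tile.las')
planes = wbe.lidar_ransac_planes(
    lidar,
    search_radius=2.0,
    num_iterations=50,
    num_samples=10,
    inlier_threshold=0.15,
    acceptable_model_size=30,
    max_planar_slope=65.0,   # reject steep, near-vertical planes such as scan-line artifacts
    classify=True            # label non-planar points rather than dropping them
)
wbe.write_lidar(planes, 'tile_planes.las')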

lidar_remove_outliers

This tool will filter out points from a LiDAR point cloud if the absolute elevation difference between a point and the average elevation of its neighbourhood, calculated without the point, exceeds a threshold (elev_diff).

Function Signature

def lidar_remove_outliers(self, input: Lidar, search_radius: float = 2.0, elev_diff: float = 50.0, use_median: bool = False, classify: bool = False) -> Lidar: ...

lidar_rooftop_analysis

This tool can be used to identify roof segments in a LiDAR point cloud.

See Also

classify_buildings_in_lidar, clip_lidar_to_polygon

Function Signature

def lidar_rooftop_analysis(self, lidar_inputs: List[Lidar], building_footprints: Vector, search_radius: float = 2.0, num_iterations: int = 50, num_samples: int = 10, inlier_threshold: float = 0.15, acceptable_model_size: int = 30, max_planar_slope: float = 75.0, norm_diff_threshold: float = 2.0, azimuth: float = 180.0, altitude: float = 30.0) -> Vector: ...

lidar_segmentation

This tool can be used to segment a LiDAR point cloud based on differences in the orientation of fitted planar surfaces and point proximity. The algorithm begins by attempting to fit planar surfaces to all of the points within a user-specified radius (radius) of each point in the LiDAR data set. The planar equation is stored for each point for which a suitable planar model can be fit. A region-growing algorithm is then used to assign nearby points with similar planar models. Similarity is based on a maximum allowable angular difference (in degrees) between the two neighbouring points' plane normal vectors (norm_diff). The norm_diff parameter can therefore be thought of as a way of specifying the magnitude of edges mapped by the region-growing algorithm. By setting this value appropriately, it is possible to segment each facet of a building's roof. Segment edges for planar points may also be determined by a maximum allowable height difference (maxzdiff) between neighbouring points on the same plane. Points for which no suitable planar model can be fit are assigned to 'volume' (non-planar) segments (e.g. vegetation points) using a region-growing method that connects neighbouring points based solely on proximity (i.e. all volume points within radius distance are considered to belong to the same segment).

The resulting point cloud will have both planar segments (largely ground surfaces and building roofs and walls) and volume segments (largely vegetation). Each segment is assigned a random red-green-blue (RGB) colour in the output LAS file. The largest segment in any airborne LiDAR dataset will usually belong to the ground surface. This largest segment will always be assigned a dark-green RGB of (25, 120, 0) by the tool.

This tool uses the random sample consensus (RANSAC) method to identify points within a LiDAR point cloud that belong to planar surfaces. RANSAC is a common method used in the field of computer vision to identify a subset of inlier points in a noisy data set containing abundant outlier points. Because LiDAR point clouds often contain vegetation points that do not form planar surfaces, this tool can be used to largely strip vegetation points from the point cloud, leaving behind the ground returns, buildings, and other points belonging to planar surfaces. If the classify flag is used, non-planar points will not be removed but rather will be assigned a different class (1) than the planar points (0).

The algorithm selects a random sample of a specified size (num_samples) from the points within the neighbourhood (radius) surrounding each LiDAR point. The sample is then used to parameterize a planar best-fit model. The distance between each neighbouring point and the plane is then evaluated; inliers are those neighbouring points within a user-specified distance threshold (threshold). Models with at least a minimum number of inlier points (model_size) are then accepted. This process of model selection is iterated a user-specified number of times (num_iter).

One of the challenges with identifying planar surfaces in LiDAR point clouds is that these data are usually collected along scan lines. Therefore, each scan line can potentially yield a vertical planar surface, which is one reason that some vegetation points may be assigned to planes during the RANSAC plane-fitting method. To cope with this problem, the tool allows the user to specify a maximum planar slope (max_slope) parameter. Planes that have slopes greater than this threshold are rejected by the algorithm. Note, however, that this has the side-effect of also removing building walls.

References

Fischler MA and Bolles RC. 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381–395.

See Also

lidar_ransac_planes, lidar_ground_point_filter

Function Signature

def lidar_segmentation(self, in_lidar: Lidar, search_radius: float = 2.0, num_iterations: int = 50, num_samples: int = 10, inlier_threshold: float = 0.15, acceptable_model_size: int = 30, max_planar_slope: float = 75.0, norm_diff_threshold: float = 2.0, max_z_diff: float = 1.0, classes: bool = False, ground: bool = False) -> Lidar: ...

lidar_segmentation_based_filter

Function Signature

def lidar_segmentation_based_filter(self, in_lidar: Lidar, search_radius: float = 5.0, norm_diff_threshold: float = 2.0, max_z_diff: float = 1.0, classify_points: bool = False) -> Lidar: ...

lidar_shift

This tool can be used to shift the x,y,z coordinates of points within a LiDAR file. The user must specify the name of the input file (input) and the output file (output). Additionally, the user must specify the x,y,z shift values (x_shift, y_shift, z_shift). At least one non-zero shift value is needed to run the tool. Notice that shifting the x,y,z coordinates of LiDAR points is also possible using the modify_lidar tool, which can also be used for more sophisticated point property manipulation (e.g. rotations).

See Also

modify_lidar, lidar_elevation_slice, height_above_ground

Function Signature

def lidar_shift(self, input: Lidar, x_shift: float = 0.0, y_shift: float = 0.0, z_shift: float = 0.0) -> Lidar: ...

lidar_thin

Thins a LiDAR point cloud, reducing point density.

Function Signature

def lidar_thin(self, input: Lidar, resolution: float = 1.0, selection_method: str = "first", save_filtered: bool = False) -> Tuple[Lidar, Union[Lidar, None]]: ...
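
Note that the function returns a tuple: the thinned point cloud and, when save_filtered=True, a second point cloud holding the removed points (otherwise None). A sketch with assumed file names:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('tile.las')
thinned, removed = wbe.lidar_thin(lidar, resolution=0.5, selection_method='first', save_filtered=True)
wbe.write_lidar(thinned, 'tile_thinned.las')
if removed is not None:
    wbe.write_lidar(removed, 'tile_removed.las')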

lidar_thin_high_density

Thins points from high density areas within a LiDAR point cloud.

Function Signature

def lidar_thin_high_density(self, input: Lidar, density: float, resolution: float = 1.0, save_filtered: bool = False) -> Tuple[Lidar, Union[Lidar, None]]: ...

lidar_tile

This tool can be used to break a single large LiDAR LAS file into multiple, non-overlapping tiles, each output as its own LAS file. The user must specify the parameters of the tile grid, including its origin (origin_x and origin_y) and the tile width and height (width and height). Tiles containing fewer points than specified in the min_points parameter will not be output. This can be useful when tiling terrestrial LiDAR datasets because the low point density at the edges of the point cloud (i.e. most distant from the scan station) can result in poorly populated tiles containing relatively few points.

See Also

lidar_join, lidar_tile_footprint

Function Signature

def lidar_tile(self, input_lidar: Lidar, tile_width: float = 1000.0, tile_height: float = 1000.0, origin_x: float = 0.0, origin_y: float = 0.0, min_points_in_tile: int = 2, output_laz_format: bool = True) -> None: ...

lidar_tile_footprint

This tool can be used to create a vector polygon of the bounding box or convex hull of a LiDAR point cloud (i.e. LAS file). If the user specifies an input file (input) and output file (output), the tool will calculate the footprint, containing all of the data points, and output this feature to a vector polygon file. If the input and output parameters are left unspecified, the tool will calculate the footprint of every LAS file contained within the working directory and output these features to a single vector polygon file. If this is the desired mode of operation, it is important to specify the working directory (wd) containing the group of LAS files; do not specify the optional input and output parameters in this case. Each polygon in the output vector will contain a LAS_NM field, specifying the source LAS file name, a NUM_PNTS field, containing the number of points within the source file, and Z_MIN and Z_MAX fields, containing the minimum and maximum elevations. This output can therefore be useful to create an index map of a large tiled LiDAR dataset.

By default, this tool identifies the axis-aligned minimum rectangular hull, or bounding box, containing the points in each of the input tiles. If the user specifies the hull flag, the tool will identify the minimum convex hull instead of the bounding box. This option is considerably more computationally intensive and will be a far longer running operation if many tiles are specified as inputs.

A note on LAZ file inputs: While WhiteboxTools does not currently support the reading and writing of the compressed LiDAR format LAZ, it is able to read LAZ file headers. This tool, when run in the bounding box mode (rather than the convex hull mode), is able to take LAZ input files.

See Also

lidar_tile, LayerFootprint, minimum_bounding_box, minimum_convex_hull

Function Signature

def lidar_tile_footprint(self, input_lidar: Optional[Lidar], output_hulls: bool = False) -> Vector: ...

lidar_tin_gridding

This tool creates a raster grid based on a Delaunay triangular irregular network (TIN) fitted to LiDAR points. The output grid can be based on any of the stored LiDAR point parameters (parameter), including elevation (in which case the output grid is a digital elevation model, DEM), intensity, class, return number, number of returns, scan angle, RGB (colour) values, and user data values. Similarly, the user may specify which point return values (returns) to include in the interpolation, including all points, last returns (including single return points), and first returns (including single return points).

The user must specify the grid resolution of the output raster (resolution), and optionally, the name of the input LiDAR file (input) and output raster (output). Note that if an input LiDAR file (input) is not specified by the user, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be very useful when you need to interpolate a DEM for a large number of LiDAR files. Not only does this batch processing mode enable the tool to run in a more optimized parallel manner, but it will also allow the tool to include a small buffer of points extending into adjacent tiles when interpolating an individual file. This can significantly reduce edge-effects when the output tiles are later mosaicked together. When run in this batch mode, the output file (output) also need not be specified; the tool will instead create an output file with the same name as each input LiDAR file, but with the .tif extension. This can provide a very efficient means for processing extremely large LiDAR data sets.

Users may exclude points from the interpolation based on point classification values, which follow the LAS classification scheme. Excluded classes are specified using the excluded_classes parameter. For example, to exclude all vegetation and building classified points from the interpolation, use excluded_classes=[3, 4, 5, 6]. Users may also exclude points from the interpolation if they fall below or above the minimum (min_elev) or maximum (max_elev) elevation thresholds respectively. This can be a useful means of excluding anomalously high or low points. Note that points that are classified as low points (LAS class 7) or high noise (LAS class 18) are automatically excluded from the interpolation operation.

Triangulation will generally completely fill the convex hull containing the input point data. This can sometimes result in very long and narrow triangles at the edges of the data or connecting vertices on either side of void areas. In LiDAR data, these void areas are often associated with larger waterbodies, and triangulation can result in very unnatural interpolated patterns within these areas. To avoid this problem, the user may specify the maximum allowable triangle edge length (max_triangle_edge_length), and all grid cells within triangular facets with edges larger than this threshold are simply assigned the NoData value in the output raster. These NoData areas can later be better dealt with using the fill_missing_data tool after interpolation.

See Also

lidar_idw_interpolation, lidar_nearest_neighbour_gridding, filter_lidar_classes, fill_missing_data

Function Signature

def lidar_tin_gridding(self, input_lidar: Optional[Lidar], interpolation_parameter: str = "elevation", returns_included: str = "all", cell_size: float = 1.0, excluded_classes: List[int] = None, min_elev: float = float('-inf'), max_elev: float = float('inf'), max_triangle_edge_length: float = float('inf')) -> Raster: ...
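
A sketch of gridding last-return elevations to a 1 m DEM while suppressing long triangles over waterbodies (the file names, and the 'last' returns string inferred from the description above, are assumptions):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('tile.las')
dem = wbe.lidar_tin_gridding(
    lidar,
    interpolation_parameter='elevation',
    returns_included='last',
    cell_size=1.0,
    excluded_classes=[7, 18],          # noise classes (also excluded automatically)
    max_triangle_edge_length=25.0      # assign NoData inside long, thin triangles
)
wbe.write_raster(dem, 'dem_tin.tif')

Any remaining NoData gaps can then be filled with the fill_missing_data tool, as noted above.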

lidar_tophat_transform

This tool performs a white top-hat transform on a LiDAR point cloud (input). A top-hat transform is a common digital image processing operation used for various tasks, such as feature extraction, background equalization, and image enhancement. When applied to a LiDAR point cloud, the white top-hat transform provides an estimate of height above ground, which is useful for modelling the vegetation canopy.

As an example, consider an input point cloud with a substantial amount of topographic variability. After applying the top-hat transform, this topographic variability is removed and point elevation values effectively become heights above ground.

The white top-hat transform is defined as the difference between a point's original elevation and its opening. The opening operation can be thought of as the local neighbourhood maximum of a previous local minimum surface. The user must specify the size of the neighbourhood using the radius parameter. Setting this parameter can require some experimentation. Generally, it is appropriate to use a radius of a few meters in non-urban landscapes. However, in urban areas, the radius may need to be set much larger, reflective of the size of the largest building.

If the input point cloud already has ground points classified, it may be better to use the height_above_ground function, which simply measures the difference in height between each point and its nearest ground-classified point within the search radius.

See Also

height_above_ground, tophat_transform, closing, opening

Function Signature

def lidar_tophat_transform(self, input: Lidar, search_radius: float) -> Lidar: ...
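
A sketch of estimating height above ground with the top-hat transform (the input file name is a placeholder; a larger radius would be needed in built-up areas):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('forest_tile.las')
hag = wbe.lidar_tophat_transform(lidar, search_radius=5.0)  # point z values become height above ground
wbe.write_lidar(hag, 'forest_tile_hag.las')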

line_detection_filter

This tool can be used to perform one of four 3x3 line-detection filters on a raster image. These filters can be used to find one-cell-thick vertical, horizontal, or angled (135-degrees or 45-degrees) lines in an image. Notice that line-finding is a similar application to edge-detection. Common edge-detection filters include the Sobel and Prewitt filters. The kernel weights for each of the four line-detection filters are as follows:

'v' (Vertical)

-1   2  -1
-1   2  -1
-1   2  -1

'h' (Horizontal)

-1  -1  -1
 2   2   2
-1  -1  -1

'45' (Northeast-Southwest)

-1  -1   2
-1   2  -1
 2  -1  -1

'135' (Northwest-Southeast)

 2  -1  -1
-1   2  -1
-1  -1   2

The user must specify the variant, one of 'v', 'h', '45', or '135', for the vertical, horizontal, northeast-southwest, and northwest-southeast directions respectively. The user may also optionally clip the output image distribution tails by a specified amount (e.g. 1%).

See Also

prewitt_filter, sobel_filter

Function Signature

def line_detection_filter(self, raster: Raster, variant: str = "v", abs_values: bool = False, clip_tails: float = 0.0) -> Raster: ...

line_intersections

This tool identifies points where the features of two vector line/polygon layers intersect. The user must specify the names of two input vector line files and the output file. The output file will be a vector of POINT VectorGeometryType. If the input vectors intersect at a line segment, the beginning and end vertices of the segment will be present in the output file. A warning is issued if intersection line segments are identified during analysis. If no intersections are found between the input line files, the output file will not be saved and a warning will be issued.

Each intersection point will contain PARENT1 and PARENT2 attribute fields, identifying the intersecting features in the first and second input line files respectively. Additionally, the output attribute table will contain all of the attributes (excluding FIDs) of the two parent line features.

Function Signature

def line_intersections(self, input1: Vector, input2: Vector) -> Vector: ...

line_thinning

This image processing tool reduces all polygons in a Boolean raster image to their single-cell wide skeletons. This operation is sometimes called line thinning or skeletonization. In fact, the input image need not be truly Boolean (i.e. contain only 1's and 0's). All non-zero, positive values are considered to be foreground pixels while all zero valued cells are considered background pixels. The remove_spurs tool is useful for cleaning up an image before performing a line thinning operation.

Note: Unlike other filter-based operations in WhiteboxTools, this algorithm can't easily be parallelized because the output raster must be read and written to during the same loop.

See Also

remove_spurs, thicken_raster_line

Function Signature

def line_thinning(self, raster: Raster) -> Raster: ...

linearity_index

This tool calculates the linearity index of polygon features based on a regression analysis. The index is simply the coefficient of determination (r-squared) calculated from a regression analysis of the x and y coordinates of the exterior hull nodes of a vector polygon. Linearity index is a measure of how well a polygon can be described by a straight line. It is a related index to the elongation_ratio, but is more efficient to calculate as it does not require finding the minimum bounding box. The Pearson correlation coefficient between linearity index and the elongation ratio for a large data set of lake polygons in northern Canada was found to be 0.656, suggesting a moderate level of association between the two measures of polygon linearity. Note that this index is not useful for identifying narrow yet sinuous polygons, such as meandering rivers.

The only required input is the name of the file. The linearity values calculated for each vector polygon feature will be placed in the accompanying attribute table as a new field (LINEARITY).

See Also

elongation_ratio, patch_orientation

Function Signature

def linearity_index(self, input: Vector) -> Vector: ...

lines_to_polygons

This tool converts vector polylines into polygons. Note that this tool will close polygons that are open and will ensure that the first part of an input line is interpreted as the polygon hull and subsequent parts are considered holes. The tool does not examine input lines for line crossings (self intersections), which are topological errors.

See Also

polygons_to_lines

Function Signature

def lines_to_polygons(self, input: Vector) -> Vector: ...

list_unique_values

This tool can be used to list each of the unique values contained within a categorical field of an input vector file's attribute table. The tool outputs an HTML formatted report (output) containing a table of the unique values and their frequency of occurrence within the data. The user must specify the name of an input shapefile (input) and the name of one of the fields (field) contained in the associated attribute table. The specified field should not contain floating-point numerical data, since the number of categories will likely equal the number of records, which may be quite large. The tool effectively provides tabular output similar to the graphical output provided by the attribute_histogram tool, which can also be applied to continuous data.

See Also

attribute_histogram

Function Signature

def list_unique_values(self, input: Vector, field_name: str) -> Tuple[str, int]: ...

list_unique_values_raster

This function can be used to list each of the unique values contained within a categorical raster (raster). The tool outputs a string containing a comma-separated values (CSV) table of the unique values and their frequency of occurrence within the data. The input raster should not contain continuous floating-point numerical data, because the number of categories will likely equal the number of pixels, which may be quite large.

See Also

list_unique_values

Function Signature

def list_unique_values_raster(self, raster: Raster) -> str: ...

long_profile

This tool can be used to create a longitudinal profile plot. A longitudinal stream profile is a plot of elevation against downstream distance. Most long profiles use distance from channel head as the distance measure. This tool, however, uses the distance to the stream network outlet cell, or mouth, as the distance measure. The reason for this difference is that while for any one location within a stream network there is only ever one downstream outlet, there are usually many upstream channel heads. Thus, when plotted using the traditional downstream-distance method, the same point within a network will plot in many different long profile locations, whereas it will always plot at one unique location using the distance-to-mouth method. One consequence of this difference is that the long profile will be oriented from right-to-left rather than left-to-right, as would traditionally be the case.

The tool outputs an interactive SVG line graph embedded in an HTML document (output_html_file). The user must input a D8 pointer (flow direction) raster (d8_pointer), a streams raster image (streams_raster), and a digital elevation model (dem). Stream cells are designated in the streams image as all positive, nonzero values. Thus all non-stream or background grid cells are commonly assigned either zeros or NoData values. The pointer image is used to traverse the stream network and should only be created using the D8 algorithm (d8_pointer). The streams image should be derived using a flow accumulation based stream network extraction algorithm, also based on the D8 flow algorithm.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, set esri_pointer=True.

See Also

long_profile_from_points, profile, d8_pointer

Function Signature

def long_profile(self, d8_pointer: Raster, streams_raster: Raster, dem: Raster, output_html_file: str, esri_pointer: bool = False) -> None: ...
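
A sketch, assuming the D8 pointer and streams rasters have already been produced from the same DEM (file names are placeholders):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem_filled.tif')       # depressionless DEM
pointer = wbe.read_raster('d8_pointer.tif')   # D8 flow directions derived from the DEM
streams = wbe.read_raster('streams.tif')      # raster stream network (non-zero stream cells)
wbe.long_profile(pointer, streams, dem, 'long_profile.html', esri_pointer=False)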

long_profile_from_points

This tool can be used to create a longitudinal profile plot for a set of vector points (points). A longitudinal stream profile is a plot of elevation against downstream distance. Most long profiles use distance from channel head as the distance measure. This tool, however, uses the distance to the outlet cell, or mouth, as the distance measure.

The tool outputs an interactive SVG line graph embedded in an HTML document (output_html_file). The user must input a D8 pointer (d8_pointer) image (flow direction), a vector points file (points), and a digital elevation model (dem). The pointer image is used to traverse the flow path issuing from each initiation point in the vector file; this pointer file should only be created using the D8 algorithm (d8_pointer).

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the esri_pointer parameter must be specified.

See Also

long_profile, profile, d8_pointer

Function Signature

def long_profile_from_points(self, d8_pointer: Raster, points: Vector, dem: Raster, output_html_file: str, esri_pointer: bool = False) -> None: ...

longest_flowpath

This tool delineates the longest flowpaths for a group of subbasins or watersheds. Flowpaths are initiated along drainage divides and continue along the D8-defined flow direction until either the subbasin outlet or DEM edge is encountered. Each input subbasin/watershed will have an associated vector flowpath in the output image. longest_flowpath is similar to the r.lfp plugin tool for GRASS GIS. The length of the longest flowpath draining to an outlet is related to the time of concentration, which is a parameter used in certain hydrological models.

The user must input the filename of a digital elevation model (DEM), a basins raster, and the output vector. The DEM must be depressionless and should have been pre-processed using the breach_depressions_least_cost or fill_depressions tool. The basins raster must contain features that are delineated by categorical (integer valued) unique identifier values. All non-NoData, non-zero valued grid cells in the basins raster are interpreted as belonging to features. In practice, this tool is usually run using either a single watershed, a group of contiguous non-overlapping watersheds, or a series of nested subbasins. These are often derived using the watershed tool, based on a series of input outlets, or the subbasins tool, based on an input stream network. If subbasins are input to longest_flowpath, each traced flowpath will include only the non-overlapping portions within nested areas. Therefore, this can be a convenient method of delineating the longest flowpath to each bifurcation in a stream network.

The output vector file will contain fields in the attribute table that identify the associated basin unique identifier (BASIN), the elevation of the flowpath source point on the divide (UP_ELEV), the elevation of the outlet point (DN_ELEV), the length of the flowpath (LENGTH), and finally, the average slope (AVG_SLOPE) along the flowpath, measured as a percent grade.

See Also

max_upslope_flowpath_length, breach_depressions_least_cost, fill_depressions, watershed, subbasins

Function Signature

def longest_flowpath(self, dem: Raster, basins: Raster) -> Vector: ...

lowest_position

This tool identifies the stack position (index) of the minimum value within a raster stack on a cell-by-cell basis. For example, if five raster images (inputs) are input to the tool, the output raster (output) would show which of the five input rasters contained the lowest value for each grid cell. The index value in the output raster is the zero-order number of the raster stack, i.e. if the lowest value in the stack is contained in the first image, the output value would be 0; if the lowest stack value were the second image, the output value would be 1, and so on. If any of the cell values within the stack is NoData, the output raster will contain the NoData value for the corresponding grid cell. The index value is related to the order of the input images.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

highest_position, pick_from_list

Function Signature

def lowest_position(self, input_rasters: List[Raster]) -> Raster: ...
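
For example, a sketch comparing a stack of monthly rasters (file names are assumptions):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
stack = [wbe.read_raster(f) for f in ('jan.tif', 'feb.tif', 'mar.tif')]
position = wbe.lowest_position(stack)   # 0 where jan.tif is lowest, 1 for feb.tif, 2 for mar.tif
wbe.write_raster(position, 'lowest_position.tif')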

majority_filter

This tool performs a majority (modal) filter on an input image (input). A majority filter assigns to each cell in the output grid the most frequently occurring value within a moving window centred on the corresponding grid cell in the input raster.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filterx and filtery flags. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

total_filter

Function Signature

def majority_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

map_off_terrain_objects

This tool can be used to map off-terrain objects in a digital surface model (DSM) based on cell-to-cell differences in elevations and local slopes. The algorithm works by using a region-growing operation to connect neighbouring grid cells outwards from seed cells. Two neighbouring cells are considered connected if the slope between the two cells is less than the user-specified maximum slope value (max_slope). Mapped segments that are less than the minimum feature size (min_size), in grid cells, are assigned a common background value. Note that this method of mapping off-terrain objects, and thereby separating ground cells from non-ground objects in DSMs, works best with fine-resolution DSMs that have been interpolated using a non-smoothing method, such as triangulation (TINing) or nearest-neighbour interpolation.

See Also

remove_off_terrain_objects

Function Signature

def map_off_terrain_objects(self, dem: Raster, max_slope: float = float('inf'), min_feature_size: int = 0) -> Raster: ...

max_absolute_overlay

This tool can be used to find the maximum absolute (non-negative) value in each cell of a grid from a set of input images (inputs). NoData values in any of the input images will result in a NoData pixel in the output image.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

max_overlay, min_absolute_overlay, min_overlay

Function Signature

def max_absolute_overlay(self, input_rasters: List[Raster]) -> Raster: ...

max_anisotropy_dev

Calculates the maximum anisotropy (directionality) in elevation deviation over a range of spatial scales.

Function Signature

def max_anisotropy_dev(self, dem: Raster, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> Tuple[Raster, Raster]: ...

max_anisotropy_dev_signature

Function Signature

def max_anisotropy_dev_signature(self, dem: Raster, points: Vector, output_html_file: str, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> None: ...

max_branch_length

Maximum branch length (Bmax) is the longest branch length between a grid cell's flowpath and the flowpaths initiated at each of its neighbours. It can be conceptualized as the downslope distance that a volume of water that is split into two portions by a drainage divide would travel before reuniting.

If the two flowpaths of neighbouring grid cells do not intersect, Bmax is simply the flowpath length from the starting cell to its terminus at the edge of the grid or a cell with undefined flow direction (i.e. a pit cell either in a topographic depression or at the edge of a major body of water).

The pattern of Bmax derived from a DEM should be familiar to anyone who has interpreted upslope contributing area images. In fact, Bmax can be thought of as the complement of upslope contributing area. Whereas contributing area is greatest along valley bottoms and lowest at drainage divides, Bmax is greatest at divides and lowest along channels. The two topographic attributes are also distinguished by their units of measurements; Bmax is a length rather than an area. The presence of a major drainage divide between neighbouring grid cells is apparent in a Bmax image as a linear feature, often two grid cells wide, of relatively high values. This property makes Bmax a useful land surface parameter for mapping ridges and divides.

Bmax is useful in the study of landscape structure, particularly with respect to drainage patterns. The index gives the relative significance of a specific location along a divide, with respect to the dispersion of materials across the landscape, in much the same way that stream ordering can be used to assess stream size.

See Also

flow_length_diff

Reference

Lindsay JB, Seibert J. 2013. Measuring the significance of a divide to local drainage patterns. International Journal of Geographical Information Science, 27: 1453-1468. DOI: 10.1080/13658816.2012.705289

Function Signature

def max_branch_length(self, dem: Raster, log_transform: bool = False) -> Raster: ...

max_difference_from_mean

Calculates the maximum difference from mean elevation over a range of spatial scales.

Function Signature

def max_difference_from_mean(self, dem: Raster, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> Tuple[Raster, Raster]: ...

max_downslope_elev_change

This tool calculates the maximum elevation drop between each grid cell and its neighbouring cells within a digital elevation model (DEM). The user must input a DEM (dem).

See Also

max_upslope_elev_change, min_downslope_elev_change, num_downslope_neighbours

Function Signature

def max_downslope_elev_change(self, raster: Raster) -> Raster: ...

max_elevation_dev_signature

Tool documentation not located.

Function Signature

def max_elevation_dev_signature(self, dem: Raster, points: Vector, output_html_file: str, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> None: ...

max_elevation_deviation

This tool can be used to calculate the maximum deviation from mean elevation, DEVmax (Lindsay et al. 2015), for each grid cell in a digital elevation model (DEM) across a range of specified spatial scales. DEV is an elevation residual index and is essentially equivalent to a local elevation z-score. This attribute measures the relative topographic position as a fraction of local relief, and so is normalized to the local surface roughness. The multi-scaled calculation of DEVmax utilizes an integral image approach (Crow, 1984) to ensure highly efficient filtering that is invariant with filter size, which is the algorithm characteristic that allows for this densely sampled multi-scale analysis. In this way, max_elevation_deviation allows users to estimate the locally optimal scale with which to estimate DEV on a pixel-by-pixel basis. This multi-scaled version of local topographic position can reveal significant terrain characteristics and can aid with soil, vegetation, landform, and other mapping applications that depend on geomorphometric characterization.

The user must input a digital elevation model (DEM) (dem). The range of scales that are evaluated in calculating DEVmax are determined by the user-specified min_scale, max_scale, and step parameters. All filter radii between the minimum and maximum scales, increasing by step, will be evaluated. The scale parameters are in units of grid cells and specify kernel size "radii" (r), such that:

d = 2r + 1

That is, a radius of 1, 2, 3... yields square filters of dimension (d) 3 x 3, 5 x 5, 7 x 7...

DEV is estimated at each tested filter size and every grid cell is assigned the maximum DEV value across the evaluated scales.

Two output rasters will be generated, including the magnitude (DEVmax) and a second raster that assigns each pixel the scale at which DEVmax is encountered (DEVscale). The DEVscale raster can be very useful for revealing multi-scale landscape structure.

Reference

Lindsay J, Cockburn J, Russell H. 2015. An integral image approach to performing multi-scale topographic position analysis. Geomorphology, 245: 51-61.

See Also

DevFromMeanElev, max_difference_from_mean, multiscale_elevation_percentile

Function Signature

def max_elevation_deviation(self, dem: Raster, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> Tuple[Raster, Raster]: ...
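
Note that two rasters are returned as a tuple: the DEVmax magnitude and the scale at which it occurs. A sketch with an assumed input DEM file:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')
dev_max, dev_scale = wbe.max_elevation_deviation(dem, min_scale=1, max_scale=100, step_size=5)
wbe.write_raster(dev_max, 'dev_max.tif')
wbe.write_raster(dev_scale, 'dev_max_scale.tif')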

max_overlay

This tool can be used to find the maximum value in each cell of a grid from a set of input images (inputs). NoData values in any of the input images will result in a NoData pixel in the output image (output). It is similar to the Max mathematical tool, except that it will accept more than two input images.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

min_overlay, max_absolute_overlay

Function Signature

def max_overlay(self, input_rasters: List[Raster]) -> Raster: ...

max_procs

Determines the number of processors used by functions that are parallelized. If set to -1 (wbe.max_procs=-1), the default, all available processors will be used. To throttle tools, set max_procs to a positive whole number less than the number of system processors.
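
For example:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.max_procs = 4    # limit parallelized functions to four processors
# ... run processing ...
wbe.max_procs = -1   # restore the default (use all available processors)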

max_upslope_elev_change

This tool calculates the maximum elevation change between each grid cell and its upslope neighbouring cells within a digital elevation model (DEM). The user must input a DEM (dem).

See Also

max_downslope_elev_change

Function Signature

def max_upslope_elev_change(self, raster: Raster) -> Raster: ...

max_upslope_flowpath_length

This tool calculates the maximum length of the flowpaths that run through each grid cell (in map horizontal units) in an input digital elevation model (dem). The tool works by first calculating the D8 flow pointer (d8_pointer) from the input DEM. The DEM must be depressionless and should have been pre-processed using the breach_depressions_least_cost or fill_depressions tool. The user must also specify the name of output raster (output).

See Also

d8_pointer, breach_depressions_least_cost, fill_depressions, average_upslope_flowpath_length, downslope_flowpath_length, downslope_distance_to_stream

Function Signature

def max_upslope_flowpath_length(self, dem: Raster) -> Raster: ...

max_upslope_value

This tool calculates, for each grid cell in an input digital elevation model (dem), the maximum value within an input values raster (values_raster) among all of the cells upslope of (i.e. draining to) that cell. The tool works by first calculating the D8 flow pointer (d8_pointer) from the input DEM. The DEM must be depressionless and should have been pre-processed using the breach_depressions_least_cost or fill_depressions tool.

See Also

d8_pointer, breach_depressions_least_cost, fill_depressions, average_upslope_flowpath_length, downslope_flowpath_length, downslope_distance_to_stream

Function Signature

def max_upslope_value(self, dem: Raster, values_raster: Raster) -> Raster: ...

maximal_curvature

This tool calculates the maximal curvature from a digital elevation model (DEM). Maximal curvature is the curvature of a principal section with the highest value of curvature at a given point of the topographic surface (Florinsky, 2017). The values of this curvature are unbounded, and positive values correspond to ridge positions while negative values are indicative of closed depressions (Florinsky, 2016). Maximal curvature is measured in units of m^-1.

The user must input a DEM (dem). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z Conversion Factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) × ln(1 + 10^n × |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

minimal_curvature, tangential_curvature, profile_curvature, plan_curvature, mean_curvature, gaussian_curvature

Function Signature

def maximal_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

maximum_filter

This tool assigns to each cell in the output grid the maximum value within a moving window centred on the corresponding grid cell in the input raster (input). A maximum filter is the equivalent of the mathematical morphological dilation operator.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filterx and filtery flags. These dimensions should be odd, positive integer values, e.g. 3, 5, 7, 9... If the kernel filter size is the same in the x and y dimensions, the silent filter flag may be used instead (command-line interface only).

This tool takes advantage of the redundancy between overlapping, neighbouring filters to enhance computational efficiency. Like most of WhiteboxTools' filters, it is also parallelized for further efficiency.

See Also

minimum_filter

Function Signature

def maximum_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

mdinf_flow_accum

This tool is used to generate a flow accumulation grid (i.e. contributing area) using the MD-infinity algorithm (Seibert and McGlynn, 2007). This algorithm is an example of a multiple-flow-direction (MFD) method because the flow entering each grid cell is routed to one or two downslope neighbours, i.e. flow divergence is permitted. The user must specify the name of the input digital elevation model (dem). The DEM should have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using the breach_depressions_least_cost or fill_depressions tool.

In addition to the input DEM (dem), the user must specify the output type (out_type). The output flow-accumulation can be 1) specific catchment area (SCA), which is the upslope contributing area divided by the contour length (taken as the grid resolution), 2) total catchment area in square metres, or 3) the number of upslope grid cells. The user must also specify whether the output flow-accumulation grid should be log-transformed, i.e. if this option is selected, the output will be the natural logarithm of the accumulated area. This transformation is often performed to better visualize the contributing-area distribution. Because contributing areas tend to be very high along valley bottoms and relatively low on hillslopes, when a flow-accumulation image is displayed, the distribution of values on hillslopes tends to be 'washed out' because the palette is stretched out to represent the highest values. Log-transformation (log) provides a means of compensating for this phenomenon. Importantly, however, log-transformed flow-accumulation grids must not be used to estimate other secondary terrain indices, such as the wetness index or relative stream power index.

Grid cells possessing the NoData value in the input DEM raster are assigned the NoData value in the output flow-accumulation image. The output raster is of the float data type and continuous data scale.

Reference

Seibert, J. and McGlynn, B.L., 2007. A new triangular multiple flow direction algorithm for computing upslope areas from gridded digital elevation models. Water resources research, 43(4).

See Also

d8_flow_accumulation, fd8_flow_accumulation, quinn_flow_accumulation, qin_flow_accumulation, dinf_flow_accumulation, rho8_pointer, breach_depressions_least_cost

Function Signature

def mdinf_flow_accum(self, dem: Raster, out_type: str = "sca", exponent: float = 1.1, convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False) -> Raster: ...
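
For example, a minimal sketch (standard WbEnvironment setup assumed; file names are placeholders) that pre-processes a DEM and computes log-transformed specific catchment area:

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')
filled = wbe.fill_depressions(dem)  # hydrological pre-processing, default parameters assumed
sca = wbe.mdinf_flow_accum(filled, out_type='sca', log_transform=True)
wbe.write_raster(sca, 'mdinf_sca.tif')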

mean_curvature

This tool calculates the mean curvature from a digital elevation model (DEM). Mean curvature is the average of the curvatures of any two orthogonal normal sections of the surface, e.g. the average of the profile and tangential curvatures, and characterizes the overall convexity or concavity of the landscape at a point (Gallant and Wilson, 2000). Curvature is the second derivative of the topographic surface defined by a DEM. The user must input a DEM (dem). WhiteboxTools reports curvature in radians multiplied by 100 for easier interpretation because curvature values are typically very small. The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z Conversion Factor. If the DEM is in the geographic coordinate system (latitude and longitude), the following equation is used:

zfactor = 1.0 / (111320.0 × cos(mid_lat))

where mid_lat is the latitude of the centre of the raster, in radians.

The algorithm uses the same curvature formulations as Gallant and Wilson (2000). Positive values indicate overall surface convexity at a cell, while negative values indicate overall concavity.

Reference

Gallant, J. C., and J. P. Wilson, 2000, Primary topographic attributes, in Terrain Analysis: Principles and Applications, edited by J. P. Wilson and J. C. Gallant pp. 51-86, John Wiley, Hoboken, N.J.

See Also

profile_curvature, tangential_curvature, total_curvature, slope, aspect

Function Signature

def mean_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

mean_filter

This tool performs a mean filter operation on a raster image. A mean filter, a type of low-pass filter, can be used to emphasize the longer-range variability in an image, effectively acting to smooth the image. This can be useful for reducing the noise in an image. This tool utilizes an integral image approach (Crow, 1984) to ensure highly efficient filtering that is invariant to filter size. The algorithm operates by calculating the average value in a moving window centred on each grid cell. Neighbourhood size, or filter size, is specified in the x and y dimensions using the filterx and filtery flags. These dimensions should be odd, positive integer values, e.g. 3, 5, 7, 9... If the kernel filter size is the same in the x and y dimensions, the silent filter flag may be used instead (command-line interface only).

Although commonly applied in digital image processing, mean filters are generally considered quite harsh in their impact on the image compared with other smoothing filters, such as the edge-preserving bilateral_filter, median_filter, olympic_filter, and edge_preserving_mean_filter, and even the gaussian_filter.

This tool works with both greyscale and red-green-blue (RGB) images. RGB images are decomposed into intensity-hue-saturation (IHS) components and the filter is applied to the intensity channel. NoData values in the input image are ignored during filtering, and sites beyond the edges of the raster are treated as NoData.

Reference

Crow, F. C. (1984, January). Summed-area tables for texture mapping. In ACM SIGGRAPH computer graphics (Vol. 18, No. 3, pp. 207-212). ACM.

See Also

bilateral_filter, edge_preserving_mean_filter, gaussian_filter, median_filter, rgb_to_ihs

Function Signature

def mean_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...
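
A minimal smoothing sketch (assuming the usual WbEnvironment setup; file names are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
image = wbe.read_raster('image.tif')
smoothed = wbe.mean_filter(image, filter_size_x=5, filter_size_y=5)
wbe.write_raster(smoothed, 'image_smoothed.tif')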

median_filter

This tool performs a median filter on a raster image. Median filters, a type of low-pass filter, can be used to emphasize the longer-range variability in an image, effectively acting to smooth the image. This can be useful for reducing the noise in an image. The algorithm operates by calculating the median value (middle value in a sorted list) in a moving window centred on each grid cell. Specifically, this tool uses the efficient running-median filtering algorithm of Huang et al. (1979). The median value is not influenced by anomalously high or low values in the distribution to the extent that the average is. As such, the median filter is far less sensitive to shot noise in an image than the mean filter.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filterx and filtery flags. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

Reference

Huang, T., Yang, G.J.T.G.Y. and Tang, G., 1979. A fast two-dimensional median filtering algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing, 27(1), pp.13-18.

See Also

bilateral_filter, edge_preserving_mean_filter, gaussian_filter, mean_filter

Function Signature

def median_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sig_digits: int = 2) -> Raster: ...

medoid

This tool calculates the medoid for a series of vector features contained in a shapefile. The medoid of a two-dimensional feature is conceptually similar to its centroid, or mean position, but the medoid is always a member of the input feature data set. Thus, the medoid is a measure of central tendency that is robust in the presence of outliers. If the input vector is of a POLYLINE or POLYGON VectorGeometryType, the nodes of each feature will be used to estimate the feature medoid. If the input vector is of a POINT VectorGeometryType, the medoid will be calculated for the collection of points. While there is more than one competing method for calculating the medoid, this tool uses an algorithm that works as follows:

  1. The x-coordinate and y-coordinate of each point/node are placed into two arrays.
  2. The x- and y-coordinate arrays are then sorted and the median x-coordinate (Med X) and median y-coordinate (Med Y) are calculated.
  3. The point/node in the dataset that is nearest the point (Med X, Med Y) is identified as the medoid.

See Also

centroid_vector

Function Signature

def medoid(self, input: Vector) -> Vector: ...

merge_line_segments

Vector lines can sometimes contain two features that are connected by a shared end vertex. This tool identifies connected line features in an input vector file (input) and merges them in the output file (output). Two line features are merged if their ends are coincident, and are not coincident with any other feature (i.e. a bifurcation junction). End vertices are considered to be coincident if they are within the specified snap distance (snap).

See Also

split_with_lines

Function Signature

def merge_line_segments(self, input: Vector, snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

merge_table_with_csv

This tool can be used to merge a vector's attribute table with data contained within a comma separated values (CSV) text file. CSV files store tabular data (numbers and text) in plain-text form such that each row is a record and each column a field. Fields are typically separated by commas although the tool will also support semicolon-, tab-, and space-delimited files. The user must specify the name of the vector (and associated attribute file) as well as the primary key within the table. The primary key (pkey flag) is the field within the table being appended to that serves as the unique identifier. Additionally, the user must specify the name of a CSV text file with either a *.csv or *.txt extension. The file must possess a header row, i.e. the first row must contain information about the names of the various fields. The foreign key (fkey flag), that is the identifying field within the CSV file that corresponds with the data contained within the primary key in the table, must also be specified. Both the primary and foreign keys should either be strings (text) or integer values. Fields containing decimal values are not good candidates for keys. Lastly, the user may optionally specify the name of a field within the CSV file to import in the merge operation (import_field flag). If this flag is not specified, all of the fields within the CSV, with the exception of the foreign key, will be appended to the attribute table.

Merging works for one-to-one and many-to-one database relations. A one-to-one relation exists when each record in the attribute table corresponds to one record in the second table and each primary key is unique. Since each record in the attribute table is associated with a geospatial feature in the vector, an example of a one-to-one relation may be where the second file contains AREA and PERIMETER fields for each polygon feature in the vector. This is the most basic type of relation. A many-to-one relation would exist when each record in the first attribute table corresponds to one record in the second file and the primary key is NOT unique. Consider as an example a vector and attribute table associated with a world map of countries. Each country has one or more polygon features in the shapefile, e.g. Canada has its mainland and many hundreds of large islands. You may want to append a table containing data about the population and area of each country. In this case, the COUNTRY columns in the attribute table and the second file serve as the primary and foreign keys respectively. While there may be many duplicate primary keys (all of those Canadian polygons) each will correspond to only one foreign key containing the population and area data. This is a many-to-one relation. This tool does not support one-to-many nor many-to-many relations.

See Also

join_tables, reinitialize_attribute_table, export_table_to_csv

Function Signature

def merge_table_with_csv(self, primary_vector: Vector, primary_key_field: str, foreign_csv_filename: str, foreign_key_field: str, import_field: str = "") -> None: ...
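
A minimal sketch of the merge (standard WbEnvironment setup assumed; the field and file names are placeholders, and the attribute table of the input vector is assumed to be updated in place since the function returns None):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
parcels = wbe.read_vector('parcels.shp')
wbe.merge_table_with_csv(parcels, 'PARCEL_ID', 'parcel_values.csv', 'ID', import_field='ASSESSED')
wbe.write_vector(parcels, 'parcels_merged.shp')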

merge_vectors

Combines two or more input vectors of the same VectorGeometryType, creating a single, new output vector. Importantly, the attribute table of the output vector will contain the ubiquitous file-specific FID, the parent file name, the parent FID, and the list of attribute fields that are shared among each of the input files. For a field to be considered common between tables, it must have the same name and field_type (i.e. data type and precision).

Overlapping features will not be identified nor handled in the merging. If you have significant areas of overlap, it is advisable to use one of the vector overlay tools instead.

The difference between merge_vectors and the Append tool is that merging takes two or more files and creates one new file containing the features of all inputs, and Append places the features of a single vector into another existing (appended) vector.

This tool only operates on vector files. Use the mosaic tool to combine raster data.

See Also

Append, mosaic

Function Signature

def merge_vectors(self, input_vectors: List[Vector]) -> Vector: ...
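
For example (usual WbEnvironment setup assumed; the tile file names are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
tiles = [wbe.read_vector('roads_tile1.shp'), wbe.read_vector('roads_tile2.shp'), wbe.read_vector('roads_tile3.shp')]
merged = wbe.merge_vectors(tiles)
wbe.write_vector(merged, 'roads_merged.shp')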

min_absolute_overlay

This tool can be used to find the minimum absolute (non-negative) value in each cell of a grid from a set of input images (inputs). NoData values in any of the input images will result in a NoData pixel in the output image.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

min_overlay, max_absolute_overlay, max_overlay

Function Signature

def min_absolute_overlay(self, input_rasters: List[Raster]) -> Raster: ...

min_downslope_elev_change

This tool calculates the minimum elevation drop between each grid cell and its neighbouring cells within a digital elevation model (DEM). The user must input a DEM (dem).

See Also

max_downslope_elev_change, num_downslope_neighbours

Function Signature

def min_downslope_elev_change(self, raster: Raster) -> Raster: ...

min_max_contrast_stretch

This tool performs a minimum-maximum contrast stretch on a raster image. Input values are rescaled linearly between a user-specified lower value (min_val) and upper value (max_val), which are mapped onto a user-specified number of output tones (num_tones).

This tool is related to the more general histogram_matching tool, which can be used to fit any frequency distribution to an input image, and to other contrast enhancement tools such as gaussian_contrast_stretch, histogram_equalization, percentage_contrast_stretch, sigmoidal_contrast_stretch, and standard_deviation_contrast_stretch.

See Also

piecewise_contrast_stretch, gaussian_contrast_stretch, histogram_equalization, percentage_contrast_stretch, sigmoidal_contrast_stretch, standard_deviation_contrast_stretch, histogram_matching

Function Signature

def min_max_contrast_stretch(self, raster: Raster, min_val: float, max_val: float, num_tones: int = 256) -> Raster: ...
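
A minimal sketch (WbEnvironment setup assumed; the clip values and file names are placeholders chosen for illustration):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
band = wbe.read_raster('band1.tif')
stretched = wbe.min_max_contrast_stretch(band, min_val=50.0, max_val=950.0, num_tones=256)
wbe.write_raster(stretched, 'band1_stretched.tif')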

min_overlay

This tool can be used to find the minimum value in each cell of a grid from a set of input images (inputs). NoData values in any of the input images will result in a NoData pixel in the output image (output). It is similar to the Min mathematical tool, except that it will accept more than two input images.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

max_overlay, max_absolute_overlay, min_absolute_overlay, Min

Function Signature

def min_overlay(self, input_rasters: List[Raster]) -> Raster: ...

minimal_curvature

This tool calculates the minimal curvature from a digital elevation model (DEM). Minimal curvature is the curvature of a principal section with the lowest value of curvature at a given point of the topographic surface (Florinsky, 2017). The values of this curvature are unbounded, and positive values correspond to hills while negative values are indicative of valley positions (Florinsky, 2016). Minimal curvature is measured in units of m^-1.

The user must input a DEM (dem). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z Conversion Factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

maximal_curvature, tangential_curvature, profile_curvature, plan_curvature, mean_curvature, gaussian_curvature

Function Signature

def minimal_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

minimum_bounding_box

This tool delineates the minimum bounding box (MBB) for a group of vector features. The MBB is the smallest box to completely enclose a feature. The algorithm works by rotating the feature, calculating the axis-aligned bounding box for each rotation, and finding the box with the smallest area, length, width, or perimeter. The MBB is needed to compute several shape indices, such as the Elongation Ratio. The minimum_bounding_envelope tool can be used to calculate the axis-aligned bounding rectangle around each feature in a vector file.

See Also

minimum_bounding_circle, minimum_bounding_envelope, minimum_convex_hull

Function Signature

def minimum_bounding_box(self, input: Vector, min_criteria: str = "area", individual_feature_hulls: bool = True) -> Vector: ...
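
For example (usual WbEnvironment setup assumed; file names are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
buildings = wbe.read_vector('buildings.shp')
mbb = wbe.minimum_bounding_box(buildings, min_criteria='area', individual_feature_hulls=True)
wbe.write_vector(mbb, 'buildings_mbb.shp')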

minimum_bounding_circle

This tool delineates the minimum bounding circle (MBC) for a group of vector features. The MBC is the smallest circle that completely encloses a feature.

See Also

minimum_bounding_box, minimum_bounding_envelope, minimum_convex_hull

Function Signature

def minimum_bounding_circle(self, input: Vector, individual_feature_hulls: bool = True) -> Vector: ...

minimum_bounding_envelope

This tool delineates the minimum bounding axis-aligned box for a group of vector features. This is the smallest rectangle to completely enclose a feature, in which the sides of the envelope are aligned with the x and y axes of the coordinate system. The minimum_bounding_box tool can be used instead to find the smallest possible non-axis-aligned rectangular envelope.

See Also

minimum_bounding_box, minimum_bounding_circle, minimum_convex_hull

Function Signature

def minimum_bounding_envelope(self, input: Vector, individual_feature_hulls: bool = True) -> Vector: ...

minimum_convex_hull

This tool creates a vector convex polygon around vector features. The convex hull is the convex closure of a set of points or polygon vertices and may be conceptualized as the shape enclosed by a rubber band stretched around the point set. The convex hull has many applications and is most notably used in various shape indices. The Delaunay triangulation of a point set and its dual, the Voronoi diagram, are mathematically related to convex hulls.

See Also

minimum_bounding_box, minimum_bounding_circle, minimum_bounding_envelope

Function Signature

def minimum_convex_hull(self, input: Vector, individual_feature_hulls: bool = True) -> Vector: ...

minimum_filter

This tool assigns each cell in the output grid the minimum value in a moving window centred on each grid cell in the input raster (input). A minimum filter is the equivalent of the mathematical morphology erosion operator.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filterx and filtery flags. These dimensions should be odd, positive integer values, e.g. 3, 5, 7, 9... If the kernel filter size is the same in the x and y dimensions, the silent filter flag may be used instead (command-line interface only).

This tool takes advantage of the redundancy between overlapping, neighbouring filters to enhance computational efficiency. Like most of WhiteboxTools' filters, it is also parallelized for further efficiency.

See Also

maximum_filter

Function Signature

def minimum_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

modified_k_means_clustering

This modified k-means algorithm is similar to that described by Mather and Koch (2011). The main difference between the traditional k-means and this technique is that the user does not need to specify the desired number of classes/clusters prior to running the tool. Instead, the algorithm initializes with a very liberal overestimate of the number of classes and then merges classes that have cluster centres that are separated by less than a user-defined threshold. The main difference between this algorithm and the ISODATA technique is that clusters can not be broken apart into two smaller clusters.

Reference

Mather, P. M., & Koch, M. (2011). Computer processing of remotely-sensed images: an introduction. John Wiley & Sons.

See Also

k_means_clustering

Function Signature

def modified_k_means_clustering(self, input_rasters: List[Raster], output_html_file: str = "", num_start_clusters: int = 1000, merge_distance: float = 1.0, max_iterations: int = 10, percent_changed_threshold: float = 2.0) -> Raster: ...
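
A minimal classification sketch (WbEnvironment setup assumed; the band file names and parameter values are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
bands = [wbe.read_raster(f'band{i}.tif') for i in range(1, 5)]
clusters = wbe.modified_k_means_clustering(bands, output_html_file='cluster_report.html', num_start_clusters=50, merge_distance=10.0)
wbe.write_raster(clusters, 'clusters.tif')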

modified_shepard_interpolation

This tool interpolates vector points into a raster surface using a radial basis function (RBF) scheme.

Function Signature

def radial_basis_function_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, radius: float = 0.0, min_points: int = 0, cell_size: float = 0.0, base_raster: Raster = None, func_type: str = "thinplatespline", poly_order: str = "none", weight: float = 0.1) -> Raster: ...

modify_nodata_value

This tool can be used to modify the value of pixels containing the NoData value for an input raster image. This operation differs from the set_nodata_value tool, which sets the NoData value for an image in the image header without actually modifying pixel values. Also, set_nodata_value does not overwrite the input file, while the modify_nodata_value tool does. This tool cannot modify the input image data type, which is important to note since it may cause an unexpected behaviour if the new NoData value is negative and the input image data type is an unsigned integer type.

See Also

set_nodata_value, convert_nodata_to_zero

Function Signature

def modify_nodata_value(self, input: Raster, new_value: float = -32768.0): ...

mosaic

This tool will create an image mosaic from one or more input image files using one of three resampling methods: nearest neighbour, bilinear interpolation, and cubic convolution. The order of the input source image files is important. Grid cells in the output image will be assigned the corresponding value determined from the last image found in the list to possess an overlapping coordinate.

Note that when the inputs parameter is left unspecified, the tool will use all of the .tif, .tiff, .rdc, .flt, .sdat, and .dep files located in the working directory. This can be a useful way of mosaicing a large number of tiles, particularly when the text string that would be required to specify all of the input tiles is longer than the allowable limit.

This is the preferred mosaicing tool to use when appending multiple images with little to no overlapping areas, e.g. tiled data. When images have significant overlap areas, users are advised to use the mosaic_with_feathering tool instead.

The resample tool is very similar in operation to the mosaic tool. The resample tool should be used when there is an existing image into which you would like to dump information from one or more source images. If the source images are more extensive than the destination image, i.e. there are areas that extend beyond the destination image boundaries, these areas will not be represented in the updated image. Grid cells in the destination image that are not overlapping with any of the input source images will not be updated, i.e. they will possess the same value as before the resampling operation. The mosaic tool is used when there is no existing destination image. In this case, a new image is created that represents the bounding rectangle of each of the two or more input images. Grid cells in the output image that do not overlap with any of the input images will be assigned the NoData value.

See Also

mosaic_with_feathering

Function Signature

def mosaic(self, images: List[Raster], resampling_method: str = "cc") -> Raster: ...
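
For example (WbEnvironment setup assumed; the tile file names are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
tiles = [wbe.read_raster('ortho_tile_a.tif'), wbe.read_raster('ortho_tile_b.tif')]
mosaicked = wbe.mosaic(tiles, resampling_method='cc')
wbe.write_raster(mosaicked, 'ortho_mosaic.tif')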

mosaic_with_feathering

This tool will create a mosaic from two input images. It is similar in operation to the mosaic tool, however, this tool is the preferred method of mosaicing images when there is significant overlap between the images. For areas of overlap, the feathering method will calculate the output value as a weighted combination of the two input values, where the weights are derived from the squared distance of the pixel to the edge of the data in each of the input raster files. Therefore, less weight is assigned to an image's pixel value where the pixel is very near the edge of the image. Note that the distance is actually calculated to the edge of the grid and not necessarily the edge of the data, which can differ if the image has been rotated during registration. The result of this feathering method is that the output mosaic image should have very little evidence of the original image edges within the overlapping area.

Unlike the mosaic tool, which can take multiple input images, this tool only accepts two input images. The mosaic tool is therefore preferable when there are many adjacent or only slightly overlapping images, e.g. for tiled data sets.

Users may want to use the histogram_matching tool prior to mosaicing if the two input images differ significantly in their radiometric properties, i.e. if image contrast differences exist.

See Also

mosaic, histogram_matching

Function Signature

def mosaic_with_feathering(self, image1: Raster, image2: Raster, resampling_method: str = "cc", distance_weight: float = 4.0) -> Raster: ...

multidirectional_hillshade

This tool performs a hillshade operation (also called shaded relief) on an input digital elevation model (DEM) with multiple sources of illumination. The user must input a DEM (dem). Other parameters that must be specified include the altitude of the illumination sources (altitude; i.e. the elevation of the sun above the horizon, measured as an angle from 0 to 90 degrees) and the Z conversion factor (zfactor). The Z conversion factor is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor.

The hillshade value (HS) of a DEM grid cell is calculated as:

HS = tan(s) / [1 - tan(s)^2]^0.5 × [sin(Alt) / tan(s) - cos(Alt) × sin(Az - a)]

where s and a are the local slope gradient and aspect (orientation) respectively and Alt and Az are the illumination source altitude and azimuth respectively. Slope and aspect are calculated using Horn's (1981) 3rd-order finite difference method.

Lastly, the user must specify whether or not to use a full 360 degrees of illumination sources (full_mode). When this flag is not specified, the tool will perform a weighted summation of the hillshade images from four illumination azimuth positions at 225, 270, 315, and 360 (0) degrees, given weights of 0.1, 0.4, 0.4, and 0.1 respectively. When run in the full 360-degree mode, eight illumination source azimuths are used to calculate the output at 0, 45, 90, 135, 180, 225, 270, and 315 degrees, with weights of 0.15, 0.125, 0.1, 0.05, 0.1, 0.125, 0.15, and 0.2 respectively.

Example outputs include a classic hillshade (azimuth = 315, altitude = 45.0), a multi-directional hillshade in four-direction mode (altitude = 45.0), and a multi-directional hillshade in 360-degree mode (altitude = 45.0).

See Also

hillshade, hypsometrically_tinted_hillshade, aspect, slope

Function Signature

def multidirectional_hillshade(self, dem: Raster, altitude: float = 30.0, z_factor: float = 1.0, full_360_mode: bool = False) -> Raster: ...
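
A minimal sketch (WbEnvironment setup assumed; file names are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')
hs = wbe.multidirectional_hillshade(dem, altitude=45.0, full_360_mode=True)
wbe.write_raster(hs, 'hillshade_multidir.tif')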

multipart_to_singlepart

This tool can be used to convert a vector file containing multi-part features into a vector containing only single-part features. Any multi-part polygons or lines within the input vector file will be split into separate features in the output file, each possessing its own entry in the associated attribute file. For polygon-type vectors, the user may optionally choose to exclude hole-parts from being separated from their containing polygons. That is, with the exclude_holes parameter, hole parts in the input vector will continue to belong to their enclosing polygon in the output vector. The tool will also convert MultiPoint Shapefiles into single Point vectors.

See Also

single_part_to_multipart

Function Signature

def multipart_to_singlepart(self, input: Vector, exclude_holes: bool = False) -> Vector: ...

multiply_overlay

This tool multiplies a stack of raster images (inputs) on a pixel-by-pixel basis. This tool is particularly well suited when you need to create a masking layer from the combination of several Boolean rasters, i.e. for constraint mapping applications. NoData values in any of the input images will result in a NoData pixel in the output image (output).

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

sum_overlay, weighted_sum

Function Signature

def multiply_overlay(self, input_rasters: List[Raster]) -> Raster: ...
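
For example, a constraint-mapping sketch (WbEnvironment setup assumed; the Boolean constraint rasters are hypothetical inputs prepared beforehand):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
constraints = [wbe.read_raster('gentle_slope.tif'), wbe.read_raster('outside_floodplain.tif'), wbe.read_raster('near_roads.tif')]
suitable = wbe.multiply_overlay(constraints)  # 1 only where every constraint raster is 1
wbe.write_raster(suitable, 'suitability_mask.tif')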

multiscale_elevation_percentile

This tool calculates the most extreme elevation percentile (EP) across a range of spatial scales. EP is a measure of local topographic position (LTP) and expresses the vertical position for a digital elevation model (DEM) grid cell (z0) as the percentile of the elevation distribution within the filter window, such that:

EP = count{i ∈ C}(zi > z0) × (100 / nC)

where z0 is the elevation of the window's center grid cell, zi is the elevation of cell i contained within the neighboring set C, and nC is the number of grid cells contained within the window.

EP is unsigned and expressed as a percentage, bound between 0% and 100%. This tool outputs two rasters, the multiscale EP magnitude (out_mag) and the scale at which the most extreme EP value occurs (out_scale). The magnitude raster is the most extreme EP value (i.e. the furthest from 50%) for each grid cell encountered within the tested scales of EP.

Quantile-based estimates (e.g., the median and interquartile range) are often used in nonparametric statistics to provide data variability estimates without assuming the distribution is normal. Thus, EP is largely unaffected by irregularly shaped elevation frequency distributions or by outliers in the DEM, resulting in a highly robust metric of LTP. In fact, elevation distributions within small to medium sized neighborhoods often exhibit skewed, multimodal, and non-Gaussian distributions, where the occurrence of elevation errors can often result in distribution outliers. Thus, based on these statistical characteristics, EP is considered one of the most robust representation of LTP.

The algorithm implemented by this tool uses the relatively efficient running-histogram filtering algorithm of Huang et al. (1979). Because most DEMs contain floating point data, elevation values must be rounded to be binned. The sig_digits parameter is used to determine the level of precision preserved during this binning process. The algorithm is parallelized to further aid with computational efficiency.

Experience with multiscale EP has shown that it is highly variable at shorter scales and changes more gradually at broader scales. Therefore, a nonlinear scale sampling interval is used by this tool to ensure that the scale sampling density is higher for short scale ranges and coarser at longer tested scales, such that:

ri = rL + [step × (i - rL)]^p

Where ri is the filter radius for step i, rL is the lower range of tested filter radii (min_scale), p is the nonlinear scaling factor (step_nonlinearity), and step is the scale increment (step_size).

References

Newman, D. R., Lindsay, J. B., and Cockburn, J. M. H. (2018). Evaluating metrics of local topographic position for multiscale geomorphometric analysis. Geomorphology, 312, 40-50.

Huang, T., Yang, G.J.T.G.Y. and Tang, G., 1979. A fast two-dimensional median filtering algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing, 27(1), pp.13-18.

See Also

elevation_percentile, max_elevation_deviation, max_difference_from_mean

Function Signature

def multiscale_elevation_percentile(self, dem: Raster, num_significant_digits: int = 3, min_scale: int = 4, step_size: int = 1, num_steps: int = 10, step_nonlinearity: float = 1.0) -> Tuple[Raster, Raster]: ...
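
A minimal sketch showing the two-raster return value (WbEnvironment setup assumed; file names and scale parameters are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')
ep_mag, ep_scale = wbe.multiscale_elevation_percentile(dem, min_scale=4, step_size=1, num_steps=20, step_nonlinearity=1.5)
wbe.write_raster(ep_mag, 'ep_magnitude.tif')
wbe.write_raster(ep_scale, 'ep_scale.tif')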

multiscale_roughness

This tool calculates surface roughness, i.e. topographic complexity, for each grid cell in an input DEM (dem) over a range of tested spatial scales, defined by the minimum scale (min_scale), maximum scale (max_scale), and step size (step_size). Two rasters are returned: the roughness magnitude and the scale at which the maximum roughness value occurs.

Function Signature

def multiscale_roughness(self, dem: Raster, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> Tuple[Raster, Raster]: ...

multiscale_roughness_signature

This tool calculates the surface roughness scale signature, i.e. roughness as a function of tested spatial scale, for a set of vector points (points) overlaid on an input DEM (dem). The results are written to an output HTML report (output_html_file).

Function Signature

def multiscale_roughness_signature(self, dem: Raster, points: Vector, output_html_file: str, min_scale: int = 1, max_scale: int = 100, step_size: int = 1) -> None: ...

multiscale_std_dev_normals

This tool can be used to map the spatial pattern of maximum spherical standard deviation (σs max; out_mag), as well as the scale at which maximum spherical standard deviation occurs (rmax; out_scale), for each grid cell in an input DEM (dem). This serves as a multi-scale measure of surface roughness, or topographic complexity. The spherical standard deviation (σs) is a measure of the angular spread among n unit vectors and is defined as:

σs = √[-2ln(R / N)] × 180 / π

Where R is the resultant vector length and is derived from the sum of the x, y, and z components of each of the n normals contained within a filter kernel, which designates a tested spatial scale. Each unit vector is a 3-dimensional measure of the surface orientation and slope at each grid cell center. The maximum spherical standard deviation is:

σs max = max{σs(r): r = rL...rU}

Experience with roughness scale signatures has shown that σs max is highly variable at shorter scales and changes more gradually at broader scales. Therefore, a nonlinear scale sampling interval is used by this tool to ensure that the scale sampling density is higher for short scale ranges and coarser at longer tested scales, such that:

ri = rL + [step × (i - rL)]^p

Where ri is the filter radius for step i, rL is the lower range of tested filter radii (min_scale), p is the nonlinear scaling factor (step_nonlinearity), and step is the scale increment (step_size).

Use the spherical_std_dev_of_normals tool if you need to calculate σs for a single scale.

Reference

Lindsay, J.B., Newman, D.R., and Francioni, A. (2019). Scale-optimized surface roughness for topographic analysis. Geosciences, 9(322). doi: 10.3390/geosciences9070322.

See Also

spherical_std_dev_of_normals, multiscale_std_dev_normals_signature, multiscale_roughness

Function Signature

def multiscale_std_dev_normals(self, dem: Raster, min_scale: int = 4, step_size: int = 1, num_steps: int = 10, step_nonlinearity: float = 1.0, html_signature_file: str = "") -> Tuple[Raster, Raster]: ...

multiscale_std_dev_normals_signature

This tool calculates the spherical standard deviation of surface normals (see multiscale_std_dev_normals), a measure of surface roughness, as a function of tested spatial scale for a set of vector points (points) overlaid on an input DEM (dem). The results are written to an output HTML report (output_html_file).

Function Signature

def multiscale_std_dev_normals_signature(self, dem: Raster, points: Vector, output_html_file: str, min_scale: int = 4, step_size: int = 1, num_steps: int = 10, step_nonlinearity: float = 1.0) -> None: ...

multiscale_topographic_position_image

This tool creates a multiscale topographic position (MTP) image from three DEVmax rasters of differing spatial scale ranges. Specifically, multiscale_topographic_position_image takes three DEVmax magnitude rasters, created using the max_elevation_deviation tool, as inputs. The three inputs should correspond to the elevation deviations in the local (local), meso (meso), and broad (broad) scale ranges and will be forced into the blue, green, and red colour components of the colour composite output (output) raster. The image lightness value (lightness) controls the overall brightness of the output image, as depending on the topography and scale ranges, these images can appear relatively dark. Higher values result in brighter, more colourful output images.

The user may optionally specify an input hillshade raster. When specified, the hillshade will be used to provide a shaded-relief overlaid on top of the coloured multi-scale information, providing a very effective visualization. Any hillshade image may be used for this purpose, but we have found that multi-directional hillshade (multidirectional_hillshade), and specifically those derived using the 360-degree option, can be most effective for this application. However, experimentation is likely needed to find the optimal for each unique data set.

The output images can take some training to interpret correctly and a detailed explanation can be found in Lindsay et al. (2015). Sites within the landscape that occupy prominent topographic positions, either low-lying or elevated, will be apparent by their bright colouring in the MTP image. Those that are coloured more strongly in the blue are prominent at the local scale range; locations that are more strongly green coloured are prominent at the meso scale; and bright reds in the MTP image are associated with broad-scale landscape prominence. Of course, combination colours are also possible when topography is elevated or low-lying across multiple scale ranges. For example, a yellow area would indicate a site of prominent topographic position across the meso and broadest scale ranges.

Reference

Lindsay J, Cockburn J, Russell H. 2015. An integral image approach to performing multi-scale topographic position analysis. Geomorphology, 245: 51-61.

See Also

max_elevation_deviation

Function Signature

def multiscale_topographic_position_image(self, local: Raster, meso: Raster, broad: Raster, lightness: float = 1.2) -> Raster: ...

narrowness_index

This tool calculates a type of shape narrowness index (NI) for raster objects. The index is equal to:

NI = A / (π × MD^2)

where A is the patch area and MD is the maximum distance-to-edge of the patch. Circular-shaped patches will have a narrowness index near 1.0, while more narrow patch shapes will have higher index values. The index may be conceptualized as the ratio of the patch area to the area of the largest contained circle, although in practice the circle defined by the radius of the maximum distance-to-edge will often fall outside the patch boundaries.

Objects in the input raster (input) are designated by their unique identifiers. Identifier values must be positive, non-zero whole numbers. It is quite common for identifiers to be set using the clump tool applied to some kind of thresholded raster.

See Also

linearity_index, elongation_ratio, clump

Function Signature

def narrowness_index(self, raster: Raster) -> Raster: ...

natural_neighbour_interpolation

This tool can be used to interpolate a set of input vector points (input) onto a raster grid using Sibson's (1981) natural neighbour method. Similar to inverse-distance-weight interpolation (idw_interpolation), the natural neighbour method performs a weighted averaging of nearby point values to estimate the attribute (field) value at grid cell intersections in the output raster (output). However, the two methods differ quite significantly in the way that neighbours are identified and in the weighting scheme. First, the natural neighbour method identifies neighbours to be used in the interpolation of a point by finding the points connected to the estimated value location in a Delaunay triangulation, that is, the so-called natural neighbours. This approach has the main advantage of not having to specify an arbitrary search distance or minimum number of nearest neighbours like many other interpolators do. Weights in the natural neighbour scheme are determined using an area-stealing approach, whereby the weight assigned to a neighbour's value is determined by the proportion of its Voronoi polygon that would be lost by inserting the interpolation point into the Voronoi diagram. That is, inserting the interpolation point into the Voronoi diagram results in the creation of a new polygon and shrinking the sizes of the Voronoi polygons associated with each of the natural neighbours. The larger the area by which a neighbour's polygon is reduced through the insertion, relative to the polygon of the interpolation point, the greater the weight given to the neighbour point's value in the interpolation. Interpolation weights sum to one because the sum of the reduced polygon areas must account for the entire area of the interpolation point's polygon.

The user must specify the attribute field containing point values (field). Alternatively, if the input Shapefile contains z-values, the interpolation may be based on these values (use_z). Either an output grid resolution (cell_size) must be specified or alternatively an existing base file (base) can be used to determine the output raster's (output) resolution and spatial extent. Natural neighbour interpolation generally produces a satisfactorily smooth surface within the region of data points but can produce spurious breaks in the surface outside of this region. Thus, it is recommended that the output surface be clipped to the convex hull of the input points (clip).

Reference

Sibson, R. (1981). "A brief description of natural neighbor interpolation (Chapter 2)". In V. Barnett (ed.). Interpolating Multivariate Data. Chichester: John Wiley. pp. 21–36.

See Also

idw_interpolation, nearest_neighbour_interpolation

Function Signature

def natural_neighbour_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Raster = None, clip_to_hull: bool = True) -> Raster: ...
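
A minimal sketch (WbEnvironment setup assumed; the attribute field and file names are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
points = wbe.read_vector('elevation_points.shp')
surface = wbe.natural_neighbour_interpolation(points, field_name='ELEV', cell_size=10.0, clip_to_hull=True)
wbe.write_raster(surface, 'nn_surface.tif')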

nearest_neighbour_interpolation

Creates a raster grid based on a set of vector points and assigns grid values using the nearest neighbour.

Function Signature

def nearest_neighbour_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Raster = None, max_dist: float = float('inf')) -> Raster: ...

new_lidar

Creates a new Lidar object using an input LidarHeader.

Parameters

  • header: LidarHeader - a Lidar header object.

new_raster

Creates a new in-memory Raster object based on a RasterConfigs object.

Parameters

  • configs: RasterConfigs - An in-memory raster configs object. This can be copied from an existing file, or created manually.

new_raster_from_base_raster

This tool can be used to create a new raster with the same coordinates and dimensions (i.e. rows and columns) as an existing base image. The user must input a base file (base), the value that the new grid will be filled with (out_val; NoData if unspecified), and the data type (data_type flag; options include 'double', 'float', and 'integer').

See Also

new_raster_from_base_vector, raster_cell_assignment

Function Signature

def new_raster_from_base_raster(self, base: Raster, out_val: float = float('nan'), data_type: str = "float") -> Raster: ...
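
For example, the following sketch (WbEnvironment setup assumed; file names are placeholders) creates a zero-filled raster with the same extent and resolution as an existing DEM:

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
base = wbe.read_raster('dem.tif')
zeros = wbe.new_raster_from_base_raster(base, out_val=0.0, data_type='float')
wbe.write_raster(zeros, 'zeros.tif')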

new_raster_from_base_vector

This tool can be used to create a new raster with the same spatial extent as an input vector file (base). The user must specify the name of the base file, the value that the new grid will be filled with (out_val; NoData if unspecified), and the data type (data_type flag; options include 'double', 'float', and 'integer'). It is also necessary to specify a value for the optional grid cell size (cell_size) input parameter.

See Also

new_raster_from_base_raster, raster_cell_assignment

Function Signature

def new_raster_from_base_vector(self, base: Vector, cell_size: float, out_val: float = float('nan'), data_type: str = "float") -> Raster: ...

new_vector

Creates a new in-memory Vector object.

Parameters

  • vector_type: VectorGeometryType - Determines what type of vector data this object can hold. Backed by the Shapefile, the Vector is limited to a single VectorGeometryType.
  • attributes: List[AttributeField] - A list containing the attributes held within the attribute table. Default is None.
  • proj: str - The projection string to be written to the associated *.prj file when written to disc. Default is the empty string.

normal_vectors

Calculates normal vectors for points within a LAS file and stores these data (XYZ vector components) in the RGB field.

Function Signature

def normal_vectors(self, input: Lidar, search_radius: float = -1.0) -> Lidar: ...

normalize_lidar

This tool can be used to normalize a LiDAR point cloud. A normalized point cloud is one for which the point z-values represent height above the ground surface rather than raw elevation values. Thus, a point that falls on the ground surface will have a z-value of zero and vegetation points, and points associated with other off-terrain objects, have positive, non-zero z-values. Point cloud normalization is an essential pre-processing method for many forms of LiDAR data analysis, including the characterization of many forestry related metrics and individual tree mapping (individual_tree_detection).

This tool works by measuring the elevation difference between each point in an input LiDAR file (input) and the elevation of an input raster digital terrain model (dtm). A DTM is a bare-earth digital elevation model. Typically, the input DTM is created from the same input LiDAR data by interpolating the ground surface using only ground-classified points. If the LiDAR point cloud does not contain ground-point classifications, you may wish to apply the lidar_ground_point_filter or classify_lidar tools before interpolating the DTM. While ground-point classification works well to identify the ground surface beneath vegetation cover, building points are sometimes left in the ground-classified set, so it may also be necessary to remove other off-terrain objects like buildings. The remove_off_terrain_objects tool can be useful for this purpose, creating a final bare-earth DTM. This tool outputs a normalized LiDAR point cloud (output). If the no_negatives parameter is True, any points that fall beneath the surface elevation defined by the DTM will have their z-value set to zero.

Note that the lidar_tophat_transform tool similarly can be used to produce a type of normalized point cloud, although it does not require an input raster DTM. Rather, it attempts to model the ground surface within the point cloud by identifying the lowest points within local neighbourhoods surrounding each point in the cloud. While this approach can produce satisfactory results in some cases, the normalize_lidar tool likely works better under more rugged topography and in areas with extensive building coverage, and provides greater control over the definition of the ground surface.

See Also

lidar_tophat_transform, individual_tree_detection, lidar_ground_point_filter, classify_lidar

Function Signature

def normalize_lidar(self, input_lidar: Lidar, dtm: Raster) -> Lidar: ...
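
A minimal normalization sketch (WbEnvironment setup assumed; file names are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('points.las')
dtm = wbe.read_raster('dtm.tif')
normalized = wbe.normalize_lidar(lidar, dtm)
wbe.write_lidar(normalized, 'points_normalized.las')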

normalized_difference_index

This tool can be used to calculate a normalized difference index (NDI) from two bands of multispectral image data. A NDI of two band images (image1 and image2) takes the general form:

NDI = (image1 - image2) / (image1 + image2 + c)

Where c is a correction factor sometimes used to avoid division by zero. It is, however, often set to 0.0. In fact, the normalized_difference_index tool will set all pixels where image1 + image2 = 0 to 0.0 in the output image. While this is not strictly mathematically correct (0 / 0 is undefined), it is often the intended output in these cases.

NDIs generally take values in the range -1.0 to 1.0, although in practice the range of values for a particular image scene may be more restricted than this.

NDIs have two important properties that make them particularly useful for remote sensing applications. First, they emphasize certain aspects of the shape of the spectral signatures of different land covers. Secondly, they can be used to de-emphasize the effects of variable illumination within a scene. NDIs are therefore frequently used in the field of remote sensing to create vegetation indices and other indices for emphasizing various land-covers and as inputs to analytical operations like image classification. For example, the normalized difference vegetation index (NDVI), one of the most common image-derived products in remote sensing, is calculated as:

NDVI = (NIR - RED) / (NIR + RED)

The optimal soil adjusted vegetation index (OSAVI) is:

OSAVI = (NIR - RED) / (NIR + RED + 0.16)

The normalized difference water index (NDWI), or normalized difference moisture index (NDMI), is:

NDWI = (NIR - SWIR) / (NIR + SWIR)

The normalized burn ratio 1 (NBR1) and normalized burn ratio 2 (NBR2) are:

NBR1 = (NIR - SWIR2) / (NIR + SWIR2)

NBR2 = (SWIR1 - SWIR2) / (SWIR1 + SWIR2)

In addition to NDIs, simple ratios of image bands are also commonly used as inputs to other remote sensing applications, such as image classification. Simple ratios can be calculated using the Divide tool. Division by zero, in this case, will result in an output NoData value.

See Also

Divide

Function Signature

def normalized_difference_index(self, nir_image: Raster, red_image: Raster, clip_percent: float = 0.0, correction_value: float = 0.0) -> Raster: ...
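
For example, an NDVI sketch (WbEnvironment setup assumed; the band file names are placeholders):

import whitebox_workflows
wbe = whitebox_workflows.WbEnvironment()
nir = wbe.read_raster('band_nir.tif')
red = wbe.read_raster('band_red.tif')
ndvi = wbe.normalized_difference_index(nir, red)
wbe.write_raster(ndvi, 'ndvi.tif')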

num_downslope_neighbours

This tool calculates the number of downslope neighbours of each grid cell in a raster digital elevation model (DEM). The user must input a DEM (dem). The tool examines the eight neighbouring cells for each grid cell in the DEM and counts the number of neighbours with an elevation less than the centre cell of the 3 x 3 window. The output image can therefore have values ranging from 0 to 8. A raster grid cell with eight downslope neighbours is a peak and a cell with zero downslope neighbours is a pit. This tool can be used with the num_upslope_neighbours tool to assess the degree of local flow divergence/convergence.

See Also

num_upslope_neighbours

Function Signature

def num_downslope_neighbours(self, dem: Raster) -> Raster: ...

num_inflowing_neighbours

This tool calculates the number of inflowing neighbours for each grid cell in a raster file. The user must specify the names of an input digital elevation model (DEM) file (dem) and the output raster file (output). The tool calculates the D8 pointer file internally in order to identify inflowing neighbouring cells.

Grid cells in the input DEM that contain the NoData value will be assigned the NoData value in the output image. The output image is of the integer data type and continuous data scale.

See Also

num_downslope_neighbours, num_upslope_neighbours

Function Signature

def num_inflowing_neighbours(self, dem: Raster) -> Raster: ...

olympic_filter

This filter is a modification of the mean_filter, whereby the highest and lowest values in the kernel are dropped, and the remaining values are averaged to replace the central pixel. The result is a low-pass smoothing filter that is more robust than the mean_filter, which is more strongly impacted by the presence of outlier values. It is named after a system of scoring Olympic events.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filterx and filtery flags. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

mean_filter

Function Signature

def olympic_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

opening

This tool performs an opening operation on an input greyscale image (input). An opening is a mathematical morphology operation involving a dilation (maximum filter) applied to an erosion (minimum filter) set. Opening operations, together with the closing operation, are frequently used in the fields of computer vision and digital image processing for image noise removal. The user must specify the size of the moving window in both the x and y directions (filterx and filtery).

See Also

closing, tophat_transform

Function Signature

def opening(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

otsu_thresholding

This tool uses Otsu's method for optimal automatic binary thresholding, transforming an input image (input) into background and foreground pixels (output). Otsu's method uses the grayscale image histogram to detect an optimal threshold value that separates two regions with maximum inter-class variance. The process begins by calculating the image histogram of the input.

References

Otsu, N., 1979. A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics, 9(1), pp.62-66.

See Also

image_segmentation

Function Signature

def otsu_thresholding(self, raster: Raster) -> Raster: ...

paired_sample_t_test

This tool will perform a paired-sample t-test to evaluate whether a significant statistical difference exists between the two rasters. The null hypothesis is that the difference between the paired population means is equal to zero. The paired-samples t-test makes an assumption that the differences between related samples follows a Gaussian distribution. The tool will output a cumulative probability distribution, with a fitted Gaussian, to help users evaluate whether this assumption is violated by the data. If this is the case, the wilcoxon_signed_rank_test should be used instead.

The user must specify the names of the two input raster images (input1 and input2) and the output report HTML file (output). The test can be performed optionally on the entire image or on a random sub-sample of pixel values of a user-specified size (num_samples). In evaluating the significance of the test, it is important to keep in mind that given a sufficiently large sample, extremely small and non-notable differences can be found to be statistically significant. Furthermore, statistical significance says nothing about the practical significance of a difference.

See Also

two_sample_ks_test, wilcoxon_signed_rank_test

Function Signature

def paired_sample_t_test(self, raster1: Raster, raster2: Raster, output_html_file: str, num_samples: int) -> None: ...

panchromatic_sharpening

Panchromatic sharpening, or simply pan-sharpening, refers to a range of techniques that can be used to merge finer spatial resolution panchromatic images with coarser spatial resolution multi-spectral images. The multi-spectral data provides colour information while the panchromatic image provides improved spatial information. This procedure is sometimes called image fusion. Jensen (2015) describes panchromatic sharpening in detail.

Whitebox provides two common methods for panchromatic sharpening including the Brovey transformation and the Intensity-Hue-Saturation (IHS) methods. Both of these techniques provide the best results when the range of wavelengths detected by the panchromatic image overlaps significantly with the wavelength range covered by the three multi-spectral bands that are used. When this is not the case, the resulting colour composite will likely have colour properties that are dissimilar to the colour composite generated by the original multispectral images. For Landsat ETM+ data, the panchromatic band is sensitive to EMR in the range of 0.52-0.90 micrometres. This corresponds closely to the green (band 2), red (band 3), and near-infrared (band 4) bands.

Reference

Jensen, J. R. (2015). Introductory Digital Image Processing: A Remote Sensing Perspective.

See Also

create_colour_composite

Function Signature

def panchromatic_sharpening(self, pan: Raster, colour_composite: Raster, red: Raster, green: Raster, blue: Raster, fusion_method: str = "brovey") -> Raster: ...

patch_orientation

This tool calculates the orientation of polygon features based on the slope of a reduced major axis (RMA) regression line. The regression analysis uses the vertices of the exterior hull nodes of a vector polygon. The only required input is the name of the vector polygon file. The orientation values, measured in degrees from north, will be placed in the accompanying attribute table as a new field (ORIENT). The value of the orientation measure for any polygon will depend on how elongated the feature is.

Note that the output values are polygon orientations and not true directions. While directions may take values ranging from 0-360 degrees, orientation is expressed as an angle between 0 and 180 degrees clockwise from north. Lastly, the orientation measure may become unstable when polygons are oriented nearly vertically or horizontally.

See Also

linearity_index, elongation_ratio

Function Signature

def patch_orientation(self, input: Vector) -> Vector: ...

pennock_landform_classification

This tool can be used to perform a simple landform classification based on measures of slope gradient and curvature derived from a user-specified digital elevation model (DEM). The classification scheme is based on the method proposed by Pennock, Zebarth, and DeJong (1987). The scheme divides a landscape into seven element types, including: convergent footslopes (CFS), divergent footslopes (DFS), convergent shoulders (CSH), divergent shoulders (DSH), convergent backslopes (CBS), divergent backslopes (DBS), and level terrain (L). The output raster image will record each of these base element types as:

Element Type    Code
CFS             1
DFS             2
CSH             3
DSH             4
CBS             5
DBS             6
L               7

The definition of each of the elements, based on the original Pennock et al. (1987) paper, is as follows:

PROFILE                  GRADIENT       PLAN              Element
Concave (< -0.10)        High (> 3.0)   Concave (< 0.0)   CFS
Concave (< -0.10)        High (> 3.0)   Convex (> 0.0)    DFS
Convex (> 0.10)          High (> 3.0)   Concave (< 0.0)   CSH
Convex (> 0.10)          High (> 3.0)   Convex (> 0.0)    DSH
Linear (-0.10 to 0.10)   High (> 3.0)   Concave (< 0.0)   CBS
Linear (-0.10 to 0.10)   High (> 3.0)   Convex (> 0.0)    DBS
--                       Low (< 3.0)    --                L

Where PROFILE is profile curvature, GRADIENT is the slope gradient, and PLAN is the plan curvature. Note that these values are likely landscape and data specific and can be adjusted by the user. Landscape classification schemes that are based on terrain attributes are highly sensitive to short-range topographic variability (i.e. roughness) and can benefit from pre-processing the DEM with a smoothing filter to reduce the effect of surface roughness and emphasize the longer-range topographic signal. The feature_preserving_smoothing tool offers excellent performance in smoothing DEMs without removing the sharpness of breaks-in-slope.

Reference

Pennock, D.J., Zebarth, B.J., and DeJong, E. (1987) Landform classification and soil distribution in hummocky terrain, Saskatchewan, Canada. Geoderma, 40: 297-315.

See Also

feature_preserving_smoothing

Function Signature

def pennock_landform_classification(self, dem: Raster, slope_threshold: float = 3.0, prof_curv_threshold: float = 0.1, plan_curv_threshold: float = 0.0, z_factor: float = 1.0) -> Tuple[Raster, str]: ...
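
A minimal usage sketch (file names are placeholders); note that the function returns both the classified raster and a text summary:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # placeholder DEM
landforms, report = wbe.pennock_landform_classification(dem, slope_threshold=3.0)
wbe.write_raster(landforms, 'landform_classes.tif', True)
print(report)  # text summary of the classification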

percent_elev_range

Percent elevation range (PER) is a measure of local topographic position (LTP). It expresses the vertical position for a digital elevation model (DEM) grid cell (z0) as the percentage of the elevation range within the neighbourhood filter window, such that:

PER = z0 / (zmax - zmin) x 100

where z0 is the elevation of the window's center grid cell, zmax is the maximum neighbouring elevation, and zmin is the minimum neighbouring elevation.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

Compared with ElevPercentile and DevFromMeanElev, PER is a less robust measure of LTP that is susceptible to outliers in neighbouring elevations (e.g. the presence of off-terrain objects in the DEM).

References

Newman, D. R., Lindsay, J. B., and Cockburn, J. M. H. (2018). Evaluating metrics of local topographic position for multiscale geomorphometric analysis. Geomorphology, 312, 40-50.

See Also

ElevPercentile, DevFromMeanElev, DiffFromMeanElev, relative_topographic_position

Function Signature

def percent_elev_range(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

percent_equal_to

This tool calculates the percentage of a raster stack (inputs) that have cell values equal to an input comparison raster. The user must specify the name of the value raster (comparison), the names of the raster files contained in the stack, and an output raster file name (output). The tool, working on a cell-by-cell basis, will count the number of rasters within the stack that have the same grid cell value as the corresponding grid cell in the comparison raster. This count is then expressed as a percentage of the number of rasters contained within the stack and output. If any of the rasters within the stack contain the NoData value, the corresponding grid cell in the output raster will be assigned NoData.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

percent_greater_than, percent_less_than

Function Signature

def percent_equal_to(self, input_rasters: List[Raster], comparison: Raster) -> Raster: ...
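
A minimal usage sketch (the file names are placeholders); the same pattern applies, with the obvious substitutions, to percent_greater_than and percent_less_than:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
stack = wbe.read_rasters(['class_2018.tif', 'class_2019.tif', 'class_2020.tif'])  # placeholder stack
comparison = wbe.read_raster('class_2021.tif')
# Percentage of the stack whose cell values equal the comparison raster
pct_equal = wbe.percent_equal_to(stack, comparison)
wbe.write_raster(pct_equal, 'percent_equal.tif', True)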

percent_greater_than

This tool calculates the percentage of a raster stack (inputs) that have cell values greater than an input comparison raster. The user must specify the name of the value raster (comparison), the names of the raster files contained in the stack, and an output raster file name (output). The tool, working on a cell-by-cell basis, will count the number of rasters within the stack with grid cell values greater than the corresponding grid cell in the comparison raster. This count is then expressed as a percentage of the number of rasters contained within the stack and output. If any of the rasters within the stack contain the NoData value, the corresponding grid cell in the output raster will be assigned NoData.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

percent_less_than, percent_equal_to

Function Signature

def percent_greater_than(self, input_rasters: List[Raster], comparison: Raster) -> Raster: ...

percent_less_than

This tool calculates the percentage of a raster stack (inputs) that have cell values less than an input comparison raster. The user must specify the name of the value raster (comparison), the names of the raster files contained in the stack, and an output raster file name (output). The tool, working on a cell-by-cell basis, will count the number of rasters within the stack with grid cell values less than the corresponding grid cell in the comparison raster. This count is then expressed as a percentage of the number of rasters contained within the stack and output. If any of the rasters within the stack contain the NoData value, the corresponding grid cell in the output raster will be assigned NoData.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

percent_greater_than, percent_equal_to

Function Signature

def percent_less_than(self, input_rasters: List[Raster], comparison: Raster) -> Raster: ...

percentage_contrast_stretch

This tool performs a percentage contrast stretch on a raster image. This operation maps each grid cell value in the input raster image (zin) onto a new scale that ranges from a lower-tail clip value (min_val) to the upper-tail clip value (max_val), with the user-specified number of tonal values (num_tones), such that:

zout = ((zin - min_val) / (max_val - min_val)) x num_tones

where zout is the output value. The values of min_val and max_val are determined from the frequency distribution and the user-specified tail clip value (clip). For example, if a value of 1% is specified, the tool will determine the values in the input image for which 1% of the grid cells have a lower value (min_val) and 1% of the grid cells have a higher value (max_val). The user must also specify which tails (upper, lower, or both) to clip (tail).

This is a type of linear contrast stretch with saturation at the tails of the frequency distribution. This is the same kind of stretch that is used to display raster type data on the fly in many GIS software packages, such that the lower and upper tail values are set using the minimum and maximum display values and the number of tonal values is determined by the number of palette entries.

See Also

piecewise_contrast_stretch, gaussian_contrast_stretch, histogram_equalization, min_max_contrast_stretch, sigmoidal_contrast_stretch, standard_deviation_contrast_stretch

Function Signature

def percentage_contrast_stretch(self, raster: Raster, clip: float = 1.0, tail: str = "both", num_tones: int = 256) -> Raster: ...
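
A minimal usage sketch (file names are placeholders):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
image = wbe.read_raster('band1.tif')  # placeholder image
# Clip 2% from both tails of the distribution and stretch to 256 tonal values
stretched = wbe.percentage_contrast_stretch(image, clip=2.0, tail='both', num_tones=256)
wbe.write_raster(stretched, 'stretched.tif', True)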

percentile_filter

This tool calculates the percentile of the center cell in a moving filter window applied to an input image (input). This indicates the value below which a given percentage of the neighbouring values within the filter fall. For example, the 35th percentile is the value below which 35% of the neighbouring values in the filter window may be found. As such, the percentile of a pixel value is indicative of the relative location of the site within the statistical distribution of values contained within a filter window. When applied to input digital elevation models, percentile is a measure of local topographic position, or elevation residual.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values, e.g. 3, 5, 7, 9... If the kernel filter size is the same in the x and y dimensions, the silent filter flag may be used instead (command-line interface only).

This tool takes advantage of the redundancy between overlapping, neighbouring filters to enhance computational efficiency, using a method similar to Huang et al. (1979). This efficient method of calculating percentiles requires rounding of floating-point inputs, and therefore the user must specify the number of significant digits (sig_digits) to be used during the processing. Like most of WhiteboxTools' filters, this tool is also parallelized for further efficiency.

Reference

Huang, T.S., Yang, G.J., and Tang, G.Y., 1979. A fast two-dimensional median filtering algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing, 27(1), pp. 13-18.

See Also

median_filter

Function Signature

def percentile_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, sig_digits: int = 2) -> Raster: ...
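
A minimal usage sketch applied to a DEM as a local topographic position measure (file names are placeholders):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # placeholder DEM
# 21 x 21 cell window; input values are rounded to 2 significant digits internally
ltp = wbe.percentile_filter(dem, filter_size_x=21, filter_size_y=21, sig_digits=2)
wbe.write_raster(ltp, 'percentile_ltp.tif', True)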

perimeter_area_ratio

The perimeter-area ratio is an indicator of polygon shape complexity. Unlike some other shape parameters (e.g. shape complexity index), perimeter-area ratio does not standardize to a simple Euclidean shape. Although widely used for landscape analysis, perimeter-area ratio exhibits the undesirable property of polygon size dependence (Mcgarigal et al. 2002). That is, holding shape constant, an increase in polygon size will cause a decrease in the perimeter-area ratio. The perimeter-area ratio is the inverse of the compactness ratio.

The output data will be displayed as a new field (P_A_RATIO) in the input vector's database file.

Function Signature

def perimeter_area_ratio(self, input: Vector) -> Vector: ...

pick_from_list

This tool outputs the cell value from a raster stack (inputs) specified by a position raster (pos_input). The user must specify the name of the position raster, the names of the raster files contained in the stack (i.e. group of rasters), and an output raster file name (output). The tool, working on a cell-by-cell basis, will assign to each output grid cell the value from the stack image at the position specified by the cell value in the position raster. Importantly, the position raster should use zero-based numbering. That is, the first image in the stack is referenced by the value zero, the second raster by 1, and so on.

At least two input rasters are required to run this tool. Each of the input rasters must share the same number of rows and columns and spatial extent. An error will be issued if this is not the case.

See Also

count_if

Function Signature

def pick_from_list(self, input_rasters: List[Raster], pos_input: Raster) -> Raster: ...
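
A minimal usage sketch (file names are placeholders); remember that the position raster uses zero-based values:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
stack = wbe.read_rasters(['img0.tif', 'img1.tif', 'img2.tif'])  # placeholder stack
positions = wbe.read_raster('positions.tif')  # placeholder; cells contain 0, 1, or 2
picked = wbe.pick_from_list(stack, positions)
wbe.write_raster(picked, 'picked.tif', True)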

plan_curvature

This tool calculates the plan curvature (i.e. contour curvature), or the rate of change in aspect along a contour line, from a digital elevation model (DEM). Curvature is the second derivative of the topographic surface defined by a DEM. Plan curvature characterizes the degree of flow convergence or divergence within the landscape (Gallant and Wilson, 2000). The user must input a DEM (dem). WhiteboxTools reports curvature in degrees multiplied by 100 for easier interpretation. The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z Conversion Factor. If the DEM is in the geographic coordinate system (latitude and longitude), the following equation is used:

zfactor = 1.0 / (111320.0 x cos(mid_lat))

where mid_lat is the latitude of the centre of the raster, in radians.

The algorithm uses the same formula for the calculation of plan curvature as Gallant and Wilson (2000). Plan curvature is negative for diverging flow along ridges and positive for convergent areas, e.g. along valley bottoms.

Reference

Gallant, J. C., and J. P. Wilson, 2000, Primary topographic attributes, in Terrain Analysis: Principles and Applications, edited by J. P. Wilson and J. C. Gallant pp. 51-86, John Wiley, Hoboken, N.J.

See Also

profile_curvature, tangential_curvature, total_curvature, slope, aspect

Function Signature

def plan_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...
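
A minimal usage sketch (file names are placeholders); z_factor is only needed when the vertical and horizontal units differ:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # placeholder DEM in projected coordinates
plan_curv = wbe.plan_curvature(dem, log_transform=False, z_factor=1.0)
wbe.write_raster(plan_curv, 'plan_curvature.tif', True)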

polygon_area

This tool calculates the area of vector polygons, adding the result to the vector's attribute table (AREA field). The area calculation will account for any holes contained within polygons. The vector should be in a projected coordinate system.

To calculate the area of raster polygons, use the raster_area tool instead.

See Also

raster_area

Function Signature

def polygon_area(self, input: Vector) -> Vector: ...

polygon_long_axis

This tool can be used to map the long axis of polygon features. The long axis is the longer of the two primary axes of the minimum bounding box (MBB), i.e. the smallest box to completely enclose a feature. The long axis is drawn for each polygon in the input vector file such that it passes through the centre point of the MBB. The output file is therefore a vector of simple two-point polylines forming a vector field.

Function Signature

def polygon_long_axis(self, input: Vector) -> Vector: ...

polygon_perimeter

This tool calculates the perimeter of vector polygons, adding the result to the vector's attribute table (PERIMETER field). The perimeter calculation will account for any holes contained within polygons. The vector should be in a projected coordinate system.

Function Signature

def polygon_perimeter(self, input: Vector) -> Vector: ...

polygon_short_axis

This tool can be used to map the short axis of polygon features. The short axis is the shorter of the two primary axes of the minimum bounding box (MBB), i.e. the smallest box to completely enclose a feature. The short axis is drawn for each polygon in the input vector file such that it passes through the centre point of the MBB. The output file is therefore a vector of simple two-point polylines forming a vector field.

Function Signature

def polygon_short_axis(self, input: Vector) -> Vector: ...

polygonize

This tool outputs a vector polygon layer from two or more intersecting line features contained in one or more input vector line files. Each space enclosed by the intersecting line set is converted to a polygon and added to the output layer. This tool should not be confused with the lines_to_polygons tool, which can be used to convert a vector file of polylines into a set of polygons, simply by closing each line feature. The lines_to_polygons tool does not deal with line intersection in the same way that the polygonize tool does.

See Also

lines_to_polygons

Function Signature

def polygonize(self, input_layers: List[Vector]) -> Vector: ...

polygons_to_lines

This tool converts vector polygons into polylines, simply by modifying the Shapefile geometry type.

See Also

lines_to_polygons

Function Signature

def polygons_to_lines(self, input: Vector) -> Vector: ...

prewitt_filter

This tool performs a 3 × 3 Prewitt edge-detection filter on a raster image. The Prewitt filter is similar to the sobel_filter, in that it identifies areas of high slope in the input image through the calculation of slopes in the x and y directions. The Prewitt edge-detection filter, however, gives less weight to nearer cell values within the moving window, or kernel. For example, a Prewitt filter uses the following schemes to calculate x and y slopes:

X-direction slope

-1  0  1
-1  0  1
-1  0  1

Y-direction slope

 1  1  1
 0  0  0
-1 -1 -1

Each grid cell in the output image is assigned the square-root of the squared sum of the x and y slopes.

The user may optionally clip the output image distribution tails by a specified amount (e.g. 1%).

See Also

sobel_filter

Function Signature

def prewitt_filter(self, raster: Raster, clip_tails: float = 0.0) -> Raster: ...

principal_component_analysis

Principal component analysis (PCA) is a common data reduction technique that is used to reduce the dimensionality of multi-dimensional space. In the field of remote sensing, PCA is often used to reduce the number of bands of multi-spectral, or hyper-spectral, imagery. Image correlation analysis often reveals a substantial level of correlation among bands of multi-spectral imagery. This correlation represents data redundancy, i.e. fewer images than the number of bands are required to represent the same information, where the information is related to variation within the imagery. PCA transforms the original data set of n bands into n 'component' images, where each component image is uncorrelated with all other components. The technique works by transforming the axes of the multi-spectral space such that it coincides with the directions of greatest correlation. Each of these new axes are orthogonal to one another, i.e. they are at right angles. PCA is therefore a type of coordinate system transformation. The PCA component images are arranged such that the greatest amount of variance (or information) within the original data set, is contained within the first component and the amount of variance decreases with each component. It is often the case that the majority of the information contained in a multi-spectral data set can be represented by the first three or four PCA components. The higher-order components are often associated with noise in the original data set.

The user must specify the names of the multiple input images (inputs). Additionally, the user must specify whether to perform a standardized PCA (standardized) and the number of output components (num_comp) to generate (all components will be output unless otherwise specified). A standardized PCA is performed using the correlation matrix rather than the variance-covariance matrix. This is appropriate when the variances in the input images differ substantially, such as would be the case if they contained values that were recorded in different units (e.g. feet and meters) or on different scales (e.g. 8-bit vs. 16 bit).

Several outputs will be generated when the tool has completed. The PCA report will be embedded within an output (output) HTML file, which should be automatically displayed after the tool has completed. This report contains useful data summarizing the results of the PCA, including the explained variances of each factor, the Eigenvalues and Eigenvectors associated with factors, the factor loadings, and a scree plot. The first table that is in the PCA report lists the amount of explained variance (in non-cumulative and cumulative form), the Eigenvalue, and the Eigenvector for each component. Each of the PCA components refers to one of the newly created, transformed images that are created by running the tool. The amount of explained variance associated with each component can be thought of as a measure of how much information content within the original multi-spectral data set that a component has. The higher this value is, the more important the component is. This same information is presented in graphical form in the scree plot, found at the bottom of the PCA report. The Eigenvalue is another measure of the information content of a component and the eigenvector describes the mathematical transformation (rotation coordinates) that correspond to a particular component image.

Factor loadings are also output in a table within the PCA text report (second table). These loading values describe the correlation (i.e. r values) between each of the PCA components (columns) and the original images (rows). These values show you how the information contained in an image is spread among the components. An analysis of factor loadings can reveal useful information about the data set. For example, it can help to identify groups of similar images.

PCA is used to reduce the number of band images necessary for classification (i.e. as a data reduction technique), for noise reduction, and for change detection applications. When used as a change detection technique, the major PCA components tend to be associated with stable elements of the data set while variance due to land-cover change tends to manifest in the high-order, 'change components'. When used as a noise reduction technique, an inverse PCA is generally performed, leaving out one or more of the high-order PCA components, which account for noise variance.

Note: the current implementation reads every raster into memory at one time. This is because of the calculation of the covariances. As such, if the entire image stack cannot fit in memory, the tool will likely experience an out-of-memory error. This tool should be run using the wd flag to specify the working directory into which the component images will be written.

Function Signature

def principal_component_analysis(self, rasters: List[Raster], output_html_file: str, num_components: int = 2, standardized: bool = False) -> List[Raster]: ...
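
A minimal usage sketch (file names are placeholders); the component images are returned as a list and the report is written to the specified HTML file:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
bands = wbe.read_rasters(['band1.tif', 'band2.tif', 'band3.tif', 'band4.tif'])  # placeholder bands
components = wbe.principal_component_analysis(bands, 'pca_report.html', num_components=3, standardized=True)
for i, comp in enumerate(components):
    wbe.write_raster(comp, f'pca_component_{i + 1}.tif', True)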

print_geotiff_tags

This tool can be used to view the tags contained within a GeoTiff file. Viewing the tags of a GeoTiff file can be useful when trying to import the GeoTiff to different software environments. The user must specify the name of a GeoTiff file and the tag information will be output to the StdOut output stream (e.g. console). Note that tags that contain more than 100 values will be truncated in the output. GeoKeys will also be interpreted as per the GeoTIFF specification.

Function Signature

def print_geotiff_tags(self, file_name: str): ...

profile

This tool can be used to plot the data profile, along a set of one or more vector lines (lines), in an input (surface) digital elevation model (DEM), or other surface model. The data profile plots surface height (y-axis) against distance along profile (x-axis). The tool outputs an interactive SVG line graph embedded in an HTML document (output). If the vector lines file contains multiple line features, the output plot will contain each of the input profiles.

If you want to extract the longitudinal profile of a river, use the long_profile tool instead.

See Also

long_profile, hypsometric_analysis

Function Signature

def profile(self, lines_vector: Vector, surface: Raster, output_html_file: str) -> None: ...

profile_curvature

This tool calculates the profile curvature, or the rate of change in slope along a flow line, from a digital elevation model (DEM). Curvature is the second derivative of the topographic surface defined by a DEM. Profile curvature characterizes the degree of downslope acceleration or deceleration within the landscape (Gallant and Wilson, 2000). The user must input a DEM (dem). WhiteboxTools reports curvature in degrees multiplied by 100 for easier interpretation because curvature values are typically very small. The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z Conversion Factor. If the DEM is in the geographic coordinate system (latitude and longitude), the following equation is used:

zfactor = 1.0 / (111320.0 x cos(mid_lat))

where mid_lat is the latitude of the centre of the raster, in radians.

The algorithm uses the same formula for the calculation of profile curvature as Gallant and Wilson (2000). Profile curvature is negative for slope increasing downhill (convex flow profile, typical of upper slopes) and positive for slope decreasing downhill (concave, typical of lower slopes).

Reference

Gallant, J. C., and J. P. Wilson, 2000, Primary topographic attributes, in Terrain Analysis: Principles and Applications, edited by J. P. Wilson and J. C. Gallant pp. 51-86, John Wiley, Hoboken, N.J.

See Also

plan_curvature, tangential_curvature, total_curvature, slope, aspect

Function Signature

def profile_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

qin_flow_accumulation

This tool is used to generate a flow accumulation grid (i.e. contributing area) using the Qin et al. (2007) flow algorithm, not to be confused with the similarly named quinn_flow_accumulation tool. This algorithm is an example of a multiple-flow-direction (MFD) method because the flow entering each grid cell is routed to more than one downslope neighbour, i.e. flow divergence is permitted. It is based on a modification of the Freeman (1991; FD8FlowAccumulation) and Quinn et al. (1995; quinn_flow_accumulation) methods. The Qin method relates the degree of flow dispersion from a grid cell to the local maximum downslope gradient. Specifically, steeper terrain experiences more convergent flow while flatter slopes experience more flow divergence.

The following equations are used to calculate the portion flow (Fi) given to each neighbour, i:

Fi = Li(tanβ)^f(e) / Σi=1..n[Li(tanβ)^f(e)]

f(e) = min(e, eU) / eU × (pU - 1.1) + 1.1

Where Li is the contour length, which is 0.5 × cell size for cardinal directions and 0.354 × cell size for diagonal directions, and n = 8 represents each of the eight neighbouring grid cells. The exponent f(e) controls the proportion of flow allocated to each downslope neighbour of a grid cell, based on the local maximum downslope gradient (e), the user-specified upper boundary of e (eU; max_slope), and the upper boundary of the exponent (pU; exponent). Note that the original Qin (2007) implementation allowed for user-specified lower boundaries on the slope (eL) and exponent (pL) parameters as well. In this implementation, these parameters are assumed to be 0.0 and 1.1 respectively, and are not user adjustable. Also note, the exponent parameter should be less than 50.0, as higher values may cause numerical instability.

The user must specify the name (dem) of the input digital elevation model (DEM) and the output file (output). The DEM must have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using either the breach_depressions_least_cost or fill_depressions tool.

The user-specified non-dispersive, channel initiation threshold (threshold) is a flow-accumulation value (measured in upslope grid cells, which is directly proportional to area) above which flow dispersion is no longer permitted. Grid cells with flow-accumulation values above this area threshold will have their flow routed in a manner that is similar to the D8 single-flow-direction algorithm, directing all flow towards the steepest downslope neighbour. This is usually done under the assumption that flow dispersion, whilst appropriate on hillslope areas, is not realistic once flow becomes channelized. Importantly, the threshold parameter sets the spatial extent of the stream network, with lower values resulting in more extensive networks.

In addition to the input DEM, output file (output), and exponent, the user must also specify the output type (out_type). The output flow-accumulation can be: 1) cells (i.e. the number of inflowing grid cells), 2) catchment area (i.e. the upslope area), or 3) specific contributing area (i.e. the catchment area divided by the flow width). The default value is specific contributing area. The user must also specify whether the output flow-accumulation grid should be log-transformed (log), i.e. the output, if this option is selected, will be the natural-logarithm of the accumulated flow value. This is a transformation that is often performed to better visualize the contributing area distribution. Because contributing areas tend to be very high along valley bottoms and relatively low on hillslopes, when a flow-accumulation image is displayed, the distribution of values on hillslopes tends to be 'washed out' because the palette is stretched out to represent the highest values. Log-transformation provides a means of compensating for this phenomenon. Importantly, however, log-transformed flow-accumulation grids must not be used to estimate other secondary terrain indices, such as the wetness index (wetness_index), or relative stream power index (StreamPowerIndex).

Reference

Freeman, T. G. (1991). Calculating catchment area with divergent flow based on a regular grid. Computers and Geosciences, 17(3), 413-422.

Qin, C., Zhu, A. X., Pei, T., Li, B., Zhou, C., & Yang, L. 2007. An adaptive approach to selecting a flow‐partition exponent for a multiple‐flow‐direction algorithm. International Journal of Geographical Information Science, 21(4), 443-458.

Quinn, P. F., K. J. Beven, Lamb, R. 1995. The ln(a/tanβ) index: How to calculate it and how to use it within the TOPMODEL framework. Hydrological Processes 9(2): 161-182.

See Also

D8FlowAccumulation, quinn_flow_accumulation, FD8FlowAccumulation, DInfFlowAccumulation, MDInfFlowAccumulation, rho8_pointer, wetness_index

Function Signature

def qin_flow_accumulation(self, dem: Raster, out_type: str = "sca", exponent: float = 10.0, max_slope: float = 45.0, convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False) -> Raster: ...
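
A minimal usage sketch (file names are placeholders); it assumes fill_depressions can be called with default arguments to hydrologically correct the DEM first:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # placeholder DEM
filled = wbe.fill_depressions(dem)  # remove spurious depressions before flow accumulation
sca = wbe.qin_flow_accumulation(filled, out_type='sca', exponent=10.0, max_slope=45.0, log_transform=True)
wbe.write_raster(sca, 'qin_sca.tif', True)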

quantiles

This tool transforms values in an input raster (input) into quantiles. In statistics, quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way. There is one fewer quantile than the number of groups created. Thus quartiles are the three cut points that will divide a dataset into four equal-sized groups. Common quantiles have special names: for instance quartiles (4-quantiles), quintiles (5-quantiles), deciles (10-quantiles), and percentiles (100-quantiles).

The user must specify the desired number of quantiles, q (num_quantiles), in the output raster (output). The output raster will contain q equal-sized groups with values 1 to q, indicating which quantile group each grid cell belongs to.

See Also

histogram_equalization

Function Signature

def quantiles(self, raster: Raster, num_quantiles: int = 5) -> Raster: ...
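
A minimal usage sketch (file names are placeholders):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # placeholder input
quartiles = wbe.quantiles(dem, num_quantiles=4)  # output cells hold group numbers 1 to 4
wbe.write_raster(quartiles, 'dem_quartiles.tif', True)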

quinn_flow_accumulation

This tool is used to generate a flow accumulation grid (i.e. contributing area) using the Quinn et al. (1995) flow algorithm, sometimes called QMFD or QMFD2, and not to be confused with the similarly named qin_flow_accumulation tool. This algorithm is an example of a multiple-flow-direction (MFD) method because the flow entering each grid cell is routed to more than one downslope neighbour, i.e. flow divergence is permitted. The user must specify the name (dem) of the input digital elevation model (DEM). The DEM must have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using either the breach_depressions_least_cost or fill_depressions tool. A value must also be specified for the exponent parameter (exponent), a number that controls the degree of dispersion in the resulting flow-accumulation grid. A lower value yields greater apparent flow dispersion across divergent hillslopes. The exponent value (h) should probably be less than 50.0, as higher values may cause numerical instability, and values between 1 and 2 are most common. The following equations are used to calculate the portion flow (Fi) given to each neighbour, i:

Fi = Li(tanβ)^p / Σi=1..n[Li(tanβ)^p]

p = (A / threshold + 1)^h

Where Li is the contour length, which is 0.5 × cell size for cardinal directions and 0.354 × cell size for diagonal directions, n = 8 represents each of the eight neighbouring grid cells, and A is the flow accumulation value assigned to the current grid cell that is being apportioned downslope. The non-dispersive, channel initiation threshold (threshold) is a flow-accumulation value (measured in upslope grid cells, which is directly proportional to area) above which flow dispersion is no longer permitted. Grid cells with flow-accumulation values above this threshold will have their flow routed in a manner that is similar to the D8 single-flow-direction algorithm, directing all flow towards the steepest downslope neighbour. This is usually done under the assumption that flow dispersion, whilst appropriate on hillslope areas, is not realistic once flow becomes channelized. Importantly, the threshold parameter sets the spatial extent of the stream network, with lower values resulting in more extensive networks.

In addition to the input DEM, output file (output), and exponent, the user must also specify the output type (out_type). The output flow-accumulation can be: 1) cells (i.e. the number of inflowing grid cells), catchment area (i.e. the upslope area), or specific contributing area (i.e. the catchment area divided by the flow width). The default value is specific contributing area. The user must also specify whether the output flow-accumulation grid should be log-transformed (log), i.e. the output, if this option is selected, will be the natural-logarithm of the accumulated flow value. This is a transformation that is often performed to better visualize the contributing area distribution. Because contributing areas tend to be very high along valley bottoms and relatively low on hillslopes, when a flow-accumulation image is displayed, the distribution of values on hillslopes tends to be 'washed out' because the palette is stretched out to represent the highest values. Log-transformation provides a means of compensating for this phenomenon. Importantly, however, log-transformed flow-accumulation grids must not be used to estimate other secondary terrain indices, such as the wetness index (wetness_index), or relative stream power index (StreamPowerIndex). The Quinn et al. (1995) algorithm is commonly used to calculate wetness index.

Reference

Quinn, P. F., K. J. Beven, Lamb, R. 1995. The ln(a/tanβ) index: How to calculate it and how to use it within the TOPMODEL framework. Hydrological Processes 9(2): 161-182.

See Also

D8FlowAccumulation, qin_flow_accumulation, FD8FlowAccumulation, DInfFlowAccumulation, MDInfFlowAccumulation, rho8_pointer, wetness_index

Function Signature

def quinn_flow_accumulation(self, dem: Raster, out_type: str = "sca", exponent: float = 1.1, convergence_threshold: float = float('inf'), log_transform: bool = False, clip: bool = False) -> Raster: ...

radial_basis_function_interpolation

This tool interpolates vector points into a raster surface using a radial basis function (RBF) scheme.

Function Signature

def radial_basis_function_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, radius: float = 0.0, min_points: int = 0, cell_size: float = 0.0, base_raster: Raster = None, func_type: str = "thinplatespline", poly_order: str = "none", weight: float = 0.1) -> Raster: ...

radius_of_gyration

This tool can be used to calculate the radius of gyration (RoG) for the polygon features within a raster image. RoG measures how far across the landscape a polygon extends its reach on average, given by the mean distance between cells in a patch (Mcgarigal et al. 2002). The radius of gyration can be considered a measure of the average distance an organism can move within a patch before encountering the patch boundary from a random starting point (Mcgarigal et al. 2002). The input raster grid should contain polygons with unique identifiers greater than zero. The tool outputs a raster, in which the radius of gyration is assigned to each feature in the input file, along with accompanying text data.

Function Signature

def radius_of_gyration(self, raster: Raster) -> Tuple[Raster, str]: ...

raise_walls

This tool is used to increment the elevations in a digital elevation model (DEM) along the boundaries of a vector lines or polygon layer. The user must specify the raster DEM (dem), the wall vector file (walls), the wall height increment (wall_height), and an optional breach lines vector layer (breach_lines). The breach lines layer can be used to breach a hole in the raised walls at locations where it intersects the wall layer.

Function Signature

def raise_walls(self, dem: Raster, walls: Vector, breach_lines: Vector, wall_height: float = 100.0) -> Raster: ...

random_field

This tool can be used to create a raster image filled with random values drawn from a standard normal distribution. The values range from approximately -4.0 to 4.0, with a mean of 0 and a standard deviation of 1.0. The dimensions and georeferencing of the output random field (output) are based on an existing, user-specified raster grid (base). Note that the output field will not possess any spatial autocorrelation. If spatially autocorrelated random fields are desired, the turning_bands_simulation tool is more appropriate, or alternatively, the fast_almost_gaussian_filter tool may be used to force spatial autocorrelation onto the distribution of the random_field tool.

See Also

turning_bands_simulation, fast_almost_gaussian_filter

Function Signature

def random_field(self, base_raster: Raster = None) -> Raster: ...

random_sample

This tool can be used to create a random sample of grid cells. The user specifies the base raster file, which is used to determine the grid dimensions and georeference information for the output raster, and the number of random samples (n). The output grid will contain n non-zero grid cells, randomly distributed throughout the raster grid, and a background value of zero. This tool is useful when performing statistical analyses on raster images when you wish to obtain a random sample of data.

Only valid, non-nodata, cells in the base raster will be sampled.

Function Signature

def random_sample(self, base_raster: Raster = None, num_samples: int = 1000) -> Raster: ...

range_filter

This tool performs a range filter on an input image (input). A range filter assigns to each cell in the output grid the range (maximum - minimum) of the values contained within a moving window centred on each grid cell.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

total_filter

Function Signature

def range_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

raster_area

This tool estimates the area of each category, polygon, or patch in an input raster. The input raster must be categorical in data scale. Rasters with floating-point cell values are not good candidates for an area analysis. The user must specify whether the output is given in grid cells or map units (units). Map units are physical units, e.g. if the raster's scale is in metres, areas will be reported in square metres. Note that square metres can be converted into hectares by dividing by 10,000 and into square kilometres by dividing by 1,000,000. If the input raster is in geographic coordinates (i.e. latitude and longitude) a warning will be issued and areas will be estimated based on per-row calculated degree lengths.

The tool can be run with a raster output (output), a text output (out_text), or both. If neither output is specified, the tool will automatically output a raster named area.tif.

Zero values in the input raster may be excluded from the area analysis if the zero_back flag is used.

To calculate the area of vector polygons, use the polygon_area tool instead.

See Also

polygon_area, raster_histogram

Function Signature

def raster_area(self, raster: Raster, units: str = "map units", zero_background: bool = False) -> Tuple[Raster, str]: ...
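
A minimal usage sketch (file names are placeholders); the function returns both a raster of areas and a text report:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
patches = wbe.read_raster('patches.tif')  # placeholder categorical raster
area_raster, area_text = wbe.raster_area(patches, units='map units', zero_background=True)
wbe.write_raster(area_raster, 'patch_areas.tif', True)
print(area_text)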

raster_calculator

The raster_calculator tool can be used to perform complex mathematical operations on one or more input raster images on a cell-by-cell basis. The user inputs an expression and a list of input rasters (input_rasters), specified in the same order as the rasters contained within the statement. Rasters are treated like variables (that change value with each grid cell) and are specified within the statement as arbitrarily named variables contained within either double or single quotation marks (e.g. "DEM" > 500.0). The order of raster variables must match the order of rasters within the input_rasters list. Note that all input rasters must share the same number of rows and columns and spatial extent; if this is not the case, use the resample tool to convert one raster's grid resolution to the other's.

Example

(band3, band4) = wbe.read_rasters('band3.tif', 'band4.tif')
result = wbe.raster_calculator("('nir' - 'red') / ('nir' + 'red')", [band4, band3])
wbe.write_raster(result, 'result.tif', True)

The mathematical expression supports all of the standard algebraic unary and binary operators (+ - * / ^ %), as well as comparisons (< <= == != >= >) and logical operators (&& ||) with short-circuit support. The order of operations, from highest to lowest, is as follows.

Listed in order of precedence:

Order                   Symbol            Description
(Highest Precedence)    ^                 Exponentiation
                        %                 Modulo
                        /                 Division
                        *                 Multiplication
                        -                 Subtraction
                        +                 Addition
                        == != < <= >= >   Comparisons (all have equal precedence)
                        &&, and           Logical AND with short-circuit
(Lowest Precedence)     ||, or            Logical OR with short-circuit

Several common mathematical functions are also available for use in the input statement. For example:

 * log(base=10, val) -- Logarithm with optional 'base' as first argument.
 If not provided, 'base' defaults to '10'.
 Example: log(100) + log(e(), 100)

 * e()  -- Euler's number (2.718281828459045)
 * pi() -- π (3.141592653589793)

 * int(val)
 * ceil(val)
 * floor(val)
 * round(modulus=1, val) -- Round with optional 'modulus' as first argument.
     Example: round(1.23456) == 1 && round(0.001, 1.23456) == 1.235

 * abs(val)
 * sign(val)

 * min(val, ...) -- Example: min(1, -2, 3, -4) == -4
 * max(val, ...) -- Example: max(1, -2, 3, -4) == 3

 * sin(radians)    * asin(val)
 * cos(radians)    * acos(val)
 * tan(radians)    * atan(val)
 * sinh(val)       * asinh(val)
 * cosh(val)       * acosh(val)
 * tanh(val)       * atanh(val)

Notice that the constants pi and e must be specified as functions, pi() and e(). A number of global variables are also available to build conditional statements. These include the following:

Special Variable Names For Use In Conditional Statements:

Name         Description
nodata       An input raster's NoData value.
null         Same as nodata.
minvalue     An input raster's minimum value.
maxvalue     An input raster's maximum value.
rows         The input raster's number of rows.
columns      The input raster's number of columns.
row          The grid cell's row number.
column       The grid cell's column number.
rowy         The row's y-coordinate.
columnx      The column's x-coordinate.
north        The input raster's northern coordinate.
south        The input raster's southern coordinate.
east         The input raster's eastern coordinate.
west         The input raster's western coordinate.
cellsizex    The input raster's grid resolution in the x-direction.
cellsizey    The input raster's grid resolution in the y-direction.
cellsize     The input raster's average grid resolution.

The special variable names are case-sensitive. If more than one input raster is used in the statement, the functional forms of the nodata, null, minvalue, and maxvalue variables should be used, e.g. nodata("InputRaster"), otherwise the value is assumed to specify the attribute of the first raster in the statement. The following are examples of valid statements:

 "raster" != 300.0

 "raster" >= (minvalue + 35.0)

 ("raster1" >= 25.0) && ("raster2" <= 75.0) -- Evaluates to 1 where both conditions are true.

 tan("raster" * pi() / 180.0) > 1.0

 "raster" == nodata

Any grid cell in the input rasters containing the NoData value will be assigned NoData in the output raster, unless a NoData grid cell value allows the statement to evaluate to True (i.e. the mathematical expression includes the nodata value).

See Also

ConditionalEvaluation

Function Signature

def raster_calculator(self, expression: str, input_rasters: List[Raster]) -> Raster: ...

raster_cell_assignment

This tool can be used to create a new raster with the same coordinates and dimensions (i.e. rows and columns) as an existing base image. Grid cells in the new raster will be assigned either the row or column number or the x- or y-coordinate, depending on the selected option (assign flag). The user must also specify the name of the base image (input).

See Also

NewRasterFromBase

Function Signature

def raster_cell_assignment(self, raster: Raster, what_to_assign: str = "column") -> Raster: ...

raster_histogram

This tool produces a histogram (i.e. a frequency distribution graph) for the values contained within an input raster file (input). The histogram will be embedded within an output (output_html_file) HTML file, which should be automatically displayed after the tool has completed. The user may optionally specify the number of bins (num_bins) used in the histogram. If unspecified, this is calculated as:

num_bins = ceil(log2(rows × columns)) + 1

See Also

attribute_histogram

Function Signature

def raster_histogram(self, raster: Raster, output_html_file: str) -> None: ...

raster_perimeter

This tool can be used to measure the length of the perimeter of polygon features in a raster layer. The user must specify the name of the input raster file (input) and optionally an output raster (output), which is the raster layer containing the input features assigned the perimeter length. The user may also optionally choose to output text data (out_text). Raster-based perimeter estimation uses the accurate, anti-aliasing algorithm of Prashker (2009).

The input file must be of a categorical data type, containing discrete polygon features that have been assigned unique identifiers. Such rasters are often created by region-grouping (clump) a classified raster.

Reference

Prashker, S. (2009) An anti-aliasing algorithm for calculating the perimeter of raster polygons. Geotec, Ottawa and Geomatics Atlantic, Wolfville, NS.

See Also

raster_area, clump

Function Signature

def raster_perimeter(self, raster: Raster, units: str = "map units", zero_background: bool = False) -> Tuple[Raster, str]: ...

raster_streams_to_vector

This tool converts a raster stream file into a vector file. The user must specify an input raster streams file (streams), and an input D8 flow pointer file (d8_pointer). Streams in the input raster streams file are denoted by cells containing any positive, non-zero integer. A field in the output vector's database file, called STRM_VAL, will correspond to this positive integer value. The database file will also have a field for the length of each link in the stream network. The flow pointer file must be calculated from a DEM with all topographic depressions and flat areas removed and must be calculated using the D8 flow pointer algorithm (d8_pointer). The output vector will contain PolyLine features.

See Also

rasterize_streams, raster_to_vector_lines

Function Signature

def raster_streams_to_vector(self, streams: Raster, d8_pointer: Raster, esri_pointer: bool = False) -> Vector: ...
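
A minimal usage sketch (file names are placeholders), assuming the D8 pointer grid has already been computed from a hydrologically corrected DEM and that the output vector is saved with write_vector:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
streams = wbe.read_raster('streams.tif')  # placeholder raster stream network
pointer = wbe.read_raster('d8_pointer.tif')  # placeholder D8 flow pointer
stream_vector = wbe.raster_streams_to_vector(streams, pointer, esri_pointer=False)
wbe.write_vector(stream_vector, 'streams.shp')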

raster_summary_stats

This tool outputs distribution summary statistics for input raster images (input). The distribution statistics include the raster minimum, maximum, range, total, mean, variance, and standard deviation. These summary statistics are output to the system stdout.

The following is an example of the summary report:

*********************************
* Welcome to RasterSummaryStats *
*********************************
Reading data...

Number of non-nodata grid cells: 32083559
Number of nodata grid cells: 3916441
Image minimum: 390.266357421875
Image maximum: 426.0322570800781
Image range: 35.765899658203125
Image total: 13030334843.332886
Image average: 406.13745012929786
Image variance: 31.370027239143383
Image standard deviation: 5.600895217654351

See Also

raster_histogram, zonal_statistics

Function Signature

def raster_summary_stats(self, input: Raster) -> str: ...

raster_to_vector_lines

This tool converts raster line features into a vector of the POLYLINE VectorGeometryType. Grid cells associated with line features will contain non-zero, non-NoData cell values. The algorithm requires three passes of the raster. The first pass counts the number of line neighbours of each line cell; the second pass traces line segments starting from line ends (i.e. line cells with only one neighbouring line cell); lastly, the final pass traces any remaining line segments, which are likely forming closed loops (and therefore do not have line ends).

If the line raster contains streams, it is preferable to use the raster_streams_to_vector tool instead. That tool uses knowledge of flow directions to ensure connections between stream segments at confluence sites, whereas raster_to_vector_lines will not.

See Also

raster_to_vector_polygons, raster_to_vector_points, raster_streams_to_vector

Function Signature

def raster_to_vector_lines(self, raster: Raster) -> Vector: ...

raster_to_vector_points

Converts a raster data set to a vector of the POINT VectorGeometryType. The user must specify the name of a raster file (input) and the name of the output vector (output). Points will correspond with grid cell centre points. All grid cells containing non-zero, non-NoData values will be considered a point. The vector's attribute table will contain a field called 'VALUE' that will contain the cell value for each point feature.

See Also

raster_to_vector_polygons, raster_to_vector_lines

Function Signature

def raster_to_vector_points(self, raster: Raster) -> Vector: ...

raster_to_vector_polygons

Converts a raster data set to a vector of the POLYGON geometry type. The user must specify the name of a raster file (input) and the name of the output (output) vector. All grid cells containing non-zero, non-NoData values will be considered part of a polygon feature. The vector's attribute table will contain a field called 'VALUE' that will contain the cell value for each polygon feature, in addition to the standard feature ID (FID) attribute.

See Also

raster_to_vector_points, raster_to_vector_lines

Function Signature

def raster_to_vector_polygons(self, raster: Raster) -> Vector: ...

rasterize_streams

This tool can be used to rasterize an input vector stream network (streams) using the method of Lindsay (2016). The user inputs an existing raster (base_raster), from which the output raster's grid resolution is determined.

Reference

Lindsay JB. 2016. The practice of DEM stream burning revisited. Earth Surface Processes and Landforms, 41(5): 658–668. DOI: 10.1002/esp.3888

See Also

raster_streams_to_vector

Function Signature

def rasterize_streams(self, streams: Vector, base_raster: Raster = None, zero_background: bool = False, use_feature_id: bool = False) -> Raster: ...

read_lidar

Returns a new Lidar object, read from a path-file string.

Parameters

  • file_name: str - The file name. If file_name does not contain the full file path, the file will be read from the Whitebox working directory.

Example

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
my_lidar = wbe.read_lidar("path/containing/file/file_name.laz", file_mode='w')

read_lidars

Reads multiple LiDAR files into memory at once, returning a list of Lidar objects.

Parameters

  • file_names: List[str] - The file names. If a file name does not contain the full file path, the file will be read from the Whitebox working directory.

Example

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

# Notice that you can use tuple destructuring on the resulting list of rasters
tile1, tile2, tile3 = wbe.read_lidars(['tile1.laz', 'tile2.laz', 'tile3.laz'])

read_raster

Returns a new Raster object, read into memory from a path-file string.

Parameters

  • file_name: str - The file name. If file_name does not contain the full file path, the file will be read from the Whitebox working directory.

Example

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
my_raster = wbe.read_raster("path/containing/file/file_name.tif")

read_rasters

Reads multiple raster files into memory at once, returning a list of Raster objects.

Parameters

  • file_names: List[str] - The list of file name strings. If any of the files do not contain the full file path, the file will be read from the Whitebox working directory.

Example

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

# Notice that you can use tuple destructuring on the resulting list of rasters
band1, band2, band3 = wbe.read_rasters(['band1.tif', 'band2.tif', 'band3.tif'])

read_vector

Reads a vector from disc into an in-memory Vector object.

Parameters

  • file_name: str - The file name. If file_name does not contain the full file path, the file will be read from the Whitebox working directory.

read_vectors

Reads multiple vectors from file into a list of in-memory Vector objects.

Parameters

  • file_names: List[str] - The list of file names. If any of the file names do not contain the full file path, the file will be read from the Whitebox working directory.
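
Example

The following sketch mirrors the read_rasters example above; the file names are placeholders.

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

# Tuple destructuring also works on the returned list of vectors
roads, rivers = wbe.read_vectors(['roads.shp', 'rivers.shp'])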

reciprocal

This tool creates a new raster (output) in which each grid cell is equal to one divided by the grid cell values in the input raster image (input). NoData values in the input image will be assigned NoData values in the output image.

Function Signature

def reciprocal(self, raster: Raster) -> Raster: ...

reclass

This tool creates a new raster in which the value of each grid cell is determined by an input raster (input) and a collection of user-defined classes. The user must specify the New value, the From value, and the To Just Less Than value of each class triplet of the reclass_values parameter. Classes must be mutually exclusive. Reclass values must be presented as lists-of-lists, where each row of the list contains either three (assign_mode=False) or two (assign_mode=True) values. If assign_mode is True, then the pair of values represents New value and Old value keys. As an example:

reclassed = wbe.reclass(raster, [[1.0, 0.0, 100.0], [2.0, 100.0, 200.0]], assign_mode=False)

Function Signature

def reclass(self, raster: Raster, reclass_values: List[List[float]], assign_mode: bool = False) -> Raster: ...

reclass_equal_interval

This tool reclassifies the values in an input raster (input) file based on an equal-interval scheme, where the user must specify the reclass interval value (interval), the starting value (start_val), and optionally, the ending value (end_val). Grid cells containing values that fall outside of the range defined by the starting and ending values, will be assigned their original values in the output grid. If the user does not specify an ending value, the tool will assign a very large positive value.

See Also

reclass

Function Signature

def reclass_equal_interval(self, raster: Raster, interval_size: float, start_value: float = float('-inf'), end_value: float = float('inf')) -> Raster: ...
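
Example

A short sketch, assuming a DEM named 'dem.tif'; the 100-unit interval and 0.0 starting value are illustrative only:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('dem.tif')
reclassed = wbe.reclass_equal_interval(dem, interval_size=100.0, start_value=0.0)
wbe.write_raster(reclassed, 'dem_reclassed.tif')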

rectangular_grid_from_raster_base

This tool can be used to create a rectangular vector grid. The extent of the rectangular grid is based on the extent of an input base raster (base). The user may also specify the origin of the grid (x_origin and y_origin, defaults are 0.0) and the grid cell width and height (width and height).

See Also

rectangular_grid_from_vector_base, hexagonal_grid_from_raster

Function Signature

def rectangular_grid_from_raster_base(self, base: Raster, width: float, height: float, x_origin: float = 0.0, y_origin: float = 0.0) -> Vector: ...
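
Example

A minimal sketch, assuming a base raster named 'dem.tif' and 1000 × 1000 map-unit grid cells:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

base = wbe.read_raster('dem.tif')
grid = wbe.rectangular_grid_from_raster_base(base, width=1000.0, height=1000.0)
wbe.write_vector(grid, 'grid.shp')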

rectangular_grid_from_vector_base

This tool can be used to create a rectangular vector grid. The extent of the rectangular grid is based on the extent of an input base vector (base). The user may also specify the origin of the grid (x_origin and y_origin, defaults are 0.0) and the grid cell width and height (width and height).

See Also

rectangular_grid_from_raster_base, hexagonal_grid_from_vector

Function Signature

def rectangular_grid_from_vector_base(self, base: Vector, width: float, height: float, x_origin: float = 0.0, y_origin: float = 0.0) -> Vector: ...

reinitialize_attribute_table

Reinitializes a vector's attribute table, deleting all fields but the feature ID (FID). Caution: this tool overwrites the input file's attribute table.

Function Signature

def reinitialize_attribute_table(self, input: Vector) -> None: ...

related_circumscribing_circle

This tool can be used to calculate the related circumscribing circle (Mcgarigal et al. 2002) for vector polygon features. The related circumscribing circle values calculated for each vector polygon feature will be placed in the accompanying attribute table as a new field (RC_CIRCLE).

Related circumscribing circle (RCC) is defined as:

RCC = 1 - A / Ac

Where A is the polygon's area and Ac the area of the smallest circumscribing circle.

Theoretically, related_circumscribing_circle ranges from 0 to 1, where a value of 0 indicates a circular polygon and a value of 1 indicates a highly elongated shape. The circumscribing circle provides a measure of polygon elongation. Unlike the elongation_ratio, however, it does not provide a measure of polygon direction in addition to overall elongation. Like the elongation_ratio and linearity_index, related_circumscribing_circle is not an adequate measure of overall polygon narrowness, because a highly sinuous but narrow patch will have a low related circumscribing circle index owing to the compact nature of these polygons.

Note: Holes are excluded from the area calculation of polygons.

Function Signature

def related_circumscribing_circle(self, input: Vector) -> Vector: ...

relative_aspect

This tool creates a new raster in which each grid cell is assigned the terrain aspect relative to a user-specified direction (azimuth). Relative terrain aspect is the angular distance (measured in degrees) between the land-surface aspect and the assumed regional wind azimuth (Böhner and Antonić, 2009). It is bound between 0-degrees (windward direction) and 180-degrees (leeward direction). Relative terrain aspect is the simplest of the measures of topographic exposure to wind, taking into account terrain orientation only and neglecting the influences of topographic shadowing by distant landforms and the deflection of wind by topography.

The user must input a digital elevation model (DEM) (dem) and an azimuth (i.e. a wind direction). The Z Conversion Factor (z_factor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z Conversion Factor.

Reference

Böhner, J., and Antonić, O. (2009). Land-surface parameters specific to topo-climatology. Developments in Soil Science, 33, 195-226.

See Also

aspect

Function Signature

def relative_aspect(self, dem: Raster, azimuth: float = 0.0, z_factor: float = 1.0) -> Raster: ...
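
Example

A brief sketch, assuming a DEM named 'dem.tif' and a regional wind direction of 315 degrees (north-west):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('dem.tif')
rel_aspect = wbe.relative_aspect(dem, azimuth=315.0)
wbe.write_raster(rel_aspect, 'relative_aspect.tif')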

relative_stream_power_index

This tool can be used to calculate the relative stream power (RSP) index. This index is directly related to the stream power if the assumption can be made that discharge is directly proportional to upslope contributing area (As; specific_catchment_area). The index is calculated as:

RSP = As^p × tan(β)

where As is the specific catchment area (i.e. the upslope contributing area per unit contour length) estimated using one of the available flow accumulation algorithms; β is the local slope gradient in degrees (slope); and, p (exponent) is a user-defined exponent term that controls the location-specific relation between contributing area and discharge. Notice that As must not be log-transformed prior to being used; As is commonly log-transformed to enhance visualization of the data. The slope raster can be created from the base digital elevation model (DEM) using the slope tool. The input images must have the same grid dimensions.

Reference

Moore, I. D., Grayson, R. B., and Ladson, A. R. (1991). Digital terrain modelling: a review of hydrological, geomorphological, and biological applications. Hydrological processes, 5(1), 3-30.

See Also

sediment_transport_index, slope, D8FlowAccumulation, DInfFlowAccumulation, FD8FlowAccumulation

Function Signature

def relative_stream_power_index(self, specific_catchment_area: Raster, slope: Raster, exponent: float = 1.0) -> Raster: ...
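
Example

The sketch below chains together other functions documented in this manual to derive the required inputs; the file name and the choice of rho8_flow_accum for estimating the specific catchment area are illustrative only:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('dem_filled.tif')  # assumed to be depressionless
sca = wbe.rho8_flow_accum(dem, out_type='sca')
slope = wbe.slope(dem, units='degrees')
rsp = wbe.relative_stream_power_index(sca, slope, exponent=1.0)
wbe.write_raster(rsp, 'rsp.tif')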

relative_topographic_position

Relative topographic position (RTP) is an index of local topographic position (i.e. how elevated or low-lying a site is relative to its surroundings) and is a modification of percent elevation range (PER; percent_elev_range) that accounts for the elevation distribution. Rather than positioning the central cell's elevation solely between the filter extrema, RTP is a piece-wise function that positions the central elevation relative to the minimum (zmin), mean (μ), and maximum values (zmax), within a local neighbourhood of a user-specified size (filter_size_x, filter_size_y), such that:

RTP = (z0 − μ) / (μ − zmin), if z0 < μ

OR

RTP = (z0 − μ) / (zmax - μ), if z0 >= μ 

The resulting index is bound by the interval [−1, 1], where the sign indicates if the cell is above or below the filter mean. Although RTP uses the mean to define two linear functions, the reliance on the filter extrema is expected to result in sensitivity to outliers. Furthermore, the use of the mean implies an assumption of a unimodal and symmetrical elevation distribution.

In many cases, Elevation Percentile (ElevPercentile) and deviation from mean elevation (DevFromMeanElev) provide more suitable and robust measures of relative topographic position.

Reference

Newman, D. R., Lindsay, J. B., and Cockburn, J. M. H. (2018). Evaluating metrics of local topographic position for multiscale geomorphometric analysis. Geomorphology, 312, 40-50.

See Also

DevFromMeanElev, DiffFromMeanElev, ElevPercentile, percent_elev_range

Function Signature

def relative_topographic_position(self, dem: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

remove_duplicates

This tool removes duplicate points from a LiDAR data set. Duplicates are determined by their x, y, and optionally (include_z) z coordinates.

See Also

eliminate_coincident_points

Function Signature

def remove_duplicates(self, input: Lidar, include_z: bool = False) -> Lidar: ...

remove_off_terrain_objects

This tool can be used to create a bare-earth DEM from a fine-resolution digital surface model. The tool is typically applied to LiDAR DEMs, which frequently contain numerous off-terrain objects (OTOs) such as buildings, trees and other vegetation, cars, fences and other anthropogenic objects. The algorithm works by finding and removing steep-sided peaks within the DEM. All peaks within a sub-grid, with a dimension of the user-specified maximum OTO size (filter_size), in pixels, are identified and removed. Each of the edge cells of the peaks is then examined to see if it has a slope that is less than the user-specified minimum OTO edge slope (slope_threshold), and a back-filling procedure is used. This ensures that OTOs are distinguished from natural topographic features such as hills. The DEM is preprocessed using a white top-hat transform, such that elevations are normalized for the underlying ground surface.

Note that this tool is appropriate to apply to rasterized LiDAR DEMs. Use the lidar_ground_point_filter tool to remove or classify OTOs within a LiDAR point-cloud.

Reference

J.B. Lindsay (2018) A new method for the removal of off-terrain objects from LiDAR-derived raster surface models. Available online, DOI: 10.13140/RG.2.2.21226.62401

See Also

map_off_terrain_objects, tophat_transform, lidar_ground_point_filter

Function Signature

def remove_off_terrain_objects(self, dem: Raster, filter_size: int = 11, slope_threshold: float = 15.0) -> Raster: ...
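
Example

A minimal sketch, assuming a fine-resolution LiDAR-derived surface model named 'dsm.tif'; the filter size and slope threshold shown are simply the defaults made explicit:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

dsm = wbe.read_raster('dsm.tif')
bare_earth = wbe.remove_off_terrain_objects(dsm, filter_size=11, slope_threshold=15.0)
wbe.write_raster(bare_earth, 'bare_earth.tif')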

remove_polygon_holes

This tool can be used to remove holes from the features within a vector polygon file. The user must specify the name of the input vector file, which must be of a polygon VectorGeometryType, and the name of the output file.

Function Signature

def remove_polygon_holes(self, input: Vector) -> Vector: ...

remove_short_streams

This tool can be used to remove stream links in a stream network that are shorter than a user-specified length (min_length). The user must input a streams raster image (streams_raster) and D8 pointer (flow direction) image (d8_pntr). Stream cells are designated in the streams raster as all positive, nonzero values. Thus all non-stream or background grid cells are commonly assigned either zeros or NoData values. The pointer raster is used to traverse the stream network and should only be created using the D8 algorithm (d8_pointer).

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the user must specify esri_pntr=True.

See Also

extract_streams, d8_pointer

Function Signature

def remove_short_streams(self, d8_pntr: Raster, streams_raster: Raster, min_length: float = 0.0, esri_pntr: bool = False) -> Raster: ...

remove_spurs

This image processing tool removes small irregularities (i.e. spurs) on the boundaries of objects in a Boolean input raster image (input). This operation is sometimes called pruning. Remove Spurs is a useful tool for cleaning an image before performing a line thinning operation. In fact, the input image need not be truly Boolean (i.e. contain only 1's and 0's). All non-zero, positive values are considered to be foreground pixels while all zero valued cells are considered background pixels.

Note: Unlike other filter-based operations in WhiteboxTools, this algorithm can't easily be parallelized because the output raster must be read and written to during the same loop.

See Also

line_thinning

Function Signature

def remove_spurs(self, raster: Raster, max_iterations: int = 10) -> Raster: ...

repair_stream_vector_topology

This tool can be used to resolve many of the topological errors and inconsistencies associated with manually digitized vector stream networks, i.e. hydrography data. A properly structured stream network should consist of a series of stream segments that connect a channel head to a downstream confluence, or an upstream confluence to a downstream confluence/outlet. This tool will join vector arcs that connect at arbitrary, non-confluence points along stream segments. It also splits an arc where a tributary stream connects at a mid-point, thereby creating a proper confluence where two upstream tributaries converge into a downstream segment. The tool also handles non-connecting tributaries caused by dangling arcs, i.e. overshoots and undershoots.

The user must specify the name of the input vector stream network (input) and the output file (output). Additionally, a distance threshold for snapping dangling arcs (snap) must be specified. This distance is in the input layer's x-y units. The tool works best on projected input data, however, if the input is in geographic coordinates (latitude and longitude), then specifying a small valued snap distance is advisable. Notice that the attributes of the input layer will not be carried over to the output file because there is not a one-for-one feature correspondence between the two files due to the joins and splits of stream segments. Instead the output attribute table will only contain a feature ID (FID) entry.

Note: this tool should be used to pre-process vector streams that are input to the vector_stream_network_analysis tool.

See Also

vector_stream_network_analysis, fix_dangling_arcs

resample

This tool can be used to modify the grid resolution of one or more rasters. The user specifies one or more input rasters (input_rasters). The resolution of the output raster is determined either using a specified cell_size parameter, in which case the output extent is determined by the combined extent of the inputs, or by an optional base raster (base_raster), in which case the output raster spatial extent matches that of the base file. This operation is similar to the mosaic tool, except that resample modifies the output resolution. The resample tool may also be used with a single input raster (when the user wants to modify its spatial resolution), whereas mosaic always includes multiple inputs.

If the input source images are more extensive than the base image (if optionally specified), these areas will not be represented in the output image. Grid cells in the output image that are not overlapping with any of the input source images will be assigned the NoData value, which will be the same as that of the first input image. Grid cells in the output image that overlap with multiple input raster cells will be assigned the last input value in the stack. Thus, the order of input images is important.

See Also

mosaic

Function Signature

def resample(self, input_rasters: List[Raster], cell_size: float = 0.0, base_raster: Raster = None, method: str = "cc") -> Raster: ...
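
Example

A brief sketch, assuming two overlapping image tiles and an existing base raster; the file names are hypothetical:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

tiles = wbe.read_rasters(['tile1.tif', 'tile2.tif'])
base = wbe.read_raster('base.tif')
resampled = wbe.resample(tiles, base_raster=base, method='cc')
wbe.write_raster(resampled, 'resampled.tif')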

rescale_value_range

This tool performs a min-max contrast stretch on an input raster (raster), rescaling the cell values to a new range defined by out_min_val and out_max_val. Input values may optionally be clipped to the clip_min and clip_max bounds prior to rescaling.

Function Signature

def rescale_value_range(self, raster: Raster, out_min_val: float, out_max_val: float, clip_min: float = float('inf'), clip_max: float = float('-inf')) -> Raster: ...

rgb_to_ihs

This tool transforms three raster images of multispectral data (red, green, and blue channels) into their equivalent intensity, hue, and saturation (IHS; sometimes HSI or HIS) images. Intensity refers to the brightness of a color, hue is related to the dominant wavelength of light and is perceived as color, and saturation is the purity of the color (Koutsias et al., 2000). There are numerous algorithms for performing a red-green-blue (RGB) to IHS transformation. This tool uses the transformation described by Haydn (1982). Note that, based on this transformation, the output IHS values follow the ranges:

0 < I < 1

0 < H < 2π

0 < S < 1

The user must specify either the names of the red, green, and blue images (red, green, blue) or, alternatively, an RGB colour-composite image (composite). Importantly, the band images need not necessarily correspond with the specific regions of the electromagnetic spectrum that are red, green, and blue. Rather, the input images are three multispectral images that could be used to create an RGB colour composite. The tool returns the intensity, hue, and saturation images as a tuple. Image enhancements, such as contrast stretching, are often performed on the IHS components, which are then inverse transformed back into RGB components to create an improved colour composite image.

References

Haydn, R., Dalke, G.W. and Henkel, J. (1982) Application of the IHS color transform to the processing of multisensor data and image enhancement. Proc. of the International Symposium on Remote Sensing of Arid and Semiarid Lands, Cairo, 599-616.

Koutsias, N., Karteris, M., and Chuvico, E. (2000). The use of intensity-hue-saturation transformation of Landsat-5 Thematic Mapper data for burned land mapping. Photogrammetric Engineering and Remote Sensing, 66(7), 829-840.

See Also

ihs_to_rgb, balance_contrast_enhancement, direct_decorrelation_stretch

Function Signature

def rgb_to_ihs(self, red: Optional[Raster] = None, green: Optional[Raster] = None, blue: Optional[Raster] = None, composite: Optional[Raster] = None) -> Tuple[Raster, Raster, Raster]: ...
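
Example

A short sketch showing the three-raster tuple return value; the band file names are hypothetical:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

red = wbe.read_raster('band3.tif')
green = wbe.read_raster('band2.tif')
blue = wbe.read_raster('band1.tif')

intensity, hue, saturation = wbe.rgb_to_ihs(red=red, green=green, blue=blue)
wbe.write_raster(intensity, 'intensity.tif')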

rho8_flow_accum

This tool is used to generate a flow accumulation grid (i.e. contributing area) using the Fairfield and Leymarie (1991) flow algorithm, often called Rho8. Like the D8 flow method, this algorithm is an example of a single-flow-direction (SFD) method because the flow entering each grid cell is routed to only one downslope neighbour, i.e. flow divergence is not permitted. The user must specify the input raster (raster), which may be either a digital elevation model (DEM) or a Rho8 pointer file (see rho8_pointer). If a DEM is input, it must have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using either the breach_depressions_least_cost or fill_depressions tool.

In addition to the input raster, the user must also specify the output type (out_type). The output flow-accumulation can be: 1) cells (i.e. the number of inflowing grid cells), 2) catchment area (i.e. the upslope area), or 3) specific contributing area (i.e. the catchment area divided by the flow width). The default value is specific contributing area. The user must also specify whether the output flow-accumulation grid should be log-transformed (log_transform), i.e. the output, if this option is selected, will be the natural-logarithm of the accumulated flow value. This is a transformation that is often performed to better visualize the contributing area distribution. Because contributing areas tend to be very high along valley bottoms and relatively low on hillslopes, when a flow-accumulation image is displayed, the distribution of values on hillslopes tends to be 'washed out' because the palette is stretched out to represent the highest values. Log-transformation provides a means of compensating for this phenomenon. Importantly, however, log-transformed flow-accumulation grids must not be used to estimate other secondary terrain indices, such as the wetness index (wetness_index) or relative stream power index (relative_stream_power_index).

If a Rho8 pointer is used as the input raster, the user must specify this (input_is_pointer=True). Similarly, if a pointer input is used and the pointer follows the ESRI pointer convention, rather than the default WhiteboxTools convention for pointer files, then this must also be specified (esri_pntr).

Reference

Fairfield, J., and Leymarie, P. 1991. Drainage networks from grid digital elevation models. Water Resources Research, 27(5), 709-717.

See Also

rho8_pointer, D8FlowAccumulation, qin_flow_accumulation, FD8FlowAccumulation, DInfFlowAccumulation, MDInfFlowAccumulation, wetness_index

Function Signature

def rho8_flow_accum(self, raster: Raster, out_type: str = "sca", log_transform: bool = False, clip: bool = False, input_is_pointer: bool = False, esri_pntr: bool = False) -> Raster: ...

rho8_pointer

This tool is used to generate a flow pointer grid (i.e. flow direction) using the stochastic Rho8 (Fairfield and Leymarie, 1991) algorithm. Like the D8 flow algorithm (d8_pointer), Rho8 is a single-flow-direction (SFD) method because the flow entering each grid cell is routed to only one downslope neighbour, i.e. flow divergence is not permitted. The user must specify the name of a digital elevation model (DEM) file (dem) that has been hydrologically corrected to remove all spurious depressions and flat areas (breach_depressions_least_cost, fill_depressions). The output of this tool is often used as the input to the rho8_flow_accum function.

By default, the Rho8 flow pointers use the following clockwise, base-2 numeric index convention:

| 64 | 128 | 1 |
| 32 | 0   | 2 |
| 16 | 8   | 4 |

Notice that grid cells that have no lower neighbours are assigned a flow direction of zero. In a DEM that has been pre-processed to remove all depressions and flat areas, this condition will only occur along the edges of the grid. If the esri_pntr parameter is set to True, the output will use the ESRI flow direction convention instead.

Grid cells possessing the NoData value in the input DEM are assigned the NoData value in the output image.

Memory Usage

The peak memory usage of this tool is approximately 10 bytes per grid cell.

References

Fairfield, J., and Leymarie, P. 1991. Drainage networks from grid digital elevation models. Water Resources Research, 27(5), 709-717.

See Also

Rho8FlowAccumulation, d8_pointer, fd8_pointer, DInfPointer, breach_depressions_least_cost, fill_depressions

Function Signature

def rho8_pointer(self, dem: Raster, esri_pntr: bool = False) -> Raster: ...

roberts_cross_filter

This tool performs Robert's Cross edge-detection filter on a raster image. The roberts_cross_filter is similar to the sobel_filter and prewitt_filter, in that it identifies areas of high slope in the input image through the calculation of slopes in the x and y directions. A Robert's Cross filter uses the following 2 × 2 scheme to calculate slope magnitude, |G|:

| P1 | P2 |
| P3 | P4 |

|G| = |P1 - P4| + |P2 - P3|

Note that the filter is centered on pixel P1, and P2, P3, and P4 are the neighbouring pixels to the east, south, and south-east respectively.

The output image may be overwhelmed by a relatively small number of high-valued pixels, stretching the palette. The user may therefore optionally clip the output image distribution tails by a specified amount (clip_amount) for improved visualization.

Reference

Fisher, R. 2004. Hypertext Image Processing Resources 2 (HIPR2). Available online: http://homepages.inf.ed.ac.uk/rbf/HIPR2/roberts.htm

See Also

sobel_filter, prewitt_filter

Function Signature

def roberts_cross_filter(self, raster: Raster, clip_amount: float = 0.0) -> Raster: ...

root_mean_square_error

This tool calculates the root-mean-square-error (RMSE) or root-mean-square-difference (RMSD) from two input rasters. If the two input rasters possess the same number of rows and columns, the RMSE is calculated on a cell-by-cell basis, otherwise bilinear resampling is used. In addition to RMSE, the tool also reports other common accuracy statistics including the mean vertical error, the 95% confidence limit (RMSE × 1.96), and the 90% linear error (LE90), which is the 90th percentile of the residuals between the two raster surfaces. The LE90 is the most robust of the reported accuracy statistics when the residuals are non-Gaussian. The LE90 requires sorting the residual values, which can be a relatively slow operation for larger rasters.

See Also

paired_sample_t_test, wilcoxon_signed_rank_test

Function Signature

def root_mean_square_error(self, input: Raster, reference: Raster) -> str: ...
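
Example

Because this function returns its report as a string, the result can simply be printed; the file names below are hypothetical:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('lidar_dem.tif')
reference = wbe.read_raster('reference_dem.tif')
report = wbe.root_mean_square_error(dem, reference)
print(report)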

ruggedness_index

The terrain ruggedness index (TRI) is a measure of local topographic relief. The TRI calculates the root-mean-square-deviation (RMSD) for each grid cell in a digital elevation model (DEM), calculating the residuals (i.e. elevation differences) between a grid cell and its eight neighbours. Notice that, unlike the output of this tool, the original Riley et al. (1999) TRI did not normalize for the number of cells in the local window (i.e. it is a root-square-deviation only). However, using the mean has the advantage of allowing for the varying number of neighbouring cells along the grid edges and in areas bordering NoData cells. This modification does, however, imply that the output of this tool cannot be directly compared with the index ranges of level to extremely rugged terrain provided in Riley et al. (1999).

Reference

Riley, S. J., DeGloria, S. D., and Elliot, R. (1999). Index that quantifies topographic heterogeneity. Intermountain Journal of Sciences, 5(1-4), 23-27.

See Also

relative_topographic_position, DevFromMeanElev

Function Signature

def ruggedness_index(self, input: Raster) -> Raster: ...

scharr_filter

This tool performs a Scharr edge-detection filter on a raster image. The Scharr filter is similar to the sobel_filter and prewitt_filter, in that it identifies areas of high slope in the input image through the calculation of slopes in the x and y directions. A 3 × 3 Scharr filter uses the following schemes to calculate x and y slopes:

X-direction slope

| 3  | 0 | -3  |
| 10 | 0 | -10 |
| 3  | 0 | -3  |

Y-direction slope

| 3  | 10  | 3  |
| 0  | 0   | 0  |
| -3 | -10 | -3 |

Each grid cell in the output image is assigned the square-root of the squared sum of the x and y slopes.

The output image may be overwhelmed by a relatively small number of high-valued pixels, stretching the palette. The user may therefore optionally clip the output image distribution tails by a specified amount (clip_tails) for improved visualization.

See Also

sobel_filter, prewitt_filter

Function Signature

def scharr_filter(self, raster: Raster, clip_tails: float = 0.0) -> Raster: ...

sediment_transport_index

This tool calculates the sediment transport index, sometimes referred to as the length-slope (LS) factor, based on input specific contributing area (As, i.e. the upslope contributing area per unit contour length; specific_catchment_area) and slope gradient (β, measured in degrees; slope) rasters. Moore et al. (1991) state that the physical potential for sheet and rill erosion in upland catchments can be evaluated by the product R K LS, a component of the Universal Soil Loss Equation (USLE), where R is a rainfall and runoff erosivity factor, K is a soil erodibility factor, and LS is the length-slope factor that accounts for the effects of topography on erosion. To predict erosion at a point in the landscape the LS factor can be written as:

LS = (n + 1) × (As / 22.13)^n × (sin(β) / 0.0896)^m

where n = 0.4 (sca_exponent) and m = 1.3 (slope_exponent) in its original formulation.

This index is derived from unit stream-power theory and is sometimes used in place of the length-slope factor in the revised universal soil loss equation (RUSLE) for slope lengths less than 100 m and slopes less than 14 degrees. Like many hydrological land-surface parameters, sediment_transport_index assumes that contributing area is directly related to discharge. Notice that As must not be log-transformed prior to being used; As is commonly log-transformed to enhance visualization of the data. Also, As can be derived using any of the available flow accumulation tools, although better results are usually obtained from multiple-flow direction algorithms such as DInfFlowAccumulation and FD8FlowAccumulation. The slope raster can be created from the base digital elevation model (DEM) using the slope tool. The input images must have the same grid dimensions.

Reference

Moore, I. D., Grayson, R. B., and Ladson, A. R. (1991). Digital terrain modelling: a review of hydrological, geomorphological, and biological applications. Hydrological processes, 5(1), 3-30.

See Also

StreamPowerIndex, DInfFlowAccumulation, FD8FlowAccumulation

Function Signature

def sediment_transport_index(self, specific_catchment_area: Raster, slope: Raster, sca_exponent: float = 0.4, slope_exponent: float = 1.3) -> Raster: ...

select_tiles_by_polygon

This tool copies LiDAR tiles overlapping with a polygon into an output directory. In actuality, the tool performs point-in-polygon operations, using the four corner points, the center point, and the four mid-edge points of each LiDAR tile bounding box and the polygons. This representation of overlapping geometry aids with performance. This approach generally works well when the polygon size is large relative to the LiDAR tiles. If, however, the input polygon is small relative to the tile size, this approach may miss copying some tiles. It is advisable to buffer the polygon if this occurs.

See Also

lidar_tile_footprint

Function Signature

def select_tiles_by_polygon(self, input_directory: str, output_directory: str, polygons: Vector) -> None: ...

set_nodata_value

This tool re-assigns a user-defined background value in an input raster image as the NoData value. More precisely, the NoData value will be changed to the specified background value and any existing grid cells containing the previous NoData value, if it had been defined, will be changed to this new value. Most WhiteboxTools tools recognize NoData grid cells and treat them specially. NoData grid cells are also often displayed transparently by GIS software. The user must specify the input raster and the background value (back_value). The default background value is zero, although any numeric value is possible.

This tool differs from the ModifyNoDataValue tool in that it simply updates the NoData value in the raster header, without modifying pixel values. The ModifyNoDataValue tool will update the value in the header, and then modify each existing NoData pixel to contain this new value. Also, set_nodata_value does not overwrite the input file, while the ModifyNoDataValue tool does.

This tool may result in a change in the data type of the output image compared with the input image, if the background value is set to a negative value and the input image data type is an unsigned integer. In some cases, this may result in a doubling of the storage size of the output image.

See Also

ModifyNoDataValue, convert_nodata_to_zero, IsNoData

Function Signature

def set_nodata_value(self, raster: Raster, back_value: float = 0.0) -> Raster: ...

shape_complexity_index_raster

This tool calculates a type of shape complexity index for raster objects. The index is equal to the average number of intersections of the group of vertical and horizontal transects passing through an object. Simple objects will have a shape complexity index of 1.0 and more complex shapes, including those containing numerous holes or that are winding in shape, will have higher index values. Objects in the input raster (raster) are designated by their unique identifiers. Identifier values should be positive, non-zero whole numbers.

See Also

ShapeComplexityIndex, boundary_shape_complexity

Function Signature

def shape_complexity_index_raster(self, raster: Raster) -> Raster: ...

shape_complexity_index_vector

This tool provides a measure of overall polygon shape complexity, or irregularity, for vector polygons. Several shape indices have been created to compare a polygon's shape to simple Euclidean shapes (e.g. circles, squares, etc.). One of the problems with this approach is that it inherently convolves the characteristics of polygon complexity and elongation. The Shape Complexity Index (SCI) was developed as a parameter for assessing the complexity of a polygon that is independent of its elongation.

SCI relates a polygon's shape to that of an encompassing convex hull. It is defined as:

SCI = 1 - A / Ah

Where A is the polygon's area and Ah is the area of the convex hull containing the polygon. Convex polygons, i.e. those that do not contain concavities or holes, have a value of 0. As the shape of the polygon becomes more complex, the SCI approaches 1. Note that polygon shape complexity also increases with a greater number of holes (i.e. islands), since holes have the effect of reducing the polygon's area.

The SCI values calculated for each vector polygon feature will be placed in the accompanying database file (.dbf) as a complexity field (COMPLEXITY).

See Also

shape_complexity_index_raster

Function Signature

def shape_complexity_index_vector(self, input: Vector) -> Vector: ...

shreve_stream_magnitude

This tool can be used to assign the Shreve stream magnitude to each link in a stream network. Stream ordering is often used in hydro-geomorphic and ecological studies to quantify the relative size and importance of a stream segment to the overall river system. There are several competing stream ordering schemes. Shreve stream magnitude is equal to the number of headwater links upstream of each link. Headwater stream links are assigned a magnitude of one.

The user must input a streams raster image (streams_raster) and D8 pointer (flow direction) image (d8_pntr). Stream cells are designated in the streams raster as all positive, nonzero values. Thus all non-stream or background grid cells are commonly assigned either zeros or NoData values. The pointer image is used to traverse the stream network and should only be created using the D8 algorithm. Background cells will be assigned the NoData value in the output image, unless the user specifies zero_background=True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the user should specify esri_pntr=True.

Reference

Shreve, R. L. (1966). Statistical law of stream numbers. The Journal of Geology, 74(1), 17-37.

See Also

horton_stream_order, hack_stream_order, strahler_stream_order, topological_stream_order

Function Signature

def shreve_stream_magnitude(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...
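
Example

A minimal sketch, assuming that D8 pointer and streams rasters have already been created (e.g. using d8_pointer and extract_streams) and saved to file:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

d8_pntr = wbe.read_raster('d8_pointer.tif')
streams = wbe.read_raster('streams.tif')
magnitude = wbe.shreve_stream_magnitude(d8_pntr, streams, zero_background=True)
wbe.write_raster(magnitude, 'shreve_magnitude.tif')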

sigmoidal_contrast_stretch

This tool performs a sigmoidal stretch on a raster image. This is a transformation where the input image value for a grid cell (zin) is transformed to an output value zout such that:

zout = ((1.0 / (1.0 + exp(gain × (cutoff − z))) − a) / b) × num_tones

where,

z = (zin − MIN) / RANGE,

a = 1.0 / (1.0 + exp(gain × cutoff)),

b = 1.0 / (1.0 + exp(gain × (cutoff − 1.0))) − 1.0 / (1.0 + exp(gain × cutoff)),

MIN and RANGE are the minimum value and data range in the input image respectively and gain and cutoff are user specified parameters (gain, cutoff).

Like all of WhiteboxTools's contrast enhancement tools, this operation will work on either greyscale or RGB input images.

See Also

piecewise_contrast_stretch, gaussian_contrast_stretch, histogram_equalization, min_max_contrast_stretch, percentage_contrast_stretch, standard_deviation_contrast_stretch

Function Signature

def sigmoidal_contrast_stretch(self, raster: Raster, cutoff: float = 0.0, gain: float = 1.0, num_tones: int = 256) -> Raster: ...

singlepart_to_multipart

This tool can be used to convert a vector file containing single-part features into a vector containing multi-part features. The user has the option to group features based on an ID field (field_name), which is a categorical field within the vector's attribute table. The ID field should either be of String (text) or Integer type. Fields containing decimal values are not good candidates for the ID field. If no field name is specified, all features will be grouped together into one large multi-part vector.

This tool works for vectors containing either point, line, or polygon features. Since vectors of a POINT VectorGeometryType cannot represent multi-part features, the VectorGeometryType of the output file will be modified to a MULTIPOINT VectorGeometryType if the input file is of a POINT VectorGeometryType. If the input vector is of a POLYGON VectorGeometryType, the user can optionally set the algorithm to search for polygons that should be represented as hole parts. In the case of grouping based on an ID Field, hole parts are polygon features contained within larger polygons of the same ID Field value. Please note that searching for polygon holes may significantly increase processing time for larger polygon coverages.

See Also

MultiPartToSinglePart

Function Signature

def singlepart_to_multipart(self, input: Vector, field_name: str) -> Vector: ...

sink

This tool identifies each sink (i.e. closed topographic depression) in an input raster digital elevation model (DEM) (dem). A sink, or depression, is a bowl-like landscape feature, which is characterized by interior drainage and groundwater recharge. The tool operates by differencing a filled DEM, using the same depression filling method as fill_depressions, and the original surface model; grid cells that differ from the original surface are grouped into contiguous sink features, each of which is assigned a unique identifier in the output raster.

In addition to the input DEM (dem), the user must specify whether the background value (i.e. the value assigned to grid cells that are not contained within sinks) should be set to 0.0 (zero_background). Without this optional parameter specified, the tool will use the NoData value as the background value.

Reference

Antonić, O., Hatic, D., & Pernar, R. (2001). DEM-based depth in sink as an environmental estimator. Ecological Modelling, 138(1-3), 247-254.

See Also

fill_depressions

Function Signature

def sink(self, dem: Raster, zero_background: bool = False) -> Raster: ...

slope

This tool calculates slope gradient (i.e. slope steepness in degrees, radians, or percent) for each grid cell in an input digital elevation model (DEM). The user must input a DEM (dem). The Z conversion factor is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor.

The tool uses Horn's (1981) 3rd-order finite difference method to estimate slope. Given the following clock-type grid cell numbering scheme (Gallant and Wilson, 2000),

| 7 | 8 | 1 |
| 6 | 9 | 2 |
| 5 | 4 | 3 |

slope = arctan((fx^2 + fy^2)^0.5)

where,

fx = (z3 − z5 + 2(z2 − z6) + z1 − z7) / (8 × Δx)

and,

fy = (z7 − z5 + 2(z8 − z4) + z1 − z3) / (8 × Δy)

Δx and Δy are the grid resolutions in the x and y direction respectively

Reference

Gallant, J. C., and J. P. Wilson, 2000, Primary topographic attributes, in Terrain Analysis: Principles and Applications, edited by J. P. Wilson and J. C. Gallant pp. 51-86, John Wiley, Hoboken, N.J.

See Also

aspect, plan_curvature, profile_curvature

Function Signature

def slope(self, dem: Raster, units: str = "degrees", z_factor: float = 1.0) -> Raster: ...
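
Example

A brief sketch, assuming a DEM named 'dem.tif' whose horizontal and vertical units match (so the default z_factor of 1.0 applies):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('dem.tif')
slope_raster = wbe.slope(dem, units='degrees')
wbe.write_raster(slope_raster, 'slope.tif')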

slope_vs_elev_plot

This tool can be used to create a slope versus average elevation plot for one or more digital elevation models (DEMs). Similar to a hypsometric analysis (hypsometric_analysis), the slope-elevation relation can reveal the basic topographic character of a site. The output of this analysis is an HTML document (output_html_file) that contains the slope-elevation chart. The tool can plot multiple slope-elevation analyses on the same chart by specifying multiple input DEM files (dem_rasters). Each input DEM can have an optional watershed in which the slope-elevation analysis is confined by specifying the optional watershed rasters (watershed_rasters). If multiple input DEMs are used, and a watershed is used to confine the analysis to a sub-area, there must be the same number of input raster watershed files as input DEM files. The order of the DEM and watershed files must be the same (i.e. the first DEM file must correspond to the first watershed file, the second DEM file to the second watershed file, etc.). Each watershed file may contain one or more watersheds, designated by unique identifiers.

See Also

hypsometric_analysis, slope_vs_aspect_plot

Function Signature

def slope_vs_elev_plot(self, dem_rasters: List[Raster], output_html_file: str, watershed_rasters: List[Raster]) -> None: ...

smooth_vectors

This tool smooths a vector coverage of either a POLYLINE or POLYGON base VectorGeometryType. The algorithm uses a simple moving average method for smoothing, where the size of the averaging window is specified by the user. The default filter size is 3 and can be any odd integer larger than or equal to 3. The larger the averaging window, the greater the degree of line smoothing.

Function Signature

def smooth_vectors(self, input: Vector, filter_size: int = 3) -> Vector: ...

snap_pour_points

This tool can be used to move the location of vector pour points (i.e. outlet points used in watershedding operations; pour_pts) to the position coinciding with the highest flow accumulation value (flow_accum) within a specified maximum snap distance (snap_dist). The pour points file must be of a POINT VectorGeometryType and the flow accumulation raster is typically derived using one of the flow accumulation functions. The snap distance is measured in the x-y units of the input data. Snapping pour points in this way helps ensure that digitized outlet locations coincide with the digitally derived stream network before they are used in a watershedding operation.

See Also

watershed

Function Signature

def snap_pour_points(self, pour_pts: Vector, flow_accum: Raster, snap_dist: float = 0.0) -> Vector: ...
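
Example

A minimal sketch, assuming an outlets Shapefile and a pre-computed flow accumulation raster; the 50 map-unit snap distance is illustrative only:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

outlets = wbe.read_vector('outlets.shp')
flow_accum = wbe.read_raster('flow_accum.tif')
snapped = wbe.snap_pour_points(outlets, flow_accum, snap_dist=50.0)
wbe.write_vector(snapped, 'outlets_snapped.shp')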

sobel_filter

This tool performs a 3 × 3 or 5 × 5 Sobel edge-detection filter on a raster image. The Sobel filter is similar to the prewitt_filter, in that it identifies areas of high slope in the input image through the calculation of slopes in the x and y directions. The Sobel edge-detection filter, however, gives more weight to nearer cell values within the moving window, or kernel. For example, a 3 × 3 Sobel filter uses the following schemes to calculate x and y slopes:

X-direction slope

| -1 | 0 | 1 |
| -2 | 0 | 2 |
| -1 | 0 | 1 |

Y-direction slope

| 1  | 2  | 1  |
| 0  | 0  | 0  |
| -1 | -2 | -1 |

Each grid cell in the output image is assigned the square-root of the squared sum of the x and y slopes.

The user may specify the filter variant (variant), either '3x3' (the default) or '5x5'. The user may also optionally clip the output image distribution tails by a specified amount (clip_tails, e.g. 1%).

See Also

prewitt_filter

Function Signature

def sobel_filter(self, raster: Raster, variant: str = "3x3", clip_tails: float = 0.0) -> Raster: ...
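
Example

A brief sketch, assuming a greyscale image named 'image.tif'; 1% of the distribution tails are clipped for display purposes:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

image = wbe.read_raster('image.tif')
edges = wbe.sobel_filter(image, variant='5x5', clip_tails=1.0)
wbe.write_raster(edges, 'edges.tif')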

spherical_std_dev_of_normals

This tool can be used to calculate the spherical standard deviation of the distribution of surface normals for an input digital elevation model (DEM; dem). This is a measure of the angular dispersion of the surface normal vectors within a local neighbourhood of a specified size (filter_size). spherical_std_dev_of_normals is therefore a measure of surface shape complexity, texture, and roughness. The spherical standard deviation (s) is defined as:

s = √[-2ln(R / N)] × 180 / π

where R is the resultant vector length and N is the number of unit normal vectors within the local neighbourhood. s is measured in degrees and is zero for simple planes and increases infinitely with increasing surface complexity or roughness. Note that this formulation of the spherical standard deviation assumes an underlying wrapped normal distribution.

The local neighbourhood size (filter_size) must be any odd integer equal to or greater than three. Grohmann et al. (2010) found that vector dispersion, a related measure of angular dispersion, increases monotonically with scale. This is the result of the angular dispersion measure integrating (accumulating) all of the surface variance of smaller scales up to the test scale. A more interesting scale relation can therefore be estimated by isolating the amount of surface complexity associated with specific scale ranges. That is, at large spatial scales, s should reflect the texture of large-scale landforms rather than the accumulated complexity at all smaller scales, including microtopographic roughness. As such, this tool normalizes the surface complexity of scales that are smaller than the filter size by applying Gaussian blur (with a standard deviation of one-third the filter size) to the DEM prior to calculating R. In this way, the resulting distribution is able to isolate and highlight the surface shape complexity associated with landscape features of a similar scale to that of the filter size.

This tool makes extensive use of integral images (i.e. summed-area tables) and parallel processing to ensure computational efficiency. It may, however, require substantial memory resources when applied to larger DEMs.

References

Grohmann, C. H., Smith, M. J., & Riccomini, C. (2010). Multiscale analysis of topographic surface roughness in the Midland Valley, Scotland. IEEE Transactions on Geoscience and Remote Sensing, 49(4), 1200-1213.

Hodgson, M. E., and Gaile, G. L. (1999). A cartographic modeling approach for surface orientation-related applications. Photogrammetric Engineering and Remote Sensing, 65(1), 85-95.

Lindsay J. B., Newman* D. R., Francioni, A. 2019. Scale-optimized surface roughness for topographic analysis. Geosciences, 9(7) 322. DOI: 10.3390/geosciences9070322.

See Also

circular_variance_of_aspect, multiscale_roughness, edge_density, surface_area_ratio, ruggedness_index

Function Signature

def spherical_std_dev_of_normals(self, dem: Raster, filter_size: int = 11) -> Raster: ...

split_colour_composite

This tool can be used to split a red-green-blue (RGB) colour-composite image into three separate bands of multi-spectral imagery. The user must specify the input composite image (composite_image); the tool returns the red, green, and blue band rasters as a tuple.

See Also

create_colour_composite

Function Signature

def split_colour_composite(self, composite_image: Raster) -> Tuple[Raster, Raster, Raster]: ...
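
Example

A short sketch showing the tuple return value; the composite file name is hypothetical:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

composite = wbe.read_raster('rgb_composite.tif')
red, green, blue = wbe.split_colour_composite(composite)
wbe.write_raster(red, 'red_band.tif')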

split_vector_lines

This tool can be used to divide longer vector lines (input) into segments of a maximum specified length (segment_length).

See Also

assess_route

Function Signature

def split_vector_lines(self, input: Vector, segment_length: float) -> Vector: ...

split_with_lines

This tool splits the lines or polygons in one layer using the lines in another layer to define the breaking points. Intersection points between geometries in both layers are considered as split points. The input layer (input) can be of either POLYLINE or POLYGON VectorGeometryType and the output file will share this geometry type. The user must also specify a split layer (split_vector), of POLYLINE VectorGeometryType, used to bisect the input geometries.

Each split geometry's attribute record will contain FID and PARENT_FID values and all of the attributes (excluding FID's) of the input layer.

See Also

MergeLineSegments

Function Signature

def split_with_lines(self, input: Vector, split_vector: Vector) -> Vector: ...

standard_deviation_contrast_stretch

This tool performs a standard deviation contrast stretch on a raster image. This operation maps each grid cell value in the input raster image (zin) onto a new scale that ranges from a lower-tail clip value (min_val) to the upper-tail clip value (max_val), with the user-specified number of tonal values (num_tones), such that:

zout = ((zin − min_val) / (max_val − min_val)) × num_tones

where zout is the output value. The values of min_val and max_val are determined based on the image mean and standard deviation. Specifically, the user must specify the number of standard deviations (clip) to be used in determining the min and max clip values. The tool will then calculate the input image mean and standard deviation and estimate the clip values from these statistics.

This is the same kind of stretch that is used to display raster type data on the fly in many GIS software packages.

See Also

piecewise_contrast_stretch, gaussian_contrast_stretch, histogram_equalization, min_max_contrast_stretch, percentage_contrast_stretch, sigmoidal_contrast_stretch

Function Signature

def standard_deviation_contrast_stretch(self, raster: Raster, clip: float = 2.0, num_tones: int = 256) -> Raster: ...

standard_deviation_filter

This tool performs a standard deviation filter on an input image (raster). A standard deviation filter assigns to each cell in the output grid the standard deviation, a measure of dispersion, of the values contained within a moving window centred on each grid cell.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

range_filter, total_filter

Function Signature

def standard_deviation_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

standard_deviation_of_slope

Calculates the standard deviation of slope from an input DEM, a metric of roughness described by Grohmann et al. (2011).

Function Signature

def standard_deviation_of_slope(self, dem: Raster, filter_size: int = 11, z_factor: float = 1.0) -> Raster: ...

standard_deviation_overlay

This tool can be used to find the standard deviation of the values in each raster cell from a set of input rasters (input_rasters). NoData values in any of the input images will result in a NoData pixel in the output image.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

min_overlay, max_overlay

Function Signature

def standard_deviation_overlay(self, input_rasters: List[Raster]) -> Raster: ...

stochastic_depression_analysis

This tool performs a stochastic analysis of depressions within a DEM, calculating the probability of each cell belonging to a depression. This land-surface parameter (pdep) has been widely applied in wetland and bottom-land mapping applications.

This tool differs from the original Whitebox GAT tool in a few significant ways:

  1. The Whitebox GAT tool took an error histogram as an input. In practice people found it difficult to create this input. Usually they just generated a normal distribution in a spreadsheet using information about the DEM root-mean-square-error (RMSE). As such, this tool takes an RMSE input and generates the histogram internally. This is more convenient for most applications but loses the flexibility of specifying the error distribution more completely.

  2. The Whitebox GAT tool generated the error fields using the turning bands method. This tool generates a random Gaussian error field with no spatial autocorrelation and then applies local spatial averaging using a Gaussian filter (the size of which depends on the error autocorrelation length input) to increase the level of autocorrelation. We use the Fast Almost Gaussian Filter of Peter Kovesi (2010), which uses five repeat passes of a mean filter, based on an integral image. This filter method is highly efficient. This results in a significant performance increase compared with the original tool.

  3. Parts of the tool's workflow utilize parallel processing. However, the depression filling operation, which is the most time-consuming part of the workflow, is not parallelized.

In addition to the input DEM (dem), the user must specify the nature of the error model, including the root-mean-square error (rmse) and the error field correlation length (range, in map units). These parameters determine the statistical frequency distribution and spatial characteristics of the modelled error fields added to the DEM in each iteration of the simulation. The user must also specify the number of iterations (iterations). A larger number of iterations will produce a smoother pdep raster.

This tool creates several temporary rasters in memory and, as a result, is very memory hungry. This will necessarily limit the size of DEMs that can be processed on more memory-constrained systems. As a rough guide for usage, the computer system will need 6-10 times more memory than the file size of the DEM. If your computer possesses insufficient memory, you may consider splitting the input DEM apart into smaller tiles.

For a video demonstrating the application of the stochastic_depression_analysis tool, see this YouTube video.

Reference

Lindsay, J. B., & Creed, I. F. (2005). Sensitivity of digital landscapes to artifact depressions in remotely-sensed DEMs. Photogrammetric Engineering & Remote Sensing, 71(9), 1029-1036.

See Also

impoundment_size_index, fast_almost_gaussian_filter

Function Signature

def stochastic_depression_analysis(self, dem: Raster, rmse: float, range: float, iterations: int = 100) -> Raster: ...
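
Example

A minimal sketch, assuming a DEM named 'dem.tif'; the RMSE, correlation length, and iteration values below are illustrative only and should reflect your own error model:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('dem.tif')
pdep = wbe.stochastic_depression_analysis(dem, rmse=0.15, range=25.0, iterations=100)
wbe.write_raster(pdep, 'pdep.tif')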

strahler_order_basins

This tool will identify the catchment areas of each Horton-Strahler stream order link in a user-specified stream network (streams), i.e. the network's Strahler basins. The tool effectively performs a Horton-Strahler stream ordering operation (horton_stream_order) followed by a watershed operation. The user must specify the name of a flow pointer (flow direction) raster (d8_pointer) and a streams raster (streams). The flow pointer raster should be generated using the d8_pointer algorithm. This will require a depressionless DEM, processed using either the breach_depressions_least_cost or fill_depressions tool.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the esri_pntr parameter must be specified.

NoData values in the input flow pointer raster are assigned NoData values in the output image.

See Also

horton_stream_order, watershed, d8_pointer, breach_depressions_least_cost, fill_depressions

Function Signature

def strahler_order_basins(self, d8_pointer: Raster, streams: Raster, esri_pntr: bool = False) -> Raster: ...

strahler_stream_order

This tool can be used to assign the Strahler stream order to each link in a stream network. Stream ordering is often used in hydro-geomorphic and ecological studies to quantify the relative size and importance of a stream segment to the overall river system. There are several competing stream ordering schemes. Based on this common stream numbering system, headwater stream links are assigned an order of one. Stream order only increases downstream when two links of equal order join, otherwise the downstream link is assigned the larger of the two link orders.

Strahler order and Horton order are similar approaches to assigning stream network hierarchy. Horton stream order essentially starts with the Strahler order scheme, but subsequently replaces each of the assigned stream order values along the main trunk of the network with the order value of the outlet. The main channel is not treated differently compared with other tributaries in the Strahler ordering scheme.

The user must input a streams raster image (streams_raster) and D8 pointer (flow direction) image (d8_pntr). Stream cells are designated in the streams image as all positive, nonzero values. Thus all non-stream or background grid cells are commonly assigned either zeros or NoData values. The pointer image is used to traverse the stream network and should only be created using the D8 algorithm (d8_pointer). Background cells will be assigned the NoData value in the output image, unless the user specifies zero_background=True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the user should specify esri_pntr=True.

Reference

Strahler, A. N. (1957). Quantitative analysis of watershed geomorphology. Eos, Transactions American Geophysical Union, 38(6), 913-920.

See Also

horton_stream_order, hack_stream_order, shreve_stream_magnitude, topological_stream_order

Function Signature

def strahler_stream_order(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

stream_link_class

This tool identifies all interior and exterior links, and source, link, and sink nodes in an input stream network (streams_raster). The input streams raster is used to designate which grid cells contain a stream and the pointer image (d8_pntr) is used to traverse the stream network. Stream cells are designated in the streams image as all values greater than zero. Thus, all non-stream or background grid cells are commonly assigned either zeros or NoData values. Background cells will be assigned the NoData value in the output image, unless zero_background=True, in which case non-stream cells will be assigned zero values in the output.

Each feature is assigned the following identifier in the output image:

| Value | Stream Type |
| 1 | Exterior Link |
| 2 | Interior Link |
| 3 | Source Node (head water) |
| 4 | Link Node |
| 5 | Sink Node |

The user must input an input stream raster (streams_raster) and a pointer (flow direction) raster (d8_pntr). The flow pointer and streams rasters should be generated using the d8_pointer algorithm. This will require a depressionless DEM, processed using either the breach_depressions_least_cost or fill_depressions tools.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, set esri_pntr=True.

See Also

stream_link_identifier

Function Signature

def stream_link_class(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

stream_link_identifier

This tool can be used to assign each link in a stream network a unique numeric identifier. This grid is used by a number of other stream network analysis tools.

The input streams raster (streams_raster) is used to designate which grid cells contain a stream and the pointer image is used to traverse the stream network. Stream cells are designated in the streams image as all values greater than zero. Thus, all non-stream or background grid cells are commonly assigned either zeros or NoData values. Background cells will be assigned the NoData value in the output image, unless the user specifies zero_background=True, in which case non-stream cells will be assigned zero values in the output.

The user must specify the name of a flow pointer (flow direction) raster (d8_pntr) and a streams raster (streams_raster). The flow pointer and streams rasters should be generated using the d8_pointer algorithm. This will require a depressionless DEM, processed using either the breach_depressions_least_cost or fill_depressions tool.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, set esri_pntr=True.

See Also

d8_pointer, tributary_identifier, breach_depressions_least_cost, fill_depressions

Function Signature

def stream_link_identifier(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

stream_link_length

This tool can be used to measure the length of each link in a stream network. The user must input a stream link ID raster (streams_id_raster), created using the stream_link_identifier tool, and a D8 pointer raster (d8_pointer). The flow pointer raster is used to traverse the stream network and should only be created using the d8_pointer algorithm. Stream cells are designated in the stream link ID raster as all non-zero, positive values. Background cells will be assigned the NoData value in the output image, unless zero_background=True, in which case non-stream cells will be assigned zero values in the output.

See Also

stream_link_identifier, d8_pointer, stream_link_slope

Function Signature

def stream_link_length(self, d8_pointer: Raster, streams_id_raster: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

stream_link_slope

This tool can be used to measure the average slope gradient, in degrees, of each link in a raster stream network. To estimate the slope of individual grid cells in a raster stream network, use the stream_slope_continuous tool instead. The user must input a stream link identifier raster image (streams_id_raster), a D8 pointer image (d8_pointer), and a digital elevation model (dem). The pointer image is used to traverse the stream network and must only be created using the D8 algorithm (d8_pointer). Stream cells are designated in the streams image as all values greater than zero. Thus, all non-stream or background grid cells are commonly assigned either zeros or NoData values. Background cells will be assigned the NoData value in the output image, unless zero_background=True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, set esri_pointer=True.

See Also

stream_slope_continuous, d8_pointer

Function Signature

def stream_link_slope(self, d8_pointer: Raster, streams_id_raster: Raster, dem: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

stream_slope_continuous

This tool can be used to measure the slope gradient, in degrees, of each grid cell in a raster stream network. To estimate the average slope for each link in a stream network, use the stream_link_slope tool instead. The user must input a stream raster image (streams_raster), a D8 pointer image (d8_pointer), and a digital elevation model (dem). The pointer image is used to traverse the stream network and must only be created using the D8 algorithm (d8_pointer). Stream cells are designated in the streams image as all values greater than zero. Thus, all non-stream or background grid cells are commonly assigned either zeros or NoData values. Background cells will be assigned the NoData value in the output image, unless zero_background=True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, set esri_pointer=True.

See Also

stream_link_slope, d8_pointer

Function Signature

def stream_slope_continuous(self, d8_pointer: Raster, streams_raster: Raster, dem: Raster, esri_pointer: bool = False, zero_background: bool = False) -> Raster: ...

subbasins

This tool will identify the catchment areas to each link in a user-specified stream network, i.e. the network's sub-basins. subbasins effectively performs a stream link ID operation (stream_link_identifier) followed by a watershed operation. The user must specify a flow pointer (flow direction) raster (d8_pntr) and a streams raster (streams). The flow pointer and streams rasters should be generated using the d8_pointer algorithm. This will require a depressionless DEM, processed using either the breach_depressions_least_cost or fill_depressions tool.

hillslopes are conceptually similar to sub-basins, except that sub-basins do not distinguish between the right-bank and left-bank catchment areas of stream links. The Sub-basins tool simply assigns a unique identifier to each stream link in a stream network.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, the esri_pntr parameter must be specified.

NoData values in the input flow pointer raster are assigned NoData values in the output image.

See Also

stream_link_identifier, watershed, hillslopes, d8_pointer, breach_depressions_least_cost, fill_depressions

Function Signature

def subbasins(self, d8_pntr: Raster, streams: Raster, esri_pntr: bool = False) -> Raster: ...
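A minimal usage sketch follows; the file names are placeholders and wbe is assumed to be an existing WbEnvironment object, as in the earlier example:

d8_pntr = wbe.d8_pointer(wbe.fill_depressions(wbe.read_raster('dem.tif')))
streams = wbe.read_raster('streams.tif')
sb = wbe.subbasins(d8_pntr, streams)
wbe.write_raster(sb, 'subbasins.tif')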

sum_overlay

This tool calculates the sum for each grid cell from a group of raster images (input_rasters). NoData values in any of the input images will result in a NoData pixel in the output image.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

weighted_sum, multiply_overlay

Function Signature

def sum_overlay(self, input_rasters: List[Raster]) -> Raster: ...
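For example, assuming an existing WbEnvironment object named wbe and placeholder file names:

rasters = [wbe.read_raster(f) for f in ('cost1.tif', 'cost2.tif', 'cost3.tif')]
total = wbe.sum_overlay(rasters)
wbe.write_raster(total, 'sum.tif')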

surface_area_ratio

This tool calculates the ratio between the surface area and planar area of grid cells within digital elevation models (DEMs). The tool uses the method of Jenness (2004) to estimate the surface area of a DEM grid cell based on the elevations contained within the 3 x 3 neighbourhood surrounding each cell. The surface area ratio has a lower bound of 1.0 for perfectly flat grid cells and is greater than 1.0 for other conditions. In particular, surface area ratio is a measure of neighbourhood surface shape complexity (texture) and elevation variability (local slope).

Reference

Jenness, J. S. (2004). Calculating landscape surface area from digital elevation models. Wildlife Society Bulletin, 32(3), 829-839.

See Also

ruggedness_index, multiscale_roughness, circular_variance_of_aspect, edge_density

Function Signature

def surface_area_ratio(self, dem: Raster) -> Raster: ...

symmetrical_difference

This tool will remove all the overlapping features, or parts of overlapping features, between input and overlay vector files, outputting only the features that occur in one of the two inputs but not both. The Symmetrical Difference is related to the Boolean exclusive-or (XOR) operation in set theory and is one of the common vector overlay operations in GIS. The user must specify the input and overlay vector files. The tool operates on vector points, lines, or polygons, but both the input and overlay files must contain the same VectorGeometryType.

The Symmetrical Difference can also be derived using a combination of other vector overlay operations, as either (A union B) difference (A intersect B), or (A difference B) union (B difference A).

The attributes of the two input vectors will be merged in the output attribute table. Fields that are duplicated between the inputs will share a single attribute in the output. Fields that only exist in one of the two inputs will be populated by null in the output table. Multipoint VectorGeometryTypes, however, will simply contain a single output feature identifier (FID) attribute. Also, note that depending on the VectorGeometryType (polylines and polygons), Measure and Z ShapeDimension data will not be transferred to the output geometries. If the input attribute table contains fields that measure the geometric properties of their associated features (e.g. length or area), these fields will not be updated to reflect changes in geometry shape and size resulting from the overlay operation.

See Also

intersect, difference, union, clip, erase

Function Signature

def symmetrical_difference(self, input: Vector, overlay: Vector, snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

tangential_curvature

This tool calculates the tangential curvature, which is the curvature of an inclined plane perpendicular to both the direction of flow and the surface (Gallant and Wilson, 2000). Curvature is a second derivative of the topographic surface defined by a digital elevation model (DEM). The user must input a DEM (dem). The output reports curvature in degrees multiplied by 100 for easier interpretation, as curvature values are often very small. The Z Conversion Factor (z_factor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z Conversion Factor. If the DEM is in the geographic coordinate system (latitude and longitude), with XY units measured in degrees, an appropriate Z Conversion Factor is calculated internally based on site latitude.

Reference

Gallant, J. C., and J. P. Wilson, 2000, Primary topographic attributes, in Terrain Analysis: Principles and Applications, edited by J. P. Wilson and J. C. Gallant pp. 51-86, John Wiley, Hoboken, N.J.

See Also

plan_curvature, profile_curvature, total_curvature, slope, aspect

Function Signature

def tangential_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

thicken_raster_line

This image processing tool can be used to thicken single-cell wide lines within a raster file along diagonal sections of the lines. Because of a limitation of the raster data format, single-cell wide raster lines can be crossed along diagonal sections without passing through a line grid cell. This causes problems for various raster analysis functions for which lines are intended to be barriers. This tool will thicken raster lines, such that it is impossible to cross a line without passing through a line grid cell. While this can also be achieved using a maximum filter, unlike the filter approach, this tool will result in the smallest possible thickening to achieve the desired result.

All non-zero, positive values are considered to be foreground pixels while all zero valued cells or NoData cells are considered background pixels.

Note: Unlike other filter-based operations in WhiteboxTools, this algorithm can't easily be parallelized because the output raster must be read and written to during the same loop.

See Also

line_thinning

Function Signature

def thicken_raster_line(self, raster: Raster) -> Raster: ...

time_in_daylight

This tool calculates the proportion of time a location is within daylight. That is, it calculates the proportion of time, during a user-defined time frame, that a grid cell in an input digital elevation model (dem) is outside of an area of shadow cast by a local object. The input DEM should truly be a digital surface model (DSM) that contains significant off-terrain objects. Such a model, for example, could be created using the first-return points of a LiDAR data set, or using the lidar_digital_surface_model tool.

The tool operates by calculating a solar almanac, which estimates the sun's position for the location, in latitude and longitude coordinates (latitude, longitude), of the input DSM. The algorithm then calculates horizon angle (see horizon_angle) rasters from the DSM based on the user-specified azimuth fraction (az_fraction). For example, if an azimuth fraction of 15 degrees is specified, horizon angle rasters could be calculated for the solar azimuths 0, 15, 30, 45... In reality, horizon angle rasters are only calculated for azimuths for which the sun is above the horizon for some time during the tested time period. A horizon angle raster evaluates the vertical angle between each grid cell in a DSM and a distant obstacle (e.g. a mountain ridge, building, tree, etc.) that blocks the view along a specified direction. In calculating horizon angle, the user must specify the maximum search distance (max_dist) beyond which the query for higher, more distant objects will cease. This parameter strongly impacts the performance of the tool, with larger values resulting in significantly longer run-times. Users are advised to set the max_dist based on the maximum shadow length expected in an area. For example, in a relatively flat urban landscape, the tallest building will likely determine the longest shadow lengths. All grid cells for which the calculated solar positions throughout the time frame are higher than the cell's horizon angle are deemed to be illuminated during the time the sun is in the corresponding azimuth fraction.

By default, the tool calculates time-in-daylight for a time-frame spanning an entire year. That is, the solar almanac is calculated for each hour, at 10-second intervals, and for each day of the year. Users may alternatively restrict the time of year over which time-in-daylight is calculated by specifying a starting day (1-365; start_day) and ending day (1-365; end_day). Similarly, by specifying start time (start_time) and end time (end_time) parameters, the user is able to measure time-in-daylight for specific ranges of the day (e.g. for the morning or afternoon hours). These time parameters must be specified in 24-hour time (HH:MM:SS), e.g. 15:30:00. sunrise and sunset are also acceptable inputs for the start time and end time respectively. The timing of sunrise and sunset on each day in the tested time-frame will be determined using the solar almanac.

See Also

lidar_digital_surface_model, horizon_angle

Function Signature

def time_in_daylight(self, dem: Raster, az_fraction: float = 5.0, max_dist: float = float('inf'), latitude: float = 0.0, longitude: float = 0.0, utc_offset_str: str = "UTC+00:00", start_day: int = 1, end_day: int = 365, start_time: str = "sunrise", end_time: str = "sunset") -> Raster: ...
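As a usage sketch, the following call restricts the analysis to a summer period and daytime hours; all file names, coordinates, and parameter values are placeholders, and wbe is assumed to be an existing WbEnvironment object:

dsm = wbe.read_raster('dsm.tif')  # first-return (digital surface) model
tid = wbe.time_in_daylight(dsm, az_fraction=15.0, max_dist=100.0,
    latitude=43.55, longitude=-80.25, utc_offset_str="UTC-05:00",
    start_day=152, end_day=244, start_time="08:00:00", end_time="18:00:00")
wbe.write_raster(tid, 'time_in_daylight.tif')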

tin_interpolation

Creates a raster grid based on a triangular irregular network (TIN) fitted to vector points and linear interpolation within each triangular-shaped plane. The TIN creation algorithm is based on Delaunay triangulation.

The user must specify the attribute field containing point values (field_name). Alternatively, if the input Shapefile contains z-values, the interpolation may be based on these values (use_z). Either an output grid resolution (cell_size) must be specified or alternatively an existing base raster (base_raster) can be used to determine the output raster's resolution and spatial extent. TIN interpolation generally produces a satisfactorily smooth surface within the region of data points but can produce spurious breaks in the surface outside of this region. Thus, it is recommended that the output surface be clipped to the convex hull of the input points.

See Also

lidar_tin_gridding, construct_vector_tin, natural_neighbour_interpolation

Function Signature

def tin_interpolation(self, points: Vector, field_name: str = "FID", use_z: bool = False, cell_size: float = 0.0, base_raster: Raster = None, max_triangle_edge_length: float = float('inf')) -> Raster: ...
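For example, assuming an existing WbEnvironment object named wbe, a hypothetical points file, and a hypothetical attribute field named ELEV:

points = wbe.read_vector('spot_heights.shp')
surface = wbe.tin_interpolation(points, field_name='ELEV', cell_size=10.0)
wbe.write_raster(surface, 'tin_surface.tif')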

tophat_transform

This tool performs either a white or black top-hat transform on an input image. A top-hat transform is a common digital image processing operation used for various tasks, such as feature extraction, background equalization, and image enhancement. The size of the rectangular structuring element used in the filtering can be specified using the filter_size_x and filter_size_y parameters.

There are two distinct types of top-hat transform: white and black. The white top-hat transform is defined as the difference between the input image and its opening by some structuring element. An opening operation is the dilation (maximum filter) of an erosion (minimum filter) image. The black top-hat transform, by comparison, is defined as the difference between the closing and the input image. The user specifies which of the two flavours of top-hat transform the tool should perform by setting the variant parameter to either 'white' or 'black'.

See Also

closing, opening, maximum_filter, minimum_filter

Function Signature

def tophat_transform(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11, variant: str = "white") -> Raster: ...

topographic_hachures

This tool can be used to create a vector contour coverage from an input raster surface model (dem), such as a digital elevation model (DEM). The user must specify the contour interval (contour_interval) and, optionally, the base contour value (base_contour). The degree to which contours are smoothed is controlled by the smoothing filter size parameter (filter_size). This value, which determines the size of a mean filter applied to the x-y position of vertices in each contour, should be an odd integer value, e.g. 3, 5, 7, 9, 11, etc. Larger values will result in smoother contour lines. The deflection tolerance parameter (deflection_tolerance) controls the amount of line generalization. That is, vertices in a contour line will be selectively removed from the line if they do not result in an angular deflection in the line's path of at least this threshold value. Increasing this value can significantly decrease the size of the output contour vector file, at the cost of generating straighter contour line segments.

See Also

contours_from_raster, raster_to_vector_polygons

Function Signature

def topographic_hachures(self, dem: Raster, contour_interval: float = 10.0, base_contour: float = 0.0, deflection_tolerance: float = 10.0, filter_size: int = 9, separation: float = 2.0, distmin: float = 0.5, distmax: float = 2.0, discretization: float = 0.5, turnmax: float = 45.0, slopemin: float = 0.5, depth: int = 16) -> Vector: ...

topological_stream_order

This tool can be used to assign the topological stream order to each link in a stream network. According to this stream numbering system, the link directly draining to the outlet is assigned an order of one. Each of the two tributaries draining to the order-one link are assigned an order of two, and so on until the most distant link from the catchment outlet has been assigned an order. The topological order can therefore be thought of as a measure of the topological distance of each link in the network to the catchment outlet and is likely to be related to travel time.

The user must input a streams raster image (streams_raster) and a D8 pointer image (d8_pntr). Stream cells are designated in the streams image as all positive, nonzero values. Thus, all non-stream or background grid cells are commonly assigned either zeros or NoData values. The pointer image is used to traverse the stream network and should only be created using the D8 algorithm. Background cells will be assigned the NoData value in the output image, unless zero_background=True, in which case non-stream cells will be assigned zero values in the output.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, set esri_pntr=True.

See Also

hack_stream_order, horton_stream_order, strahler_stream_order, shreve_stream_magnitude

Function Signature

def topological_stream_order(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

total_curvature

This tool calculates the total curvature, which measures the curvature of the topographic surface rather than the curvature of a line across the surface in some direction (Gallant and Wilson, 2000). Total curvature can be positive or negative, with zero curvature indicating that the surface is either flat or the convexity in one direction is balanced by the concavity in another direction, as would occur at a saddle point. Curvature is a second derivative of the topographic surface defined by a digital elevation model (DEM). The user must input a DEM (dem). The output reports curvature in degrees multiplied by 100 for easier interpretation, as curvature values are often very small. The Z Conversion Factor (z_factor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z Conversion Factor. If the DEM is in the geographic coordinate system (latitude and longitude), with XY units measured in degrees, an appropriate Z Conversion Factor is calculated internally based on site latitude.

Reference

Gallant, J. C., and J. P. Wilson, 2000, Primary topographic attributes, in Terrain Analysis: Principles and Applications, edited by J. P. Wilson and J. C. Gallant pp. 51-86, John Wiley, Hoboken, N.J.

See Also

plan_curvature, profile_curvature, tangential_curvature, slope, aspect

Function Signature

def total_curvature(self, dem: Raster, log_transform: bool = False, z_factor: float = 1.0) -> Raster: ...

total_filter

This tool performs a total filter on an input image. A total filter assigns to each cell in the output grid the total (sum) of all values in a moving window centred on each grid cell.

Neighbourhood size, or filter size, is specified in the x and y dimensions using the filter_size_x and filter_size_y parameters. These dimensions should be odd, positive integer values (e.g. 3, 5, 7, 9, etc.).

See Also

range_filter

Function Signature

def total_filter(self, raster: Raster, filter_size_x: int = 11, filter_size_y: int = 11) -> Raster: ...

trace_downslope_flowpaths

This tool can be used to mark the flowpath initiated from user-specified locations downslope and terminating at either the grid's edge or a grid cell with undefined flow direction. The user must input a D8 flow pointer grid (d8_pointer) and an input vector file indicating the location of one or more initiation points, i.e. 'seed points' (seed_points). The seed point file must be a vector of the POINT VectorGeometryType. Note that the flow pointer should be generated from a DEM that has been processed to remove all topographic depressions (see breach_depressions_least_cost and fill_depressions) and created using the D8 flow algorithm (d8_pointer).

See Also

d8_pointer, breach_depressions_least_cost, fill_depressions, downslope_flowpath_length, downslope_distance_to_stream

Function Signature

def trace_downslope_flowpaths(self, seed_points: Vector, d8_pointer: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...
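For example, assuming an existing WbEnvironment object named wbe and placeholder file names:

d8 = wbe.d8_pointer(wbe.fill_depressions(wbe.read_raster('dem.tif')))
seeds = wbe.read_vector('seed_points.shp')  # POINT geometry
paths = wbe.trace_downslope_flowpaths(seeds, d8, zero_background=True)
wbe.write_raster(paths, 'flowpaths.tif')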

travelling_salesman_problem

This tool finds approximate solutions to travelling salesman problems, the goal of which is to identify the shortest route connecting a set of locations. The tool uses an algorithm that applies a 2-opt heuristic and a 3-opt heuristic as a fall-back if the initial approach takes too long. The user must specify the input points vector (input) as well as the duration, in seconds, over which the algorithm is allowed to search for improved solutions (duration). The tool works in parallel to find more optimal solutions.

Function Signature

def travelling_salesman_problem(self, input: Vector, duration: int = 60) -> Vector: ...

trend_surface

This tool can be used to interpolate a trend surface from a raster image. The technique uses a polynomial, least-squares regression analysis. The user must specify the name of the input raster file. In addition, the user must specify the polynomial order (1 to 10) for the analysis. A first-order polynomial is a planar surface with no curvature. As the polynomial order is increased, greater flexibility is allowed in the fitted surface. Although polynomial orders as high as 10 are accepted, numerical instability in the analysis often creates artifacts in trend surfaces of orders greater than 5. The operation will produce an HTML report (output_html_file) on completion, in addition to the output raster image. The report will list each of the coefficient values and the r-square value. Note that the entire raster image must be able to fit into computer memory, limiting the use of this tool to relatively small rasters. The trend_surface_vector_points tool can be used instead if the input data is vector points contained in a shapefile.

Numerical stability is enhanced by transforming the x, y, z data by their minimum values before performing the regression analysis. These transform parameters are also reported in the output report.

Function Signature

def trend_surface(self, raster: Raster, output_html_file: str, polynomial_order: int = 1) -> Raster: ...

trend_surface_vector_points

This tool can be used to interpolate a trend surface from a vector points file. The technique uses a polynomial, least-squares regression analysis. The user must specify the name of the input shapefile, which must be of a 'Points' base VectorGeometryType, and select the attribute (field_name) in the shapefile's associated attribute table on which to base the trend surface analysis. The attribute must be numerical. In addition, the user must specify the polynomial order (1 to 10) for the analysis. A first-order polynomial is a planar surface with no curvature. As the polynomial order is increased, greater flexibility is allowed in the fitted surface. Although polynomial orders as high as 10 are accepted, numerical instability in the analysis often creates artifacts in trend surfaces of orders greater than 5. The operation will produce an HTML report (output_html_file) on completion, in addition to the output raster image. The report will list each of the coefficient values and the r-square value. The trend_surface tool can be used instead if the input data is a raster image.

Numerical stability is enhanced by transforming the x, y, z data by their minimum values before performing the regression analysis. These transform parameters are also reported in the output report.

Function Signature

def trend_surface_vector_points(self, input: Vector, cell_size: float, output_html_file: str, field_name: str = "FID", polynomial_order: int = 1) -> Raster: ...

tributary_identifier

This tool can be used to assign a unique identifier to each tributary in a stream network. A tributary is a section of a stream network extending from a channel head downstream to a confluence with a larger stream. Relative stream size is estimated using stream length as a surrogate. Tributaries therefore extend from channel heads downstream until a confluence is encountered in which the intersecting stream is longer, or an outlet cell is detected.

The input streams raster (streams_raster) is used to designate which grid cells contain a stream and the pointer image is used to traverse the stream network. Stream cells are designated in the streams image as all values greater than zero. Thus, all non-stream or background grid cells are commonly assigned either zeros or NoData values. Background cells will be assigned the NoData value in the output image, unless zero_background=True, in which case non-stream cells will be assigned zero values in the output.

The user must specify the name of a flow pointer (flow direction) raster (d8_pntr) and a streams raster (streams_raster). The flow pointer and streams rasters should be generated using the d8_pointer algorithm. This will require a depressionless DEM, processed using either the breach_depressions_least_cost or fill_depressions tool.

By default, the pointer raster is assumed to use the clockwise indexing method used by WhiteboxTools. If the pointer file contains ESRI flow direction values instead, set esri_pntr=True.

See Also

d8_pointer, stream_link_identifier, breach_depressions_least_cost, fill_depressions

Function Signature

def tributary_identifier(self, d8_pntr: Raster, streams_raster: Raster, esri_pntr: bool = False, zero_background: bool = False) -> Raster: ...

turning_bands_simulation

This tool can be used to create a random field using the turning bands algorithm. The user must specify the name of a base raster image (base) from which the output raster will derive its geographical information, dimensions (rows and columns), and other information. In addition, the range (range), in x-y units, must be specified. The range determines the correlation length of the resulting field. For a good description of how the algorithm works, see Carr (2002). The turning bands method creates a number of 1-D simulations (called bands) and fuses these together to create a 2-D error field. There is no natural stopping condition in this process, so the user must specify the number of bands to create (iterations). The default value of 1000 iterations is reasonable. The fewer iterations used, the more prevalent the 1-D simulations will be in the output error image, effectively creating artifacts. Run time increases with the number of iterations.

Turning bands simulation is a commonly applied technique in Monte Carlo style simulations of uncertainty. As such, it is frequently run many times during a simulation (often 1000s of times). When this is the case, algorithm performance and efficiency are key considerations. One alternative method to efficiently generate spatially autocorrelated random fields is to apply the fast_almost_gaussian_filter tool to the output of the random_field tool. This can be used to generate a random field with the desired spatial characteristics and frequency distribution. This is the alternative approach used by the stochastic_depression_analysis tool.

Reference

Carr, J. R. (2002). Data visualization in the geosciences. Upper Saddle River, NJ: Prentice Hall. pp. 267.

See Also

random_field, fast_almost_gaussian_filter, stochastic_depression_analysis

Function Signature

def turning_bands_simulation(self, base_raster: Raster = None, range: float = 1.0, iterations: int = 1000) -> Raster: ...
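As a brief usage sketch, assuming an existing WbEnvironment object named wbe and placeholder file names and parameter values:

base = wbe.read_raster('dem.tif')  # provides the output's extent and resolution
field = wbe.turning_bands_simulation(base, range=250.0, iterations=1000)
wbe.write_raster(field, 'random_field.tif')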

two_sample_ks_test

This tool will perform a two-sample Kolmogorov-Smirnov (K-S) test to evaluate whether a significant statistical difference exists between the frequency distributions of two rasters. The null hypothesis is that both samples come from a population with the same distribution. Note that this test evaluates the two input rasters for differences in their overall distribution shape, with no assumption of normality. If there is need to compare the per-pixel differences between two input rasters, a paired-samples test such as the paired_sample_t_test or the non-parametric wilcoxon_signed_rank_test should be used instead.

The user must specify the two input raster images (raster1 and raster2) and the output report HTML file (output_html_file). The test can be performed optionally on the entire image or on a random sub-sample of pixel values of a user-specified size (num_samples). In evaluating the significance of the test, it is important to keep in mind that given a sufficiently large sample, extremely small and non-notable differences can be found to be statistically significant. Furthermore, statistical significance says nothing about the practical significance of a difference.

See Also

KSTestForNormality, paired_sample_t_test, wilcoxon_signed_rank_test

Function Signature

def two_sample_ks_test(self, raster1: Raster, raster2: Raster, output_html_file: str, num_samples: int) -> None: ...
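For example, assuming an existing WbEnvironment object named wbe and placeholder file names; the function writes the HTML report to disk and returns nothing:

r1 = wbe.read_raster('image1.tif')
r2 = wbe.read_raster('image2.tif')
wbe.two_sample_ks_test(r1, r2, 'ks_report.html', num_samples=10000)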

union

This tool splits vector layers at their overlaps, creating a layer containing all the portions from both input and overlay layers. The Union is related to the Boolean OR operation in set theory and is one of the common vector overlay operations in GIS. The user must specify the input and overlay vector files. The tool operates on vector points, lines, or polygons, but both the input and overlay files must contain the same VectorGeometryType.

The attributes of the two input vectors will be merged in the output attribute table. Fields that are duplicated between the inputs will share a single attribute in the output. Fields that only exist in one of the two inputs will be populated by null in the output table. Multipoint VectorGeometryTypes, however, will simply contain a single output feature identifier (FID) attribute. Also, note that depending on the VectorGeometryType (polylines and polygons), Measure and Z ShapeDimension data will not be transferred to the output geometries. If the input attribute table contains fields that measure the geometric properties of their associated features (e.g. length or area), these fields will not be updated to reflect changes in geometry shape and size resulting from the overlay operation.

See Also

intersect, difference, symmetrical_difference, clip, erase

Function Signature

def union(self, input: Vector, overlay: Vector, snap_tolerance: float = 2.220446049250313e-16) -> Vector: ...

unnest_basins

In some applications it is necessary to relate a measured variable for a group of hydrometric stations (e.g. characteristics of flow timing and duration or water chemistry) to some characteristics of each outlet's catchment (e.g. mean slope, area of wetlands, etc.). When the group of outlets are nested, i.e. some stations are located downstream of others, then performing a watershed operation will result in inappropriate watershed delineation. In particular, the delineated watersheds of each nested outlet will not include the catchment areas of upstream outlets. This creates a serious problem for this type of application.

The unnest_basins tool can be used to perform a watershedding operation based on a group of specified pour points, i.e. outlets or points-of-interest, such that each complete watershed is delineated. The user must specify a D8 flow pointer (flow direction) raster (d8_pointer) and a vector pour point file (pour_points). Multiple numbered output rasters will be created, one for each nesting level. The flow pointer raster should be generated using the D8 algorithm.

Function Signature

def unnest_basins(self, d8_pointer: Raster, pour_points: Vector, esri_pntr: bool = False) -> List[Raster]: ...
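Because the function returns a list of rasters, one per nesting level, the outputs can be written in a loop. A minimal sketch, assuming an existing WbEnvironment object named wbe and placeholder file names:

d8 = wbe.d8_pointer(wbe.fill_depressions(wbe.read_raster('dem.tif')))
pour_pts = wbe.read_vector('gauging_stations.shp')
for i, basin_raster in enumerate(wbe.unnest_basins(d8, pour_pts)):
    wbe.write_raster(basin_raster, f'basins_level_{i + 1}.tif')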

unsharp_masking

Unsharp masking is an image edge-sharpening technique commonly applied in digital image processing. Admittedly, the name 'unsharp' seems somewhat counter-intuitive given the purpose of the filter, which is to enhance the definition of edge features within the input image. This name comes from the use of a blurred, or unsharpened, intermediate image (mask) in the process. The blurred image is combined with the positive (original) image, creating an image that exhibits enhanced feature definition. A caution is needed in that the output image, although clearer, may be a less accurate representation of the image's subject. The output may also contain more speckle than the input image.

In addition to the input image (raster), the user must specify the values of three parameters: the standard deviation distance (sigma), which is a measure of the filter size in pixels; the amount (amount), a percentage value that controls the magnitude of each overshoot at edges; and lastly, the threshold (threshold), which controls the minimal brightness change that will be sharpened. Pixels whose values differ from the filtered value by less than the threshold are unmodified in the output image.

unsharp_masking works with both greyscale and red-green-blue (RGB) colour images. RGB images are decomposed into intensity-hue-saturation (IHS) and the filter is applied to the intensity channel. Importantly, the intensity values range from 0-1, which is important when setting the threshold value for colour images. NoData values in the input image are ignored during processing.

See Also

gaussian_filter, high_pass_filter

Function Signature

def unsharp_masking(self, raster: Raster, sigma: float = 0.75, amount: float = 100.0, threshold: float = 0.0) -> Raster: ...

update_nodata_cells

This tool will assign the NoData valued cells in an input raster (input1) the values contained in the corresponding grid cells in a second input raster (input2). This operation is sometimes necessary because most other overlay operations exclude areas of NoData values from the analysis. This tool can be used when there is need to update the values of a raster within these missing data areas.

See Also

IsNodata

Function Signature

def update_nodata_cells(self, input1: Raster, input2: Raster) -> Raster: ...

upslope_depression_storage

This tool estimates the average upslope depression storage depth using the FD8 flow algorithm. The input DEM (dem) need not be hydrologically corrected; the tool will internally map depression storage and resolve flowpaths using depression filling. This input elevation model should be of a fine resolution (< 2 m), and is ideally derived using LiDAR. The tool calculates the total upslope depth of depression storage, which is divided by the number of upslope cells in the final step of the process, yielding the average upslope depression depth. Roughened surfaces tend to have higher values compared with smoothed surfaces. Values, particularly on hillslopes, may be very small (< 0.01 m).

See Also

FD8FlowAccumulation, fill_depressions, depth_in_sink

Function Signature

def upslope_depression_storage(self, dem: Raster) -> Raster: ...

user_defined_weights_filter

NoData values in the input image are ignored during the convolution operation. This can lead to unexpected behavior at the edges of images (since the default behavior is to return NoData when addressing cells beyond the grid edge) and where the grid contains interior areas of NoData values. Normalization of kernel weights can be useful for handling the edge effects associated with interior areas of NoData values. When the normalization option is selected, the sum of the cell value-weight product is divided by the sum of the weights on a cell-by-cell basis. Therefore, if the kernel at a particular grid cell contains neighboring cells of NoData values, normalization effectively re-adjusts the weighting to account for the missing data values. Normalization also ensures that the output image will possess values within the range of the input image and allows the user to specify integer value weights in the kernel. However, note that this implies that the sum of weights should equal one. In some cases, alternative sums (e.g. zero) are more appropriate, and as such normalization should not be applied in these cases.

Function Signature

def user_defined_weights_filter(self, raster: Raster, weights: List[List[float]], kernel_center: str = "center", normalize_weights: bool = False) -> Raster: ...
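For example, the weights argument is a nested list forming the kernel. A minimal smoothing sketch, assuming an existing WbEnvironment object named wbe and a placeholder file name:

raster = wbe.read_raster('image.tif')
weights = [
    [0.05, 0.10, 0.05],
    [0.10, 0.40, 0.10],
    [0.05, 0.10, 0.05],
]  # 3 x 3 kernel whose weights sum to 1.0
smoothed = wbe.user_defined_weights_filter(raster, weights, normalize_weights=True)
wbe.write_raster(smoothed, 'smoothed.tif')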

vector_hex_binning

The practice of binning point data to form a type of 2D histogram, density plot, or what is sometimes called a heatmap, is quite useful as an alternative for the cartographic display of very dense point sets. This is particularly the case when the points experience significant overlap at the displayed scale. The PointDensity tool can be used to perform binning based on a regular grid (raster output). This tool, by comparison, bases the binning on a hexagonal grid.

The tool is similar to the CreateHexagonalVectorGrid tool; however, it will instead create an output hexagonal grid in which each hexagonal cell possesses a COUNT attribute which specifies the number of points from an input points file (Shapefile vector) that are contained within the hexagonal cell.

In addition to the names of the input points file and the output Shapefile, the user must also specify the desired hexagon width (width), which is the distance (w) between opposing sides of each hexagon. The length (s) of each side of the hexagon can then be calculated as s = w / [2 x cos(PI / 6)]. The area of each hexagon (A) is A = 3s(w / 2). The user must also specify the orientation of the grid with options of horizontal (pointy side up) and vertical (flat side up).

See Also

LidarHexBinning, PointDensity, CreateHexagonalVectorGrid

Function Signature

def vector_hex_binning(self, vector_points: Vector, width: float, orientation: str = "h") -> Vector: ...
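The following sketch, assuming an existing WbEnvironment object named wbe and a hypothetical dense point file, bins the points with a 500 m hexagon width and also works through the side-length and area formulas above:

import math

points = wbe.read_vector('crime_points.shp')
w = 500.0  # hexagon width, in the x-y units of the input's coordinate system
s = w / (2.0 * math.cos(math.pi / 6.0))  # side length, approx. 288.7 for w = 500
area = 3.0 * s * (w / 2.0)               # hexagon area, approx. 216,506 for w = 500
hexbins = wbe.vector_hex_binning(points, width=w, orientation="h")
wbe.write_vector(hexbins, 'hexbins.shp')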

vector_lines_to_raster

This tool can be used to convert a vector lines or polygon file into a raster grid of lines. If a vector of one of the polygon VectorGeometryTypes is selected, the resulting raster will outline the polygons without filling these features. Use the vector_polygons_to_raster tool if you need to fill the polygon features.

The user must specify the name of the input vector (input) and the output raster file (output). The Field Name (field) is the field from the attributes table, from which the tool will retrieve the information to assign to grid cells in the output raster. Note that if this field contains numerical data with no decimals, the output raster data type will be INTEGER; if it contains decimals it will be of a FLOAT data type. The field must contain numerical data. If the user does not supply a Field Name parameter, each feature in the raster will be assigned the record number of the feature. The assignment operation determines how the situation of multiple points contained within the same grid cell is handled. The background value is the value that is assigned to grid cells in the output raster that do not correspond to the location of any points in the input vector. This value can be any numerical value (e.g. 0) or the string 'NoData', which is the default.

If the user optionally specifies the cell_size parameter then the coordinates will be determined by the input vector (i.e. the bounding box) and the specified Cell Size. This will also determine the number of rows and columns in the output raster. If the user instead specifies the optional base raster file parameter (base), the output raster's coordinates (i.e. north, south, east, west) and row and column count will be the same as the base file. If the user does not specify either of these two optional parameters, the tool will determine the cell size automatically as the maximum of the north-south extent (determined from the shapefile's bounding box) or the east-west extent divided by 500.

See Also

vector_points_to_raster, vector_polygons_to_raster

Function Signature

def vector_lines_to_raster(self, input: Vector, field_name: str = "FID", zero_background: bool = False, cell_size: float = 0.0, base_raster: Raster = None) -> Raster: ...

vector_points_to_raster

This tool can be used to convert a vector points file into a raster grid. The user must specify the name of the input vector and the output raster file. The field name (field_name) is the field from the attributes table from which the tool will retrieve the information to assign to grid cells in the output raster. The field must contain numerical data. If the user does not supply a field name parameter, each feature in the raster will be assigned the record number of the feature. The assignment operation (assign_op) determines how the situation of multiple points contained within the same grid cell is handled. The background value is NoData by default; non-point cells can instead be assigned zero values by setting zero_background=True.

If the user optionally specifies the grid cell size parameter (cell_size) then the coordinates will be determined by the input vector (i.e. the bounding box) and the specified cell size. This will also determine the number of rows and columns in the output raster. If the user instead specifies the optional base raster file parameter (base), the output raster's coordinates (i.e. north, south, east, west) and row and column count will be the same as the base file.

In the case that multiple points are contained within a single grid cell, the output can be assigned (assign_op) the first, last (default), min, max, sum, mean, or number of the contained points.

See Also

vector_polygons_to_raster, vector_lines_to_raster

Function Signature

def vector_points_to_raster(self, input: Vector, field_name: str = "FID", assign_op: str = "last", zero_background: bool = False, cell_size: float = 0.0, base_raster: Raster = None) -> Raster: ...
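For example, assuming an existing WbEnvironment object named wbe, a hypothetical point file, and a hypothetical attribute field named VALUE:

pts = wbe.read_vector('sample_points.shp')
grid = wbe.vector_points_to_raster(pts, field_name='VALUE', assign_op='mean', cell_size=30.0)
wbe.write_raster(grid, 'points_grid.tif')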

vector_polygons_to_raster

This tool can be used to convert a vector polygons file into a raster grid, filling the interiors of the polygon features. Use the vector_lines_to_raster tool instead if only the polygon outlines are required.

Function Signature

def vector_polygons_to_raster(self, input: Vector, field_name: str = "FID", zero_background: bool = False, cell_size: float = 0.0, base_raster: Raster = None) -> Raster: ...

vector_stream_network_analysis

This tool performs common stream network analysis operations on an input vector stream file (streams). The network indices produced by this analysis are contained within the output vector's (output) attribute table. The following table shows each of the network indices that are calculated.

Index Name | Description
OUTLET     | Unique outlet identifying value, used as basin identifier
TRIB_ID    | Unique tributary identifying value
DIST2MOUTH | Distance to outlet (i.e., mouth node)
DS_NODES   | Number of downstream nodes
TUCL       | Total upstream channel length; the channel equivalent to catchment area
MAXUPSDIST | Maximum upstream distance
HORTON     | Horton stream order
STRAHLER   | Strahler stream order
SHREVE     | Shreve stream magnitude
HACK       | Hack stream order
MAINSTREAM | Boolean value indicating whether link is the main stream trunk of its basin
MIN_ELEV   | Minimum link elevation (from DEM)
MAX_ELEV   | Maximum link elevation (from DEM)
IS_OUTLET  | Boolean value indicating whether link is an outlet link

In addition to the input streams file (streams), the user must also specify the name of an input DEM file (dem), the maximum ridge-cutting height, in DEM z units (max_ridge_cutting_height), and the snap distance used for identifying any topological errors in the stream file (snap_distance). The main function of the input DEM is to distinguish between outlet and headwater links in the network, which can be differentiated by their elevations during the priority-flood operation used in the algorithm (see Lindsay et al. 2019). The maximum ridge-cutting height parameter is useful for preventing erroneous stream capture in the headwaters when channel heads are very near (within the snap distance), which is usually very rare. The snap distance parameter is used to deal with certain common topological errors. However, it is advisable that the input streams file be pre-processed prior to analysis.

Note: The input streams file for this tool should be pre-processed using the repair_stream_vector_topology tool. This is an important step.

Many of the network indices output by this tool for vector streams have raster equivalents in WhiteboxTools. For example, see the strahler_stream_order, shreve_stream_magnitude tools.

Tool outputs are: stream lines vector, confluences points vector, outlet points vector, and channel head points vector.

Reference

Lindsay, JB, Yang, W, Hornby, DD. 2019. Drainage network analysis and structuring of topologically noisy vector stream data. ISPRS International Journal of Geo-Information. 8(9), 422; DOI: 10.3390/ijgi8090422

See Also

repair_stream_vector_topology, strahler_stream_order, shreve_stream_magnitude

Function Signature

def vector_stream_network_analysis(self, streams: Vector, dem: Raster, max_ridge_cutting_height: float = 10.0, snap_distance: float = 0.001) -> Tuple[Vector, Vector, Vector, Vector]: ...
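Because the function returns a tuple of four vectors (stream lines, confluences, outlets, and channel heads), the result is typically unpacked. A minimal sketch, assuming an existing WbEnvironment object named wbe, placeholder file names, and a placeholder snap distance:

streams = wbe.read_vector('streams.shp')  # ideally pre-processed with repair_stream_vector_topology
dem = wbe.read_raster('dem.tif')
lines, confluences, outlets, heads = wbe.vector_stream_network_analysis(streams, dem, snap_distance=1.0)
wbe.write_vector(lines, 'stream_network.shp')
wbe.write_vector(outlets, 'outlets.shp')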

verbose

Determines whether tool functions output to stdout (wbe.verbose=True), or if output is suppressed (wbe.verbose=False).

version

Returns the Whitebox Workflows version information.

viewshed

This tool can be used to calculate the viewshed (i.e. the visible area) from a location (i.e. viewing station) or group of locations based on the topography defined by an input digital elevation model (DEM). The user must input a DEM (dem), a viewing station vector file (station_points), and the viewing height (station_height). Viewing station locations are specified as points within an input shapefile. The output image indicates the number of stations visible from each grid cell. The viewing height is in the same units as the elevations of the DEM and represents a height above the ground elevation from which the viewshed is calculated.

viewshed should be used when there are a relatively small number of target sites for which visibility needs to be assessed. If you need to assess general landscape visibility as a land-surface parameter, the visibility_index tool should be used instead.

Viewshed analysis is a very computationally intensive task. Depending on the size of the input DEM grid and the number of viewing stations, this operation may take considerable time to complete. Also, this implementation of the viewshed algorithm does not account for the curvature of the Earth. This should be accounted for if viewsheds are being calculated over very extensive areas.

See Also

visibility_index

Function Signature

def viewshed(self, dem: Raster, station_points: Vector, station_height: float = 2.0) -> Raster: ...

visibility_index

This tool can be used to calculate a measure of landscape visibility based on the topography of an input digital elevation model (DEM). The user must input a DEM (dem), the viewing height (station_height), and a resolution factor (resolution_factor). Viewsheds are calculated for a subset of grid cells in the DEM based on the resolution factor. The visibility index value (0.0-1.0) indicates the proportion of tested stations (determined by the resolution factor) that each cell is visible from. The viewing height is in the same units as the elevations of the DEM and represents a height above the ground elevation. Each tested grid cell's viewshed will be calculated in parallel.

However, visibility index is one of the most computationally intensive geomorphometric indices to calculate. Depending on the size of the input DEM grid and the resolution factor, this operation may take considerable time to complete. If the task is too long-running, it is advisable to raise the resolution factor. A resolution factor of 2 will skip every second row and every second column (effectively evaluating the viewsheds of a quarter of the DEM's grid cells). Increasing this value decreases the number of calculated viewsheds but will result in a lower accuracy estimate of overall visibility. In addition to the high computational costs of this index, the tool also requires substantial memory resources to operate. Each of these limitations should be considered before running this tool on a particular data set. This tool is best applied on computer systems with high core-counts and plenty of memory.

See Also

viewshed

Function Signature

def visibility_index(self, dem: Raster, station_height: float = 2.0, resolution_factor: int = 8) -> Raster: ...

voronoi_diagram

This tool creates a vector Voronoi diagram for a set of vector points. The Voronoi diagram is the dual graph of the Delaunay triangulation. The tool operates by first constructing the Delaunay triangulation and then connecting the circumcenters of each triangle. Each Voronoi cell contains one point of the input vector points. All locations within the cell are nearer to the contained point than any other input point.

A dense frame of 'ghost' (hidden) points is inserted around the input point set to limit the spatial extent of the diagram. The frame is set back from the bounding box of the input points by 2 x the average point spacing. The polygons of these ghost points are not output; however, points that are situated along the edges of the data will have somewhat rounded (parabolic) exterior boundaries as a result of this edge condition. If this property is unacceptable for an application, clipping the Voronoi diagram to the convex hull may be a better alternative.

This tool works on vector input data only. If a Voronoi diagram is needed to tessellate regions associated with a set of raster points, use the euclidean_allocation tool instead. To use Voronoi diagrams for gridding data (i.e. raster interpolation), use the NearestNeighbourGridding tool.

See Also

construct_vector_tin, euclidean_allocation, NearestNeighbourGridding

Function Signature

def voronoi_diagram(self, input_points: Vector) -> Vector: ...

watershed

This tool will perform a watershedding operation based on a group of input vector pour points (pour_pts), i.e. outlets or points-of-interest. Watershedding is a procedure that identifies all of the cells upslope of a cell of interest (pour point) that are connected to the pour point by a flow-path. The user must input a D8-derived flow pointer (flow direction) raster (d8_pntr) and a vector pour point file (pour_pts). The pour points must be of a Point ShapeType (i.e. Point, PointZ, PointM, MultiPoint, MultiPointZ, MultiPointM). Watersheds will be assigned the input pour point FID value. The flow pointer raster must be generated using the D8 algorithm, d8_pointer.

Pour point vectors can be attained by on-screen digitizing to designate these points-of-interest locations. Because pour points are usually, although not always, situated on a stream network, it is recommended that you use Jenson's method (jenson_snap_pour_points) to snap pour points onto the stream network. This will ensure that the digitized outlets are coincident with the digital stream defined by the DEM's flowpaths. If this is not done prior to inputting a pour-point set to the watershed tool, anomalously small watersheds may be output, as pour points that fall off of the main flow path (even by one cell) in the D8 pointer will yield very different catchment areas.

If a raster pour point is specified instead of vector points, the watershed labels will derive their IDs from the grid cell values of all non-zero, non-NoData valued grid cells in the pour points file. Notice that this file can contain any integer data. For example, if a lakes raster, with each lake possessing a unique ID, is used as the pour points raster, the tool will map the watersheds draining to each of the input lake features. Similarly, a pour points raster may actually be a streams file, such as what is generated by the stream_link_identifier tool.

By default, the pointer raster is assumed to use the clockwise indexing method used by Whitebox Workflows. If the pointer file contains ESRI flow direction values instead, the esri_pntr must be True.

There are several tools that perform similar watershedding operations in Whitebox Workflows. watershed is appropriate to use when you have a set of specific locations for which you need to derive the watershed areas. Use the basins tool instead when you simply want to find the watersheds draining to each outlet situated along the edge of a DEM. The isobasins tool can be used to divide a landscape into roughly equally sized watersheds. The subbasins and strahler_order_basins tools are useful when you need to find the areas draining to each link within a stream network. Finally, hillslopes can be used to identify the areas draining to each of the left and right banks of a stream network.

Reference

Jenson, S. K. (1991), Applications of hydrological information automatically extracted from digital elevation models, Hydrological Processes, 5, 31–44, doi:10.1002/hyp.3360050104.

Lindsay JB, Rothwell JJ, and Davies H. 2008. Mapping outlet points used for watershed delineation onto DEM-derived stream networks, Water Resources Research, 44, W08442, doi:10.1029/2007WR006507.

See Also

d8_pointer, basins, subbasins, isobasins, strahler_order_basins, hillslopes, jenson_snap_pour_points, breach_depressions_least_cost, fill_depressions

Function Signature

def watershed(self, d8_pointer: Raster, pour_points: Vector, esri_pntr: bool = False) -> Raster: ...
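The recommended pipeline described above can be sketched as follows; file names and the snap distance are placeholders, wbe is assumed to be an existing WbEnvironment object, and jenson_snap_pour_points is assumed here to take the pour points, streams raster, and snap distance as positional arguments:

dem = wbe.read_raster('dem.tif')
d8 = wbe.d8_pointer(wbe.fill_depressions(dem))
streams = wbe.read_raster('streams.tif')
outlets = wbe.read_vector('outlets.shp')
snapped = wbe.jenson_snap_pour_points(outlets, streams, 15.0)  # snap distance in map units
ws = wbe.watershed(d8, snapped)
wbe.write_raster(ws, 'watersheds.tif')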

watershed_from_raster_pour_points

This tool will perform a watershedding operation based on an input raster containing pour points (pour_points). Watershedding is a procedure that identifies all of the cells upslope of a cell of interest (pour point) that are connected to the pour point by a flow-path. The user must input a D8-derived flow pointer (flow direction) raster (d8_pointer) and a pour points raster (pour_points). The flow pointer raster must be generated using the D8 algorithm, d8_pointer.

Watershed labels will derive their IDs from the grid cell values of all non-zero, non-NoData valued grid cells in the pour points file. Notice that this file can contain any integer data. For example, if a lakes raster, with each lake possessing a unique ID, is used as the pour points raster, the tool will map the watersheds draining to each of the input lake features. Similarly, a pour points raster may actually be a streams file, such as what is generated by the stream_link_identifier tool.

By default, the pointer raster is assumed to use the clockwise indexing method used by Whitebox Workflows. If the pointer file contains ESRI flow direction values instead, the esri_pntr parameter must be set to True.

There are several tools that perform similar watershedding operations in Whitebox Workflows. watershed is appropriate to use when you have a set of specific locations for which you need to derive the watershed areas. Use the basins tool instead when you simply want to find the watersheds draining to each outlet situated along the edge of a DEM. The isobasins tool can be used to divide a landscape into roughly equally sized watersheds. The subbasins and strahler_order_basins tools are useful when you need to find the areas draining to each link within a stream network. Finally, hillslopes can be used to identify the areas draining to each of the left and right banks of a stream network.

Reference

Jenson, S. K. (1991), Applications of hydrological information automatically extracted from digital elevation models, Hydrological Processes, 5, 31–44, doi:10.1002/hyp.3360050104.

Lindsay JB, Rothwell JJ, and Davies H. 2008. Mapping outlet points used for watershed delineation onto DEM-derived stream networks, Water Resources Research, 44, W08442, doi:10.1029/2007WR006507.

See Also

d8_pointer, basins, subbasins, isobasins, strahler_order_basins, hillslopes, jenson_snap_pour_points, breach_depressions_least_cost, fill_depressions

Function Signature

def watershed_from_raster_pour_points(self, d8_pointer: Raster, pour_points: Raster, esri_pntr: bool = False) -> Raster: ...
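
As a brief sketch of the lakes use-case described above (file names are hypothetical placeholders):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

d8_pntr = wbe.read_raster('d8_pointer.tif')   # placeholder: a D8 pointer raster
lakes = wbe.read_raster('lakes.tif')          # placeholder: each lake has a unique integer ID

lake_watersheds = wbe.watershed_from_raster_pour_points(d8_pntr, lakes)
wbe.write_raster(lake_watersheds, 'lake_watersheds.tif')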

weighted_overlay

This tool performs a weighted overlay on multiple input images. It can be used to combine multiple factors with varying levels of weight or relative importance. The weighted_overlay tool is similar to the weighted_sum tool but is more powerful because it automatically converts the input factors to a common user-defined scale and allows the user to specify benefit factors and cost factors. A benefit factor is a factor for which higher values are more suitable. A cost factor is a factor for which higher values are less suitable. By default, weighted_overlay assumes that input images are benefit factors, unless a cost value of 'true' is entered in the cost array. Constraints are absolute restrictions with values of 0 (unsuitable) and 1 (suitable). This tool is particularly useful for performing multi-criteria evaluations (MCE).

Notice that the algorithm will convert the user-defined factor weights internally such that the sum of the weights is always equal to one. As such, the user can specify the relative weights as decimals, percentages, or relative weightings (e.g. slope is 2 times more important than elevation, in which case the weights may not sum to 1 or 100).

NoData valued grid cells in any of the input images will be assigned NoData values in the output image. The output raster is of the float data type and continuous data scale.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

Function Signature

def weighted_overlay(self, factors: List[Raster], weights: List[float], cost: List[Raster] = None, constraints: List[Raster] = None, scale_max: float = 1.0) -> Raster: ...
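
The sketch below illustrates a simple multi-criteria evaluation using this signature; the criteria rasters and file names are hypothetical, and the cost parameter is omitted so that all factors are treated as benefit factors:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

# Hypothetical criteria rasters.
slope = wbe.read_raster('slope.tif')
dist_roads = wbe.read_raster('dist_to_roads.tif')
not_water = wbe.read_raster('not_water.tif')   # constraint: 1 = suitable, 0 = unsuitable

# Relative weights need not sum to one; they are normalized internally.
suitability = wbe.weighted_overlay(
    factors=[slope, dist_roads],
    weights=[2.0, 1.0],          # slope treated as twice as important
    constraints=[not_water],
    scale_max=100.0              # common scale onto which factors are converted
)
wbe.write_raster(suitability, 'suitability.tif')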

weighted_sum

This tool performs a weighted-sum overlay on multiple input raster images. If you have a stack of rasters that you would like to sum, each with an equal weighting (1.0), then use the sum_overlay tool instead.

Warning

Each of the input rasters must have the same spatial extent and number of rows and columns.

See Also

sum_overlay

Function Signature

def weighted_sum(self, input_rasters: List[Raster], weights: List[float]) -> Raster: ...

wetness_index

This tool can be used to calculate the topographic wetness index, commonly used in the TOPMODEL rainfall-runoff framework. The index describes the propensity for a site to be saturated to the surface given its contributing area and local slope characteristics. It is calculated as:

WI = ln(As / tan(Slope))

Where As is the specific catchment area (i.e. the upslope contributing area per unit contour length) estimated using one of the available flow accumulation algorithms in the Hydrological Analysis toolbox. Notice that As must not be log-transformed prior to being used; log-transformation of As is a common practice only when visualizing the data. The slope image should be measured in degrees and can be created from the base digital elevation model (DEM) using the slope tool. Grid cells with a slope of zero will be assigned NoData in the output image because the tangent of zero is zero, making the ratio undefined. These very flat sites likely coincide with the wettest parts of the landscape. The input images must have the same grid dimensions.

Grid cells possessing the NoData value in either of the input images are assigned NoData value in the output image. The output raster is of the float data type and continuous data scale.

See Also

slope, D8FlowAccumulation, DInfFlowAccumulation, FD8FlowAccumulation, breach_depressions_least_cost

Function Signature

def wetness_index(self, specific_catchment_area: Raster, slope: Raster) -> Raster: ...
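
A minimal sketch of the typical workflow, assuming you have already derived a specific catchment area raster from one of the flow accumulation functions and a slope raster in degrees (file names are placeholders):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

# Placeholder inputs: As must not be log-transformed, and slope must be in degrees.
sca = wbe.read_raster('specific_catchment_area.tif')
slope_deg = wbe.read_raster('slope_degrees.tif')

wi = wbe.wetness_index(sca, slope_deg)
wbe.write_raster(wi, 'wetness_index.tif')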

wilcoxon_signed_rank_test

This tool will perform a Wilcoxon signed-rank test to evaluate whether a significant statistical difference exists between the two rasters. The Wilcoxon signed-rank test is often used as a non-parametric equivalent to the paired-samples Student's t-test, and is used when the distribution of sample difference values between the paired inputs is non-Gaussian. The null hypothesis of this test is that the differences between the sample pairs follow a symmetric distribution around zero, i.e. that the median difference between pairs of observations is zero.

The user must specify the two input raster images (raster1 and raster2) and the output report HTML file (output_html_file). The test can optionally be performed on the entire image or on a random sub-sample of pixel values of a user-specified size (num_samples). In evaluating the significance of the test, it is important to keep in mind that, given a sufficiently large sample, extremely small and non-notable differences can be found to be statistically significant. Furthermore, statistical significance says nothing about the practical significance of a difference. Note that cells with a difference of zero are excluded from the ranking and tied difference values are assigned their average rank values.

See Also

paired_sample_test, two_sample_ks_test

Function Signature

def wilcoxon_signed_rank_test(self, raster1: Raster, raster2: Raster, output_html_file: str, num_samples: int) -> None: ...
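
For example, a brief sketch comparing two co-registered rasters using a random sub-sample (file names are hypothetical):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

raster1 = wbe.read_raster('ndvi_2015.tif')   # placeholder
raster2 = wbe.read_raster('ndvi_2020.tif')   # placeholder

# A sub-sample keeps the test tractable and avoids trivially 'significant' results
# that can arise from extremely large sample sizes.
wbe.wilcoxon_signed_rank_test(raster1, raster2, 'wilcoxon_report.html', num_samples=10000)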

working_directory

Returns the current working directory.

write_function_memory_insertion

Jensen (2015) describes write function memory (WFM) insertion as a simple yet effective method of visualizing land-cover change between two or three dates. WFM insertion may be used to qualitatively inspect change in any type of registered, multi-date imagery. The technique operates by creating a red-green-blue (RGB) colour composite image based on co-registered imagery from two or three dates. If two dates are input, the first date image will be put into the red channel, while the second date image will be put into both the green and blue channels. The result is an image where the areas of change are displayed as red (date 1 is brighter than date 2) and cyan (date 1 is darker than date 2), and areas of little change are represented in grey-tones. The larger the change in pixel brightness between dates, the more intense the resulting colour will be.

If images from three dates are input, the resulting composite can contain many distinct colours. Again, more intense colours are indicative of areas of greater land-cover change among the dates, while areas of little change are represented in grey-tones. Interpreting the direction of change is more difficult when three dates are used. Note that for multi-spectral imagery, only one band from each date can be used for creating a WFM insertion image.

Reference

Jensen, J. R. (2015). Introductory Digital Image Processing: A Remote Sensing Perspective.

See Also

create_colour_composite, change_vector_analysis

Function Signature

def write_function_memory_insertion(self, image1: Raster, image2: Raster, image3: Raster) -> Raster: ...

write_lidar

Writes an in-memory Lidar object to disc.

Parameters

  • lidar: Lidar - An in-memory Lidar object
  • file_name: str - The name of the file on disc. If the file_name does not contain the full file path, the file will be written to the Whitebox working directory.

write_raster

Writes an in-memory Raster object to file.

Parameters

  • raster: Raster - The Raster object to write to disc.
  • file_name: str - The file name to write to. If the file name does not contain the full file path, the file will be written to the Whitebox working directory.
  • compress: bool - Boolean flag that determines whether the output file is compressed. Not all raster formats support compression. Default is False.
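
A short sketch of the read-process-write pattern, assuming the working directory has been set (the directory path and file names are placeholders):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
wbe.working_directory = '/path/to/data'   # placeholder directory

dem = wbe.read_raster('dem.tif')
smoothed = wbe.gaussian_filter(dem)

# No full path is given, so the file is written to the working directory.
wbe.write_raster(smoothed, 'dem_smoothed.tif', compress=True)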

write_text

Writes an in-memory string to disc. This function is mainly intended for use with WbW frontends where there may be restrictions on scripts for writing files.

Parameters

  • text: String - The in-memory string object.
  • file_name: str - The file name to write to. If the file name does not contain the full file path, the file will be written to the Whitebox working directory.

write_vector

Writes an in-memory Vector object to disc.

Parameters

  • vector: Vector - The in-memory Vector object.
  • file_name: str - The file name to write to. If the file name does not contain the full file path, the file will be written to the Whitebox working directory.

z_scores

This tool will transform the values in an input raster image (input) into z-scores. Z-scores are also called standard scores, normal scores, or z-values. A z-score is a dimensionless quantity that is calculated by subtracting the mean from an individual raw value and then dividing the difference by the standard deviation. This conversion process is called standardizing or normalizing and the result is sometimes referred to as a standardized variable. The mean and standard deviation are estimated using all values in the input image except for NoData values. The input image should not have a Boolean or categorical data scale, i.e. it should be on a continuous scale.

See Also

cumulative_distribution

Function Signature

def z_scores(self, raster: Raster) -> Raster: ...
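
As a quick example (the input file name is a placeholder):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

ndvi = wbe.read_raster('ndvi.tif')   # a continuous-scale raster
ndvi_z = wbe.z_scores(ndvi)          # (value - mean) / standard deviation, NoData ignored
wbe.write_raster(ndvi_z, 'ndvi_z_scores.tif')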

zonal_statistics

This tool can be used to extract common descriptive statistics associated with the distribution of some underlying data raster based on feature units defined by a feature definition raster. For example, this tool can be used to measure the maximum or average slope gradient (data image) for each of a group of watersheds (feature definitions). Although the data raster can contain any type of data, the feature definition raster must be categorical, i.e. it must define area entities using integer values.

The stat_type parameter can take the values 'mean', 'median', 'minimum', 'maximum', 'range', 'standard deviation', or 'total'.

The function returns both an output raster, in which each of the spatial entities defined in the feature definition raster is assigned its descriptive statistic value, and an HTML table (as a string), which can be readily copied into a spreadsheet program for further analysis. This is a very powerful and useful tool for creating numerical summary data from spatial data, which can then be interrogated using statistical analyses.

NoData values in either of the two input images are ignored during the calculation of the descriptive statistic.

See Also

raster_summary_stats

Function Signature

def zonal_statistics(self, data_raster: Raster, feature_definitions_raster: Raster, stat_type: str = "mean") -> Tuple[Raster, str]: ...
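
The sketch below shows how both returned values might be used; the raster names are hypothetical:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

slope = wbe.read_raster('slope.tif')            # data raster (placeholder)
watersheds = wbe.read_raster('watersheds.tif')  # categorical feature definitions (placeholder)

mean_slope, html_table = wbe.zonal_statistics(slope, watersheds, stat_type='mean')
wbe.write_raster(mean_slope, 'mean_slope_by_watershed.tif')
wbe.write_text(html_table, 'zonal_stats.html')   # save the HTML summary table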

WbW-Pro function documentation

Each of the following functions is a method of the WbEnvironment class. Functions may be called using the convention shown in the following example:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()
# Set up the environment, e.g. working directory, verbose mode, num_procs
raster = wbe.read_raster('my_raster.tif') # Read some kind of data
result = wbe.shape_index(raster) # Call some kind of function
...
  1. accumulation_curvature
  2. assess_route
  3. average_horizon_distance
  4. breakline_mapping
  5. canny_edge_detection
  6. classify_lidar
  7. colourize_based_on_class
  8. colourize_based_on_point_returns
  9. curvedness
  10. dbscan
  11. dem_void_filling
  12. depth_to_water
  13. difference_curvature
  14. evaluate_training_sites
  15. filter_lidar
  16. filter_lidar_by_percentile
  17. filter_lidar_by_reference_surface
  18. fix_dangling_arcs
  19. generalize_classified_raster
  20. generalize_with_similarity
  21. generating_function
  22. horizon_area
  23. horizontal_excess_curvature
  24. hydrologic_connectivity
  25. image_segmentation
  26. image_slider
  27. improved_ground_point_filter
  28. inverse_pca
  29. knn_classification
  30. knn_regression
  31. lidar_contour
  32. lidar_eigenvalue_features
  33. lidar_point_return_analysis
  34. lidar_sibson_interpolation
  35. local_hypsometric_analysis
  36. logistic_regression
  37. low_points_on_headwater_divides
  38. min_dist_classification
  39. modify_lidar
  40. multiscale_curvatures
  41. nibble
  42. openness
  43. parallelepiped_classification
  44. phi_coefficient
  45. piecewise_contrast_stretch
  46. prune_vector_streams
  47. random_forest_classification_fit
  48. random_forest_classification_predict
  49. random_forest_regression_fit
  50. random_forest_regression_predict
  51. reconcile_multiple_headers
  52. recover_flightline_info
  53. recreate_pass_lines
  54. remove_field_edge_points
  55. remove_raster_polygon_holes
  56. ridge_and_valley_vectors
  57. ring_curvature
  58. river_centerlines
  59. rotor
  60. shadow_animation
  61. shadow_image
  62. shape_index
  63. sieve
  64. sky_view_factor
  65. skyline_analysis
  66. slope_vs_aspect_plot
  67. smooth_vegetation_residual
  68. sort_lidar
  69. split_lidar
  70. svm_classification
  71. svm_regression
  72. topo_render
  73. topographic_position_animation
  74. topological_breach_burn
  75. unsphericity
  76. vertical_excess_curvature
  77. yield_filter
  78. yield_map
  79. yield_normalization

accumulation_curvature

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the accumulation curvature from a digital elevation model (DEM). Accumulation curvature is the product of profile (vertical) and tangential (horizontal) curvatures at a location (Shary, 1995). This variable takes values of zero or greater. Florinsky (2017) states that accumulation curvature is a measure of the extent of local accumulation of flows at a given point in the topographic surface. Accumulation curvature is measured in units of m^-2.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary PA (1995) Land surface in gravity points classification by a complete system of curvatures. Mathematical Geology 27: 373–390.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

tangential_curvature, profile_curvature, minimal_curvature, maximal_curvature, mean_curvature, gaussian_curvature

assess_route

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool assesses the variability in slope, elevation, and visibility along a line vector, which may be a footpath, road, river or any other route. The user must specify the name of the input line vector (routes), the input raster digital elevation model file (dem), and the output line vector (output). The algorithm initially splits the input line vector into equal-length segments (length). For each line segment, the tool then calculates the average slope (AVG_SLOPE), minimum and maximum elevations (MIN_ELEV, MAX_ELEV), the elevation range or relief (RELIEF), the path sinuosity (SINUOSITY), the number of changes in slope direction or breaks-in-slope (CHG_IN_SLP), and the maximum visibility (VISIBILITY). Each of these metrics is output to the attribute table of the output vector, along with the feature identifier (FID); any attributes associated with the input parent feature will also be copied into the output table. Slope and elevation metrics are measured along the 2D path based on the elevations of each of the row and column intersection points of the raster with the path, estimated from linear interpolation using the two neighbouring elevations on either side of the path. Sinuosity is calculated as the ratio of the along-surface (i.e. 3D) path length, divided by the 3D distance between the start and end points of the segment. CHG_IN_SLP can be thought of as a crude measure of path roughness, although this will be very sensitive to the quality of the DEM. The visibility metric is based on the Yokoyama et al. (2002) openness index, which calculates the average horizon angle in the eight cardinal directions to a maximum search distance (dist), measured in grid cells.

Note that the input DEM must be in a projected coordinate system. The DEM and the input routes vector must be also share the same coordinate system. This tool also works best when the input DEM is of high quality and fine spatial resolution, such as those derived from LiDAR data sets.

[Figures: maps of maximum segment visibility and average segment slope along an example route]

For more information about this tool, see this blog on the WhiteboxTools homepage.

See Also

split_vector_lines, openness

average_horizon_distance

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

This tool calculates the spatial pattern of average distance to the horizon based on an input digital elevation model (DEM). As such, the index is a measure of landscape visibility. In the image below, lighter areas have a longer average distance to the horizon, measured in map units.

The user must specify an input DEM (dem), the azimuth fraction (az_fraction), the maximum search distance (max_dist), and the height offset of the observer (observer_hgt_offset). The input DEM should usually be a digital surface model (DSM) that contains significant off-terrain objects. Such a model, for example, could be created using the first-return points of a LiDAR data set, or using the lidar_digital_surface_model tool. The azimuth fraction should be an even divisor of 360 degrees and must be between 1 and 45 degrees.

The tool operates by calculating horizon angle (see horizon_angle) rasters from the DSM based on the user-specified azimuth fraction (az_fraction). For example, if an azimuth fraction of 15-degrees is specified, horizon angle rasters would be calculated for the solar azimuths 0, 15, 30, 45... A horizon angle raster evaluates the vertical angle between each grid cell in a DSM and a distant obstacle (e.g. a mountain ridge, building, tree, etc.) that obscures the view in a specified direction. In calculating horizon angle, the user must specify the maximum search distance (max_dist), in map units, beyond which the query for higher, more distant objects will cease. This parameter strongly impacts the performance of the function, with larger values resulting in significantly longer processing-times.

The observer_hgt_offset parameter can be used to add an increment to the source cell's elevation. For example, the following image shows the spatial pattern derived from a LiDAR DSM using observer_hgt_offset = 0.0:

Notice that there are several places, particularly on the flatter rooftops, where the local noise in the LiDAR DEM, associated with the individual scan lines, has resulted in a noisy pattern in the output. By adding a small height offset on the scale of this noise variation (0.15 m), we see that most of this noisy pattern is removed in the output below:

This feature makes the function more robust against DEM noise. As another example of the usefulness of this additional parameter, in the image below, the observer_hgt_offset parameter has been used to measure the pattern of the index at a typical human height (1.7 m):

Notice how at this height the average horizon distance becomes much farther on some of the flat rooftops where a guard wall prevents further viewing areas at shorter observer heights.

The output of this function is similar to the Average View Distance provided by the Sky View tool in Saga GIS. However, for a given maximum search distance, the Whitebox tool is likely faster to compute and has the added advantage of offering the observer's height parameter, as described above.

See Also

sky_view_factor, horizon_area, openness, lidar_digital_surface_model, horizon_angle

Function Signature

def average_horizon_distance(self, dem: Raster, az_fraction: float = 5.0, max_dist: float = float('inf'), observer_hgt_offset: float = 0.0) -> Raster: ...
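
A minimal sketch based on the signature above; the DSM file name, search distance, and height offset are illustrative only:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

dsm = wbe.read_raster('lidar_dsm.tif')   # placeholder first-return digital surface model

# A small observer height offset (on the scale of the DSM noise) suppresses
# the noisy patterns that can appear on flat rooftops.
ahd = wbe.average_horizon_distance(dsm, az_fraction=15.0, max_dist=500.0, observer_hgt_offset=0.15)
wbe.write_raster(ahd, 'avg_horizon_distance.tif')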

breakline_mapping

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to map breaklines in an input digital elevation model (DEM; input). Breaklines are locations of high surface curvature, in any direction, measured using curvedness. Curvedness values are log-transformed using the resolution-dependent method proposed by Shary et al. (2002). Breaklines are coincident with grid cells that have log-transformed curvedness values exceeding a user-specified threshold value (threshold). While curvedness is measured within the range 0 to infinity, values typically fall well below the upper end of this range. Appropriate values for the threshold parameter are commonly in the 1 to 5 range. Lower threshold values will result in more extensive breakline mapping and vice versa. The algorithm will vectorize breakline features and the output of this tool (output) is a line vector. Line features that are shorter than a user-specified length (in grid cells; min_length) will not be output.

Watch the breakline mapping video for an example of how to run the tool.

References

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

curvedness

canny_edge_detection

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a Canny edge-detection filtering operation on an input image (input). The Canny edge-detection filter is a multi-stage filter that combines a Gaussian filtering (gaussian_filter) operation with various thresholding operations to generate a single-cell wide edges output raster (output). The sigma parameter, measured in grid cells, determines the size of the Gaussian filter kernel. The low and high parameters determine the characteristics of the thresholding steps; both parameters range from 0.0 to 1.0.

By default, the output raster will be Boolean, with 1's designating edge cells. It is possible, using the add_back parameter, to add the edge cells back into the original image, providing an edge-enhanced output, similar in concept to the unsharp_masking operation.

References

This implementation was inspired by the algorithm described here: https://towardsdatascience.com/canny-edge-detection-step-by-step-in-python-computer-vision-b49c3a2d8123

See Also

gaussian_filter, sobel_filter, unsharp_masking, scharr_filter

classify_lidar

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool provides a basic classification of a LiDAR point cloud into ground, building, and vegetation classes. The algorithm performs the classification based on point neighbourhood geometric properties, including planarity, linearity, and height above the ground. There is also a point segmentation involved in the classification process.

The user may specify the names of the input and output LiDAR files (input and output). Note that if the user does not specify the optional input/output LiDAR files, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be useful for processing a large number of LiDAR files in batch mode. When this batch mode is applied, the output file names will be the same as the input file names but with a '_classified' suffix added to the end.

The search distance (radius), defining the radius of the neighbourhood window surrounding each point, must also be specified. If this parameter is set to a value that is too large, areas of high surface curvature on the ground surface will be left unclassed and smaller buildings, e.g. sheds, will not be identified. If the parameter is set too small, areas of low point density may produce unsatisfactory classification values. The larger this search distance is, the longer the algorithm will take to process a data set. For many airborne LiDAR data sets, a value between 1.0 and 3.0 meters is likely appropriate.

The ground threshold parameter (grd_threshold) determines how far above the tophat-transformed surface a point must be to be excluded from the ground surface. This parameter also determines the maximum distance a point can be from a plane or line model fit to a neighbourhood of points to be considered part of the model geometry. Similarly, the off-terrain object threshold parameter (oto_threshold) is used to determine how high above the ground surface a point must be to be considered either a vegetation or building point. The ground threshold must be smaller than the off-terrain object threshold. If you find that breaks-in-slope in areas of more complex ground topography are left unclassed (class = 1), this can be addressed by raising the ground threshold parameter.

The planarity and linearity thresholds (planarity_threshold and linearity_threshold) describe the minimum proportion (0-1) of neighbouring points that must be part of a fitted model before the point is considered to be planar or linear. Both of these properties are used by the algorithm in a variety of ways to determine final class values. Planar and linear models are fit using a RANSAC-like algorithm, with the main user-specified parameter of the number of iterations (iterations). The larger the number of iterations the greater the processing time will be.

The facade threshold (facade_threshold) is the last user-specified parameter, and determines the maximum horizontal distance that a point beneath a rooftop edge point may be to be considered part of the building facade (i.e. walls). The default value is 0.5 m, although this value will depend on a number of factors, such as whether or not the building has balconies.

The algorithm generally does very well at identifying deciduous (broad-leaf) trees but can at times struggle with incorrectly classifying dense coniferous (needle-leaf) trees as buildings. When this is the case, you may counter this tendency by lowering the planarity threshold parameter value. Similarly, the algorithm will generally leave overhead power lines as unclassified (class = 1); however, if you find that the algorithm misclassifies most such points as high vegetation (class = 5), this can be countered by lowering the linearity threshold value.

Note that if the input file already contains class data, these data will be overwritten in the output file.

See Also

colourize_based_on_class, filter_lidar, modify_lidar, sort_lidar, split_lidar

colourize_based_on_class

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool sets the RGB colour values of an input LiDAR point cloud (input) based on the point classifications. Rendering a point cloud in this way can aid with the determination of point classification accuracy, by allowing you to determine if there are certain areas within a LiDAR tile, or certain classes, that are problematic during the point classification process.

By default, the tool renders buildings in red (see table below). However, the tool also provides the option to render each building in a unique colour (use_unique_clrs_for_buildings), providing a visually stunning LiDAR-based map of built-up areas. When this option is selected, the user must also specify the radius parameter, which determines the search distance used during the building segmentation operation. The radius parameter is optional, and if unspecified (when the use_unique_clrs_for_buildings flag is used), a value of 2.0 will be used.

The specific colours used to render each point class can optionally be set by the user with the clr_str parameter. The value of this parameter may list specific class values (0-18) and corresponding colour values in either a red-green-blue (RGB) colour triplet form (i.e. (r, g, b)), or a hex-colour, of either form #e6d6aa or 0xe6d6aa (note the # and 0x prefixes used to indicate hexadecimal numbers; also either lowercase or capital letter values are acceptable). The following is an example of a valid clr_str that sets the ground (class 2) and high vegetation (class 5) colours used for rendering:

2: (184, 167, 108); 5: #9ab86c

Notice that 1) each class is separated by a semicolon (';'), 2) class values and colour values are separated by colons (':'), and 3) either RGB or hex-colour forms are valid.

If a clr_str parameter is not provided, the tool will use the default colours used for each class (see table below).

Class values are assumed to follow the class designations listed in the LAS specification:

Classification Value    Meaning
0     Created, never classified
1     Unclassified
2     Ground
3     Low Vegetation
4     Medium Vegetation
5     High Vegetation
6     Building
7     Low Point (noise)
8     Reserved
9     Water
10    Rail
11    Road Surface
12    Reserved
13    Wire – Guard (Shield)
14    Wire – Conductor (Phase)
15    Transmission Tower
16    Wire-structure Connector (e.g. Insulator)
17    Bridge Deck
18    High noise

The point RGB colour values can be blended with the intensity data to create a particularly effective visualization, further enhancing the visual interpretation of point return properties. The intensity_blending parameter value, which must range from 0% (no intensity blending) to 100% (all intensity), is used to set the degree of intensity/RGB blending.

Because the output file contains RGB colour data, it is possible that it will be larger than the input file. If the input file does contain valid RGB data, the output will be similarly sized, but the input colour data will be replaced in the output file with the point-return colours.

The output file can be visualized using any point cloud renderer capable of displaying point RGB information. We recommend the plas.io LiDAR renderer but many similar open-source options exist.

See Also

colourize_based_on_point_returns, lidar_colourize

colourize_based_on_point_returns

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool sets the RGB colour values of a LiDAR point cloud (input) based on the point returns. It specifically renders only-return, first-return, intermediate-return, and last-return points in different colours, storing these data in the RGB colour data of the output LiDAR file (output). Colourizing the points in a LiDAR point cloud based on return properties can aid with the visual inspection of point distributions, and therefore, the quality assurance/quality control (QA/QC) of LiDAR data tiles. For example, this visualization process can help to determine if there are areas of vegetation where there is insufficient coverage of ground points, perhaps due to acquisition of the data during leaf-on conditions. There is often an assumption in LiDAR data processing that the ground surface can be modelled using a subset of the only-return and last-return points (beige and blue in the image below). However, under heavy forest cover, and in particular if the data were collected during leaf-on conditions or if there is significant coverage of conifer trees, the only-return and last-return points may be poor approximations of the ground surface. This tool can help to determine the extent to which this is the case for a particular data set.

The specific colours used to render each return type can be set by the user with the only, first, intermediate, and last parameters. Each parameter takes either a red-green-blue (RGB) colour triplet, of the form (r,g,b), or a hex-colour, of either form #e6d6aa or 0xe6d6aa (note the # and 0x prefixes used to indicate hexadecimal numbers; also either lowercase or capital letter values are acceptable).

The point RGB colour values can be blended with the intensity data to create a particularly effective visualization, further enhancing the visual interpretation of point return properties. The intensity_blending parameter value, which must range from 0% (no intensity blending) to 100% (all intensity), is used to set the degree of intensity/RGB blending.

Because the output file contains RGB colour data, it is possible that it will be larger than the input file. If the input file does contain valid RGB data, the output will be similarly sized, but the input colour data will be replaced in the output file with the point-return colours.

The output file can be visualized using any point cloud renderer capable of displaying point RGB information. We recommend the plas.io LiDAR renderer but many similar open-source options exist.

This tool is a convenience function and can alternatively be achieved using the modify_lidar tool with the statement:

rgb=if(is_only, (230,214,170), if(is_last, (0,0,255), if(is_first, (0,255,0), (255,0,255))))

The colourize_based_on_point_returns tool is, however, significantly faster for this operation than the modify_lidar tool, because the latter must evaluate the expression above dynamically for each point.

See Also

modify_lidar, lidar_colourize

curvedness

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the curvedness (Koenderink and van Doorn, 1992) from a digital elevation model (DEM). Curvedness is the root mean square of maximal and minimal curvatures, and measures the magnitude of surface bending, regardless of shape (Florinsky, 2017). Curvedness values are characteristically low for flat areas and higher for areas of sharp bending (Florinsky, 2017). The index is also inversely proportional to the size of the object (Koenderink and van Doorn, 1992). Curvedness has values equal to or greater than zero and is measured in units of m^-1.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Raw curvedness values are often challenging to visualize given their range and magnitude, and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Koenderink, J. J., and Van Doorn, A. J. (1992). Surface shape and curvature scales. Image and vision computing, 10(8), 557-564.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

shape_index, minimal_curvature, maximal_curvature, tangential_curvature, profile_curvature, mean_curvature, gaussian_curvature

dbscan

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs an unsupervised DBSCAN clustering operation, based on a series of input rasters (inputs). Each grid cell defines a stack of feature values (one value for each input raster), which serves as a point within the multi-dimensional feature space. The DBSCAN algorithm identifies clusters in feature space by identifying regions of high density (core points) and the set of points connected to these high-density areas. Points in feature space that are not connected to high-density regions are labeled by the DBSCAN algorithm as 'noise' and the associated grid cell in the output raster (output) is assigned the nodata value. Areas of high density (i.e. core points) are defined as those points for which the number of neighbouring points within a search distance (search_dist) is greater than some user-defined minimum threshold (min_points).

The main advantages of the DBSCAN algorithm over other clustering methods, such as k-means (k_means_clustering), are that 1) you do not need to specify the number of clusters a priori, and 2) the method does not assume that clusters have a particular shape (spherical in the case of k-means). However, DBSCAN does assume that the density of every cluster in the data is approximately equal, which may not be a valid assumption. DBSCAN may also produce unsatisfactory results if there is significant overlap among clusters, as it will aggregate the clusters. Finding search distance and minimum core-point density thresholds that apply globally to the entire data set may be very challenging or impossible for certain applications.

The DBSCAN algorithm is based on the calculation of distances in multi-dimensional space. Feature scaling is essential to the application of DBSCAN clustering, especially when the ranges of the features are different, for example, if they are measured in different units. Without scaling, features with larger ranges will have greater influence in computing the distances between points. The tool offers three options for feature-scaling (scaling), including 'None', 'Normalize', and 'Standardize'. Normalization simply rescales each of the features onto a 0-1 range. This is a good option for most applications, but it is highly sensitive to outliers because it is determined by the range of the minimum and maximum values. Standardization rescales predictors using their means and standard deviations, transforming the data into z-scores. This is a better option than normalization when you know that the data contain outlier values; however, it does assume that the feature data are somewhat normally distributed, or are at least symmetrical in distribution.

One should keep the impact of feature scaling in mind when setting the search_dist parameter. For example, if applying normalization, the entire range of values for each dimension of feature space will be bound within the 0-1 range, meaning that the search distance should be smaller than 1.0, and likely significantly smaller. If standardization is used instead, feature space is technically unbounded, although the vast majority of the data are likely to be contained within the range -2.5 to 2.5.

Because the DBSCAN algorithm calculates distances in feature-space, like many other related algorithms, it suffers from the curse of dimensionality. Distances become less meaningful in high-dimensional space because the vastness of these spaces means that distances between points are less significant (more similar). As such, if the predictor list includes insignificant or highly correlated variables, it is advisable to exclude these features during the model-building phase, or to use a dimension reduction technique such as principal_component_analysis to transform the features into a smaller set of uncorrelated predictors.

Memory Usage

The peak memory usage of this tool is approximately 8 bytes per grid cell per input raster (predictor).

See Also

k_means_clustering, modified_k_means_clustering, principal_component_analysis

dem_void_filling

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool implements a modified version of the Delta Surface Fill method of Grohman et al. (2006). It can fill voids (i.e., data holes) contained within a digital elevation model (dem) by fusing the data with a second DEM (fill) that defines the topographic surface within the void areas. The two surfaces are fused seamlessly so that the transition from the source and fill surfaces is undetectable. The fill surface need not have the same resolution as the source DEM.

The algorithm works by computing a DEM-of-difference (DoD) for each valid grid cell in the source DEM that also has a valid elevation in the corresponding location within the fill DEM. This difference surface is then used to define offsets within the near void-edge locations. The fill surface elevations are then combined with interpolated offsets, with the interpolation based on near-edge offsets, and used to define a new surface within the void areas of the source DEM in such a way that the data transitions seamlessly from the source data to the fill data. The image below provides an example of this method.

The user must specify the mean_plane_dist parameter, which defines the distance (measured in grid cells) within a void area from the void's edge. Grid cells within larger voids that are beyond this distance from their edges have their vertical offsets, needed during the fusion of the DEMs, set to the mean offset for all grid cells that have both valid source and fill elevations. Void cells that are nearer their void edges have vertical offsets that are interpolated based on nearby offset values (i.e., the DEM of difference). The interpolation uses an inverse-distance weighted (IDW) scheme, with a user-specified weight parameter (weight_value).

The edge_treatment parameter describes how the data fusion operates at the edges of voids, i.e., the first line of grid cells for which there are both source and fill elevation values. This parameter has values of "use DEM", "use Fill", and "average". Grohman et al. (2006) state that sometimes, due to a weakened signal within these marginal locations between the area of valid data and voids, the estimated elevation values are inaccurate. When this is the case, it is best to use fill elevations in the transitional areas. If this isn't the case, the "use DEM" option is better. A compromise between the two options is to average the two elevation sources.

References

Grohman, G., Kroenung, G. and Strebeck, J., 2006. Filling SRTM voids: The delta surface fill method. Photogrammetric Engineering and Remote Sensing, 72(3), pp.213-216.

Function Signature

def dem_void_filling(self, dem: Raster, fill: Raster, mean_plane_dist: int = 20, edge_treatment: str = "dem", weight_value: float = 2.0) -> Raster: ...
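
For example, a brief sketch fusing a source DEM containing voids with a coarser fill surface; file names and parameter values are illustrative only:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

source = wbe.read_raster('dem_with_voids.tif')   # placeholder source DEM containing voids
fill = wbe.read_raster('fill_surface.tif')       # placeholder fill DEM; resolution may differ

filled = wbe.dem_void_filling(source, fill, mean_plane_dist=20, edge_treatment='average', weight_value=2.0)
wbe.write_raster(filled, 'dem_filled.tif')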

depth_to_water

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the cartographic depth-to-water (DTW) index described by Murphy et al. (2009). The DTW index has been shown to be related to soil moisture, and is useful for identifying low-lying positions that are likely to experience surface saturated conditions. In this regard, it is similar to each of wetness_index, elevation_above_stream (HAND), and probability-of-depressions (i.e. stochastic_depression_analysis).

The index is the cumulative slope gradient along the least-slope path connecting each grid cell in an input DEM (dem) to a surface water cell. Tangent slope (i.e. rise / run) is calculated for each grid cell based on the neighbouring elevation values in the input DEM. The algorithm operates much like a cost-accumulation analysis (cost_distance), where the cost of moving through a cell is determined by the cell's tangent slope value and the distance travelled. Therefore, lower DTW values are associated with wetter soils and higher values indicate drier conditions, over longer time periods. Areas of surface water have DTW values of zero. The user must input surface water features, including vector stream lines (streams) and/or vector waterbody polygons (lakes, i.e. lakes, ponds, wetlands, etc.). At least one of these two optional water feature inputs must be specified. The tool internally rasterizes these vector features, setting the DTW value in the output raster to zero. DTW tends to increase with greater distances from surface water features, and increases more slowly in flatter topography and more rapidly in steeper settings. Murphy et al. (2009) state that DTW is a probabilistic model that assumes uniform soil properties, climate, and vegetation.

Note that DTW values are highly dependent upon the accuracy and extent of the input streams/lakes layer(s).

References

Murphy, P.N.C., Ogilvie, J., and Arp, P.A. (2009) Topographic modelling of soil moisture conditions: a comparison and verification of two models. European Journal of Soil Science, 60, 94–109, DOI: 10.1111/j.1365-2389.2008.01094.x.

See Also

wetness_index, elevation_above_stream, stochastic_depression_analysis

difference_curvature

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the difference curvature from a digital elevation model (DEM). Difference curvature is half of the difference between profile and tangential curvatures, sometimes called the vertical and horizontal curvatures (Shary, 1995). This variable has an unbounded range that can take either positive or negative values. Florinsky (2017) states that difference curvature measures the extent to which the relative deceleration of flows (measured by the vertical curvature, kv) is higher than flow convergence at a given point of the topographic surface. Difference curvature is measured in units of m^-1.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary PA (1995) Land surface in gravity points classification by a complete system of curvatures. Mathematical Geology 27: 373–390.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

profile_curvature, tangential_curvature, rotor, minimal_curvature, maximal_curvature, mean_curvature, gaussian_curvature

evaluate_training_sites

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs an evaluation of the reflectance properties of a multi-spectral image dataset for a group of digitized class polygons. This is often viewed as the first step in a supervised classification procedure, such as those performed using the min_dist_classification or parallelepiped_classification tools. The analysis is based on a series of one or more input images (inputs) and an input polygon vector file (polys). The user must also specify the attribute name (field), within the attribute table, containing the class ID associated with each feature in the input polygon vector. A single class may be designated by multiple polygon features in the test site polygon vector. Note that the input polygon file is generally created by digitizing training areas of exemplar reflectance properties for each class type. The input polygon vector should be in the same coordinate system as the input multi-spectral images. The input images must represent a multi-spectral data set made up of individual bands. Do not input colour composite images. Lastly, the user must specify the name of the output HTML file. This file will contain a series of box-and-whisker plots, one for each band in the multi-spectral data set, that visualize the distribution of each class in the associated bands. This can be helpful in determining the overlap between spectral properties for the classes, which may be useful if further class or test site refinement is necessary. For a subsequent supervised classification to be successful, each class should not overlap significantly with the other classes in at least one of the input bands. If this is not the case, the user may need to refine the class system.

See Also

min_dist_classification, parallelepiped_classification

Function Signature

def evaluate_training_sites(self, input_rasters: List[Raster], training_polygons: Vector, class_field_name: str, output_html_file: str) -> None: ...
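
A brief sketch based on the signature above; the band file names and the 'CLASS' attribute field are hypothetical:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()

# Individual multi-spectral bands (placeholders), not a colour composite.
bands = [wbe.read_raster(f'band{i}.tif') for i in range(1, 5)]
training_polys = wbe.read_vector('training_sites.shp')

# 'CLASS' is the attribute holding each polygon's class ID.
wbe.evaluate_training_sites(bands, training_polys, 'CLASS', 'training_report.html')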

filter_lidar

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

The filter_lidar function is a very powerful tool for filtering points within a LiDAR point cloud based on point properties. Complex filter statements (statement) can be used to include or exclude points in the output file (output).

Note that if the user does not specify the optional input LiDAR file (input), the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be useful for processing a large number of LiDAR files in batch mode. When this batch mode is applied, the output file names will be the same as the input file names but with a '_filtered' suffix added to the end.

Points are either included or excluded from the output file by creating conditional filter statements. Statements must be valid Rust syntax and evaluate to a Boolean. Any of the following variables are acceptable within the filter statement:

Variable Name - Description
x - The point x coordinate
y - The point y coordinate
z - The point z coordinate
intensity - The point intensity value
ret - The point return number
nret - The point number of returns
is_only - True if the point is an only return (i.e. ret == nret == 1), otherwise false
is_multiple - True if the point is a multiple return (i.e. nret > 1), otherwise false
is_early - True if the point is an early return (i.e. ret == 1), otherwise false
is_intermediate - True if the point is an intermediate return (i.e. ret > 1 && ret < nret), otherwise false
is_late - True if the point is a late return (i.e. ret == nret), otherwise false
is_first - True if the point is a first return (i.e. ret == 1 && nret > 1), otherwise false
is_last - True if the point is a last return (i.e. ret == nret && nret > 1), otherwise false
class - The class value in numeric form, e.g. 0 = Never classified, 1 = Unclassified, 2 = Ground, etc.
is_noise - True if the point is classified noise (i.e. class == 7 or class == 18), otherwise false
is_synthetic - True if the point is synthetic, otherwise false
is_keypoint - True if the point is a keypoint, otherwise false
is_withheld - True if the point is withheld, otherwise false
is_overlap - True if the point is an overlap point, otherwise false
scan_angle - The point scan angle
scan_direction - True if the scanner is moving from the left towards the right, otherwise false
is_flightline_edge - True if the point is situated along the flightline edge, otherwise false
user_data - The point user data
point_source_id - The point source ID
scanner_channel - The point scanner channel
time - The point GPS time, if it exists, otherwise 0
red - The point red value, if it exists, otherwise 0
green - The point green value, if it exists, otherwise 0
blue - The point blue value, if it exists, otherwise 0
nir - The point near infrared value, if it exists, otherwise 0
pt_num - The point number within the input file
n_pts - The number of points within the file
min_x - The file minimum x value
mid_x - The file mid-point x value
max_x - The file maximum x value
min_y - The file minimum y value
mid_y - The file mid-point y value
max_y - The file maximum y value
min_z - The file minimum z value
mid_z - The file mid-point z value
max_z - The file maximum z value
dist_to_pt - The distance from the point to a specified xy or xyz point, e.g. dist_to_pt(562500, 4819500) or dist_to_pt(562500, 4819500, 320)
dist_to_line - The distance from the point to the line passing through two xy points, e.g. dist_to_line(562600, 4819500, 562750, 4819750)
dist_to_line_seg - The distance from the point to the line segment defined by two xy end-points, e.g. dist_to_line_seg(562600, 4819500, 562750, 4819750)
within_rect - 1 if the point falls within the bounds of a 2D or 3D rectangle, otherwise 0. Bounds are defined as within_rect(ULX, ULY, LRX, LRY) or within_rect(ULX, ULY, ULZ, LRX, LRY, LRZ)

In addition to the point properties defined above, if the user applies the lidar_eigenvalue_features tool on the input LiDAR file, the filter_lidar tool will automatically read in the additional *.eigen file, which includes the eigenvalue-based point neighbourhood measures, such as lambda1, lambda2, lambda3, linearity, planarity, sphericity, omnivariance, eigentropy, slope, and residual. See the lidar_eigenvalue_features documentation for details on each of these metrics describing the structure and distribution of points within the neighbourhood surrounding each point in the LiDAR file.

Statements can be as simple or complex as desired. For example, to filter out all points that are classified noise (i.e. class numbers 7 or 18):

!is_noise

The following is a statement to retain only the late returns from the input file (i.e. both last and single returns):

ret == nret

Notice that equality uses the == symbol and inequality uses the != symbol. As an equivalent to the above statement, we could have used the is_late point property:

is_late

If we want to remove all points outside of a range of xy values:

x >= 562000 && x <= 562500 && y >= 4819000 && y <= 4819500

Notice how we can combine multiple constraints using the && (logical AND) and || (logical OR) operators. As an alternative to the above statement, we could have used the within_rect function:

within_rect(562000, 4819500, 562500, 4819000)

If we want instead to exclude all of the points within this defined region, rather than to retain them, we simply use the ! (logical NOT):

!(x >= 562000 && x <= 562500 && y >= 4819000 && y <= 4819500)

or, simply:

!within_rect(562000, 4819500, 562500, 4819000)

If we need to find all of the ground points within 150 m of (562000, 4819500), we could use:

class == 2 && dist_to_pt(562000, 4819500) <= 150.0

The following statement outputs all non-vegetation classed points in the upper-right quadrant:

!(class == 3 || class == 4 || class == 5) && x > min_x + (max_x - min_x) / 2.0 && y > max_y - (max_y - min_y) / 2.0

As demonstrated above, the filter_lidar tool provides an extremely flexible, powerful, and easy means for retaining and removing points from point clouds based on any of the common LiDAR point attributes.
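
The Python signature of filter_lidar is not reproduced in this section, so the following is only a hedged sketch: it assumes the function accepts the input Lidar object and the filter statement and returns the filtered Lidar (check help(wbe.filter_lidar) for the authoritative form), and the file names are hypothetical:

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()  # a WbW-Pro license is required for filter_lidar
lidar = wbe.read_lidar('tile.las')  # hypothetical input file

# Retain only late-return ground points (assumed argument order: input, statement)
filtered = wbe.filter_lidar(lidar, 'is_late && class == 2')
wbe.write_lidar(filtered, 'tile_filtered.las')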

See Also

filter_lidar_classes, filter_lidar_scan_angles, modify_lidar, erase_polygon_from_lidar, clip_lidar_to_polygon, sort_lidar, lidar_eigenvalue_features

filter_lidar_by_percentile

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to extract a subset of points from an input LiDAR point cloud (input_lidar) that correspond to a user-specified percentile of the points within their local neighbourhood. The algorithm works by overlaying a grid of a specified block size (block_size) on the point cloud. The group of LiDAR points contained within each block of the superimposed grid is identified and sorted by elevation. The point with the elevation that corresponds most closely to the specified percentile is then inserted into the output LiDAR point cloud. For example, if percentile = 0.0, the lowest point within each block will be output; if percentile = 100.0, the highest point will be output; and if percentile = 50.0, the point nearest the median elevation will be output. Notice that the lower the number of points contained within a block, the more approximate the calculation will be. For example, if a block only contains three points, no single point occupies the 25th percentile. The equation used to identify the closest corresponding point (zero-based) from a list of n values sorted by elevation is:

point_num = ⌊percentile / 100.0 * (n - 1)⌉

Increasing the block size (default is 1.0 xy-units) will increase the average number of points within blocks, allowing for a more accurate percentile calculation.

Like many of the LiDAR functions, the input LiDAR point cloud (input_lidar) is optional. If an input LiDAR file is not specified, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be very useful when you need to process a large number of LiDAR files contained within a directory. This batch processing mode enables the function to run in a more optimized, parallel manner. When run in this batch mode, no output LiDAR object will be created. Instead, the function will write an output file to disk for each input LiDAR file, using the same file name as the input. This can provide a very efficient means for processing extremely large LiDAR data sets.

See Also

filter_lidar, lidar_block_minimum, lidar_block_maximum

Function Signature

def filter_lidar_by_percentile(self, input_lidar: Optional[Lidar], percentile: float = 0.0, block_size: float = 1.0) -> Optional[Lidar]: ...
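
For example, a minimal sketch (with a hypothetical file name) extracting the lowest point in each 2 m block of a tile might look like this:

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
lidar = wbe.read_lidar('tile.las')  # hypothetical input file

# percentile = 0.0 retains the lowest point within each 2 m x 2 m block
lowest = wbe.filter_lidar_by_percentile(lidar, percentile=0.0, block_size=2.0)
wbe.write_lidar(lowest, 'tile_lowest.las')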

filter_lidar_by_reference_surface

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to extract a subset of points from an input LiDAR point cloud (input_lidar) that satisfy a query relation with a user-specified raster reference surface (ref_surface). For example, you may use this function to extract all of the points that are below (query="<" or query="<=") or above (query=">" or query=">=") a surface model. The default query mode is "within" (i.e. query="within"), which extracts all of the points that are within a specified absolute vertical distance (threshold) of the surface. Notice that the threshold parameter is ignored for query types other than "within".

Unlike many of the LiDAR functions, this function does not have a batch mode and operates on single tiles only.

See Also

filter_lidar

Function Signature

def filter_lidar_by_reference_surface(self, input_lidar: Lidar, ref_surface: Raster, query: str = "within", threshold: float = 0.0) -> Lidar: ...
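
The following minimal sketch (hypothetical file names) extracts the points lying within 0.25 m of a raster surface model:

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
lidar = wbe.read_lidar('tile.las')     # hypothetical point cloud
surface = wbe.read_raster('dtm.tif')   # hypothetical reference surface

near_surface = wbe.filter_lidar_by_reference_surface(lidar, surface, query="within", threshold=0.25)
wbe.write_lidar(near_surface, 'tile_near_surface.las')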

fix_dangling_arcs

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to fix undershot and overshot arcs, two common topological errors, in an input vector lines file (input). In addition to the input lines vector, the user must also specify the output vector (output) and the snap distance (snap). All dangling arcs that are within this threshold snap distance of another line feature will be connected to the neighbouring feature. If the input lines network is a vector stream network, users are advised to apply the repair_stream_vector_topology tool instead.

See Also

repair_stream_vector_topology, clean_vector

generalize_classified_raster

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to generalize a raster containing class or object features. Such rasters are usually derived from some classification procedure (e.g. image classification or landform classification), or as the output of a segmentation procedure (image_segmentation). Rasters created in this way often contain many very small features that make their interpretation, or vectorization, challenging. Therefore, it is common for practitioners to remove the smaller features. Many different approaches have been used for this task in the past. For example, it is common to remove small features using a filtering-based approach (majority_filter). While this can be an effective strategy, it does have the disadvantage of modifying all of the boundaries in the class raster, including those that define larger features. In many applications, this can be a serious concern.

The generalize_classified_raster tool offers an alternative method for simplifying class rasters. The process begins by identifying each contiguous group of cells in the input (i.e. a clumping operation) and then defines the subset of features that are smaller than the user-specified minimum feature size (min_size), in grid cells. This set of small features is then dealt with using one of three methods (method). In the first method (longest), a small feature may be reassigned the class value of the neighbouring feature with the longest shared border. The sum of the neighbouring feature size and the small feature size must be larger than the specified size threshold, and the tool will iterate through this process of reassigning feature values to neighbouring values until each small feature has been resolved.

The second method, largest, operates in much the same way as the first, except that objects are reassigned the value of the largest neighbour. Again, this process of reassigning small feature values iterates until every small feature has been reassigned to a large neighbouring feature.

The third and last method (nearest) takes a different approach to resolving the reassignment of small features. Using the nearest generalization approach, each grid cell contained within a small feature is reassigned the value of the nearest large neighbouring feature. When there are two or more neighbouring features that are equidistant from a small feature cell, the cell will be reassigned to the largest neighbour. Perhaps the most significant disadvantage of this approach is that it creates new artificial boundaries in the output image that are not contained within the input class raster. That is, with the previous two methods, boundaries associated with smaller features in the input images are 'erased' in the output map, but every boundary in the output raster exactly matches boundaries within the input raster (i.e. the output boundaries are a subset of the input feature boundaries). However, with the nearest method, artificial boundaries, determined by the divide between nearest neighbours, are introduced to the output raster and these new feature boundaries do not have any basis in the original classification/segmentation process. Thus, caution should be exercised when using this approach, especially when larger minimum size thresholds are used. The longest method is the recommended approach to class feature generalization.

For a video tutorial on how to use the generalize_classified_raster tool, see this YouTube video.

See Also

generalize_with_similarity, majority_filter, image_segmentation

Function Signature

def generalize_classified_raster(self, raster: Raster, area_threshold: int = 5, method: str = "longest") -> Raster: ...
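
A minimal usage sketch (hypothetical file name), removing class features smaller than 25 grid cells with the recommended longest-shared-border method, might be:

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
classes = wbe.read_raster('landcover_classes.tif')  # hypothetical classified raster

generalized = wbe.generalize_classified_raster(classes, area_threshold=25, method="longest")
wbe.write_raster(generalized, 'landcover_generalized.tif')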

generalize_with_similarity

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to generalize a raster containing class features (input) by reassigning the identifier values of small features (min_size) to those of neighbouring features. Therefore, this tool performs a very similar operation to the generalize_classified_raster tool. However, while the generalize_classified_raster tool re-labels small features based on the geometric properties of neighbouring features (e.g. neighbour with the longest shared border, largest neighbour, or nearest neighbour), the generalize_with_similarity tool reassigns feature labels based on similarity with neighbouring features. Similarity is determined using a series of input similarity criteria rasters (similarity), which may be factors used in the creation of the input class raster. For example, the similarity rasters may be bands of multi-spectral imagery, if the input raster is a classified land-cover map, or DEM-derived land surface parameters, if the input raster is a landform class map.

The tool works by identifying each contiguous group of pixels (features) in the input class raster (input), i.e. a clumping operation. The mean value is then calculated for each feature and each similarity input, which defines a multi-dimensional 'similarity centre point' associated with each feature. It should be noted that the similarity raster data are standardized prior to calculating these centre point values. Lastly, the tool then reassigns the input label values of all features smaller than the user-specified minimum feature size (min_size) to that of the neighbouring feature with the shortest distance between similarity centre points.

For small features that are entirely enclosed by a single larger feature, this process will result in the same generalization solution presented by any of the geometric-based methods of the generalize_classified_raster tool. However, for small features that have more than one neighbour, this tool may provide a superior generalization solution than those based solely on geometric information.

For a video tutorial on how to use the generalize_with_similarity tool, see this YouTube video.

See Also

generalize_classified_raster, majority_filter, image_segmentation

Function Signature

def generalize_with_similarity(self, raster: Raster, similarity_rasters: List[Raster], area_threshold: int = 5) -> Raster: ...
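
A comparable sketch for the similarity-based approach (hypothetical file names), using the original image bands as the similarity criteria, might be:

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
classes = wbe.read_raster('landcover_classes.tif')              # hypothetical classified raster
bands = [wbe.read_raster(f'band{i}.tif') for i in range(1, 5)]  # hypothetical similarity rasters

generalized = wbe.generalize_with_similarity(classes, bands, area_threshold=25)
wbe.write_raster(generalized, 'landcover_generalized.tif')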

generating_function

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the generating function (Shary and Stepanov, 1991) from a digital elevation model (DEM). Florinsky (2016) describes the generating function as a measure of the deflection of tangential curvature from loci of extreme curvature of the topographic surface. Florinsky (2016) demonstrated the application of this variable for identifying landscape structural lines, i.e. ridges and thalwegs, for which the generating function takes values near zero. Ridges coincide with divergent areas where the generating function is approximately zero, while thalwegs are associated with convergent areas with generating function values near zero. This variable takes values of zero or greater and is measured in units of m⁻².

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Raw generating function values are often challenging to visualize given their range and magnitude, and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.
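
To make the transform concrete, the following NumPy sketch (an illustration of the equation above, not the tool's internal code; the choice of n is left to the user) applies it element-wise to an array of raw values:

import numpy as np

def shary_log_transform(theta: np.ndarray, n: float) -> np.ndarray:
    # Θ' = sign(Θ) ln(1 + 10^n |Θ|), applied element-wise
    return np.sign(theta) * np.log(1.0 + 10.0**n * np.abs(theta))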

This tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems, however, this tool cannot use the 3x3 polynomial fitting method for equal angle grids, also described by Florinsky (2016), that is used by the other curvature tools in this software. That is because the generating function uses 3rd-order partial derivatives, which cannot be calculated from the nine elevations of a 3x3 window; more elevation values are required (i.e. a 5x5 window). Thus, this tool uses the same 5x5 method used for DEMs in projected coordinate systems, and calculates the average linear distance between neighbouring cells in the vertical and horizontal directions using the Vincenty distance function. Note that this may cause a notable slow-down in algorithm performance and yields a lower accuracy than would be achieved using an equal angle method, because it assumes a square pixel (in linear units).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Koenderink, J. J., and Van Doorn, A. J. (1992). Surface shape and curvature scales. Image and vision computing, 10(8), 557-564.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

Shary P. A. and Stepanov I. N. (1991) Application of the method of second derivatives in geology. Transactions (Doklady) of the USSR Academy of Sciences, Earth Science Sections 320: 87–92.

See Also

shape_index, minimal_curvature, maximal_curvature, tangential_curvature, profile_curvature, mean_curvature, gaussian_curvature

horizon_area

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates horizon area, i.e., the area of the horizon polygon centered on each point in an input digital elevation model (DEM). Horizon area is therefore conceptually related to the viewshed and visibility_index functions. Horizon area can be thought of as an approximation of viewshed area, and its spatial distribution is therefore faster to calculate than that of the visibility index. Horizon area is measured in hectares.

The user must specify an input DEM (dem), the azimuth fraction (az_fraction), the maximum search distance (max_dist), and the height offset of the observer (observer_hgt_offset). The input DEM should usually be a digital surface model (DSM) that contains significant off-terrain objects. Such a model, for example, could be created using the first-return points of a LiDAR data set, or using the lidar_digital_surface_model tool. The azimuth fraction should be an even divisor of 360-degrees and must be between 1-45 degrees.

The tool operates by calculating horizon angle (see horizon_angle) rasters from the DSM based on the user-specified azimuth fraction (az_fraction). For example, if an azimuth fraction of 15-degrees is specified, horizon angle rasters would be calculated for the solar azimuths 0, 15, 30, 45... A horizon angle raster evaluates the vertical angle between each grid cell in a DSM and a distant obstacle (e.g. a mountain ridge, building, tree, etc.) that obscures the view in a specified direction. In calculating horizon angle, the user must specify the maximum search distance (max_dist), in map units, beyond which the query for higher, more distant objects will cease. This parameter strongly impacts the performance of the function, with larger values resulting in significantly longer processing-times.

For each evaluated direction, the coordinates of the horizon point are determined using the azimuth and the distance to the horizon, with each point then serving as a vertex in a horizon polygon. The shoelace algorithm is used to measure the area of each grid cell's horizon polygon, which is then reported in the output raster.

The observer_hgt_offset parameter can be used to add an increment to the source cell's elevation. For example, the following image shows the spatial pattern derived from a LiDAR DSM using observer_hgt_offset = 0.0:

Notice that there are several places, particularly on the flatter rooftops, where the local noise in the LiDAR DEM, associated with the individual scan lines, has resulted in a noisy pattern in the output. By adding a small height offset on the scale of this noise variation (0.15 m), we see that most of this noisy pattern is removed in the output below:

As another example, in the image below, the observer_hgt_offset parameter has been used to measure the pattern of the index at a typical human height (1.7 m):

Notice how, at this height, a much larger area becomes visible from some of the flat rooftops, where a guard wall blocks the view at shorter observer heights.

See Also

sky_view_factor, average_horizon_distance, openness, lidar_digital_surface_model, horizon_angle

Function Signature

def horizon_area(self, dem: Raster, az_fraction: float = 5.0, max_dist: float = float('inf'), observer_hgt_offset: float = 0.0) -> Raster: ...
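
For example, the pattern at a typical human observer height could be generated with a minimal sketch like the following (hypothetical file name):

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()
dsm = wbe.read_raster('city_dsm.tif')  # hypothetical LiDAR-derived DSM

ha = wbe.horizon_area(dsm, az_fraction=15.0, max_dist=500.0, observer_hgt_offset=1.7)
wbe.write_raster(ha, 'horizon_area.tif')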

horizontal_excess_curvature

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the horizontal excess curvature from a digital elevation model (DEM). Horizontal excess curvature is the difference between the tangential (horizontal) and minimal curvatures at a location (Shary, 1995). This variable takes values of zero or greater. Florinsky (2017) states that horizontal excess curvature measures the extent to which the bending of a normal section tangential to a contour line is larger than the minimal bending at a given point of the surface. Horizontal excess curvature is measured in units of m⁻¹.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary PA (1995) Land surface in gravity points classification by a complete system of curvatures. Mathematical Geology 27: 373–390.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

tangential_curvature, profile_curvature, minimal_curvature, maximal_curvature, mean_curvature, gaussian_curvature

hydrologic_connectivity

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Theory

This tool calculates two indices related to hydrologic connectivity within catchments, the downslope unsaturated length (DUL) and the upslope disconnected saturated area (UDSA). Both of these hydrologic indices are based on the topographic wetness index (wetness_index), which measures the propensity for a site to be saturated to the surface, and therefore, to contribute to surface runoff. The wetness index (WI) is commonly used in hydrologic modelling, and famously in the TOPMODEL, to simulate variable source area (VSA) dynamics within catchments. The VSA is a dynamic region of surface-saturated soils within catchments that contributes fast overland flow to downslope streams during periods of precipitation. As a catchment's soil saturation deficit decreases ('wetting up'), areas with increasingly lower WI values become saturated to the surface. That is, areas of high WI are the first to become saturated and as the moisture deficit decreases, lower WI-valued cells become saturated, increasing the spatial extent of the source area. As a catchment dries out, the opposite effect occurs. The distribution of WI can therefore be used to map the spatial dynamics of the VSA. However, the assumption in the TOPMODEL is that any rainfall over surface saturated areas will contribute to fast overland flow pathways and to stream discharge within the time step.

This method therefore implicitly assumes that all surface saturated grid cells are connected by continuously saturated areas along the downslope flow path connecting the cells to the stream. By comparison, Lane et al. (2004) proposed a modified WI, known as the network index (NI), which allowed for the modelling of disconnected, non-contributing saturated areas. The NI is essentially the downslope minimum WI. Grid cells for which WI > NI are likely to be disconnected from downslope streams under certain conditions, while other, similarly WI-valued cells are contributing. During these periods, any surface runoff from these cells is likely to contribute to downslope re-infiltration rather than directly to stream discharge via overland flow. This has implications for the timing and quality of stream discharge.

The DUL and UDSA indices extend the notion of the NI by mapping areas within catchments that are likely, at least during certain periods, to be sites of disconnected, non-contributing saturated areas and sites of re-infiltration, respectively. These combined indices allow hydrologists to study the hydrologic connectivity and disconnectivity among areas within catchments.

The DUL (see image below) is defined for a grid cell as the number of downslope cells with a WI value lower than the current cell. Areas with non-zero DUL are likely to become fully saturated, and to contribute to overland flow, before they are directly connected to downslope areas and can contribute to stream flow. Under the appropriate catchment saturation deficit conditions, these are sites of disconnected, non-contributing saturated areas. When non-zero DUL cells are initially saturated, their precipitation excess will contribute to downslope re-infiltration, lessening the catchment's overall saturation deficit, rather than contributing to stormflow.

The UDSA (see image below) is defined for a grid cell as the number of upslope cells with a WI value higher than the current cell. Areas with non-zero UDSA are likely to have saturation deficits that are at least partly satisfied by local re-infiltration of overland flow from upslope areas. These non-zero UDSA cells are key sites causing the hydrologic disconnectivity of the catchment during certain conditions.

In the original Lane et al. (2004) NI paper, the authors state that the calculation of the index requires a unique, single downslope flow path for each grid cell. Therefore, the authors used the D8 single-direction flow algorithm to calculate NI. While the D8 method works well to model flow in convergent and channelized areas, it is generally recognized as a poor method for estimating WI on hillslopes, where divergent, non-channelized flow dominates. Furthermore, the use of the D8 algorithm implies that the only way WI can decrease downslope is for the slope gradient to decrease, since specific contributing area only increases downslope with the D8 method. However, theoretically, WI may also decrease downslope due to flow dispersion, which allows for the upslope area (a surrogate for discharge) to be spread over a larger downslope dispersal area. The original NI formulation could not account for this effect.

Thus, in the implementation of the hydrologic_connectivity tool, WI is first calculated using the multiple flow-direction (MFD) algorithm described by Quinn et al. (1995), which is commonly used to estimate WI. While this implies that there are a multitude of potential flow pathways connecting each grid cell to a downstream location, in reality, if the flow path that follows the path of maximum WI issuing from a cell experiences a reduction in WI (to the point where it becomes less than the issuing cell's WI), then we can safely assume that re-infiltration occurs and the issuing cell is at times disconnected from downslope sites. Thus, after WI has been estimated using the quinn_flow_accumulation algorithm, flow directions, which are used to calculate upslope and downslope flow paths for calculating the two indices, are approximated by identifying the downslope neighbour of highest WI value for each grid cell.

Operation

The user must specify the name of the input digital elevation model (DEM; dem), and the output DUL and UDSA rasters (output1 and output2). The DEM must have been hydrologically corrected to remove all spurious depressions and flat areas. DEM pre-processing is usually achieved using either the breach_depressions_least_cost or fill_depressions tool. The remaining two parameters are associated with the calculation of the Quinn et al. (1995) flow accumulation (quinn_flow_accumulation), used to estimate WI. A value must be specified for the exponent parameter (exponent), a number that controls the degree of dispersion in the flow-accumulation grid. A lower value yields greater apparent flow dispersion across divergent hillslopes. The exponent value (h) should probably be less than 10.0 and values between 1 and 2 are most common. The following equations are used to calculate the portion of flow (F_i) given to each neighbour, i:

F_i = L_i(tanβ)^p / Σ_{i=1..n} [L_i(tanβ)^p]

p = (A / threshold + 1)^h

where L_i is the contour length (0.5 × grid size for cardinal directions and 0.354 × grid size for diagonal directions), n = 8 is the number of neighbouring grid cells, and A is the flow accumulation value assigned to the current grid cell that is being apportioned downslope. The non-dispersive, channel initiation threshold (threshold) is a flow-accumulation value (measured in upslope grid cells, which is directly proportional to area) above which flow dispersion is no longer permitted. Grid cells with flow-accumulation values above this threshold will have their flow routed in a manner similar to the D8 single-flow-direction algorithm, directing all flow towards the steepest downslope neighbour. This is usually done under the assumption that flow dispersion, whilst appropriate on hillslope areas, is not realistic once flow becomes channelized. Importantly, the threshold parameter sets the spatial extent of the stream network, with lower values resulting in more extensive networks.
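
The following NumPy sketch (illustrative only, with made-up neighbour values; it is not the tool's internal code) shows how the apportioning equations above behave for a single cell:

import numpy as np

cell_size = 10.0
# Contour lengths: 0.5 x grid size (cardinal) and 0.354 x grid size (diagonal)
L = np.array([0.5, 0.354, 0.5, 0.354, 0.5, 0.354, 0.5, 0.354]) * cell_size
# Hypothetical downslope gradients to the eight neighbours (0 = not downslope)
tan_beta = np.array([0.10, 0.05, 0.0, 0.0, 0.02, 0.0, 0.0, 0.08])

A = 250.0           # flow accumulation of the current cell (in upslope grid cells)
threshold = 1000.0  # channel initiation threshold
h = 1.0             # user-specified dispersion exponent

p = (A / threshold + 1.0) ** h
w = L * tan_beta ** p
F = w / w.sum()     # F_i: fraction of flow routed to each downslope neighbour
print(A * F)        # flow accumulation apportioned to the neighbours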

References

Beven K.J., Kirkby M.J., 1979. A physically-based, variable contributing area model of basin hydrology. Hydrological Sciences Bulletin 24: 43–69.

Lane, S.N., Brookes, C.J., Kirkby, M.J. and Holden, J., 2004. A network‐index‐based version of TOPMODEL for use with high‐resolution digital topographic data. Hydrological processes, 18(1), pp.191-201.

Quinn, P. F., K. J. Beven, Lamb, R. 1995. The in (a/tanβ) index: How to calculate it and how to use it within the topmodel framework. Hydrological processes 9(2): 161-182.

See Also

wetness_index, quinn_flow_accumulation

image_segmentation

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool is used to segment a multi-spectral image data set, or multi-dimensional data stack. The algorithm is based on region-growing operations. Each of the input images is transformed into standard scores prior to analysis. The total multi-dimensional distance between each pixel and its eight neighbours is measured, which then serves as a priority value for selecting potential seed pixels for the region-growing operations, with pixels exhibiting the least difference from their neighbours being more likely to serve as seeds. The region-growing operations initiate at seed pixels and grow outwards, connecting neighbouring pixels that have a multi-dimensional distance from the seed cell that is less than a threshold value. Thus, the region-growing operations attempt to identify contiguous, relatively homogeneous objects. The algorithm stratifies potential seed pixels into bands, based on their total difference with their eight neighbours. The user may control the size and number of these bands using the threshold and steps parameters respectively. Increasing the magnitude of the threshold parameter will result in fewer mapped objects and vice versa. All pixels that are not assigned to an object after the seeding-based region-growing operations are then clumped simply based on contiguity.

It is commonly the case that there will be a large number of very small-sized objects identified using this approach. The user may optionally specify that objects that are less than a minimum area (expressed in pixels) be eliminated from the final output raster. The min_area parameter must be an integer between 1 and 8. In cleaning small objects from the output, the pixels belonging to these smaller features are assigned to the most homogeneous neighbouring object.

The input rasters (inputs) may be bands of satellite imagery, or any other attribute, such as measures of texture, elevation, or other topographic derivatives, such as slope. If satellite imagery is used as inputs, it can be beneficial to pre-process the data with an edge-preserving low-pass filter, such as the bilateral_filter and edge_preserving_mean_filter tools.

See Also

bilateral_filter, edge_preserving_mean_filter

image_slider

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool creates an interactive image slider from two input images (input1 and input2). An image slider is an interactive visualization of two overlapping images, in which the user moves the position of a slider bar to hide or reveal one of the overlapping images. The output (output) is an HTML file. Each of the two input images may be rendered in one of several available palettes. If the input image is a colour composite image, no palette is required. Labels may also be optionally associated with each of the images, displayed in the upper left and right corners. The user must also specify the image height (height) in the output file. Note that the output is simply HTML, CSS, and JavaScript code, which can be readily embedded in other documents.

The following is an example of what the output of this tool looks like. Click the image for an interactive example.

improved_ground_point_filter

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This function provides a faster alternative to the lidar_ground_point_filter algorithm, provided in the free version of Whitebox Workflows, for the extraction of ground points from within a LiDAR point cloud. The algorithm works by placing a grid of a specified resolution (block_size, in xy-units) over the point cloud and identifying the subset of LiDAR points associated with the lowest position in each block. A raster surface is then created by TINing these points. The surface is further processed by removing any off-terrain objects (OTOs), including buildings smaller than the max_building_size parameter (xy-units). Removing OTOs also requires the user to specify a slope_threshold value, in degrees. Finally, the algorithm extracts all of the points in the input LiDAR point cloud (input) that are within a specified absolute vertical distance (elev_threshold) of this surface model.

Conceptually, this method of ground-point filtering is somewhat similar to the cloth-simulation approach of Zhang et al. (2016). The difference is that the cloth is first fitted to the minimum surface with infinite flexibility and its rigidity is subsequently increased, via the identification and removal of OTOs from the minimal surface. The slope_threshold parameter effectively controls the eventual rigidity of the fitted surface.

Compared with the lidar_ground_point_filter algorithm, the improved_ground_point_filter algorithm is generally far faster and is able to more effectively remove points associated with larger buildings. Removing large buildings from point clouds with the lidar_ground_point_filter algorithm requires the use of very large search distances, which slows the operation considerably. However, lidar_ground_point_filter is perhaps more flexible overall because it provides the option to either extract ground points or simply to classify them. The improved_ground_point_filter function, by comparison, only allows for the extraction of ground points (i.e., filtering) and not the classification of ground points. Thus, this function is better suited to the efficient creation of ground-point clouds, and thereby bare-earth digital elevation models (i.e., digital terrain models), from unclassified LiDAR data.

As a comparison of the two available methods, one test tile of LiDAR containing numerous large buildings and abundant vegetation required 600.5 seconds to process on the test system using the lidar_ground_point_filter algorithm (removing all but the largest buildings) and 9.8 seconds to process using the improved_ground_point_filter algorithm (with complete building removal), i.e., 61x faster.

The original test LiDAR tile, containing abundant vegetation and buildings:

The result of applying the lidar_ground_point_filter function, with a search radius of 25 m and max inter-point slope of 15 degrees:

The result of applying the improved_ground_point_filter method, with block_size = 1.0 m, max_building_size = 150.0 m, slope_threshold = 15.0 degrees, and elev_threshold = 0.15 m:

References

Zhang, W., Qi, J., Wan, P., Wang, H., Xie, D., Wang, X., & Yan, G. (2016). An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote sensing, 8(6), 501.

See Also

lidar_ground_point_filter

Function Signature

def improved_ground_point_filter(self, input: Lidar, block_size: float = 1.0, max_building_size: float = 150.0, slope_threshold: float = 15.0, elev_threshold: float = 0.15) -> Lidar: ...
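
A minimal sketch (hypothetical file name) reproducing the parameter values used in the comparison above:

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()  # a WbW-Pro license is required for this function
lidar = wbe.read_lidar('tile.las')  # hypothetical unclassified point cloud

ground = wbe.improved_ground_point_filter(lidar, block_size=1.0,
    max_building_size=150.0, slope_threshold=15.0, elev_threshold=0.15)
wbe.write_lidar(ground, 'tile_ground.las')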

inverse_pca

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool takes two or more component images (inputs), along with the principal component analysis (PCA) report derived using the principal_component_analysis tool, and performs the inverse PCA transform to derive the original series of input images. This inverse transform is frequently performed to reduce noise within a multi-spectral image data set. With a typical PCA transform, high-frequency noise will commonly map onto the higher component images. By excluding one or more higher-valued component images from the input component list, the inverse transform can produce a set of images in the original coordinate system that exclude the information contained within the component images left out of the input list. Note that the number of output images will also equal the number of original images input to the principal_component_analysis tool. The output images will be named automatically with an "inv_PCA_image" suffix.

See Also

principal_component_analysis

knn_classification

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a supervised k-nearest neighbour (k-NN) classification using multiple predictor rasters (inputs), or features, and training data (training). It can be used to model the spatial distribution of class data, such as land-cover type, soil class, or vegetation type. The training data take the form of an input vector Shapefile containing a set of points or polygons, for which the known class information is contained within a field (field) of the attribute table. Each grid cell defines a stack of feature values (one value for each input raster), which serves as a point within the multi-dimensional feature space. The algorithm works by identifying a user-defined number (k) of feature-space neighbours from the training set for each grid cell. The class assigned to the grid cell in the output raster (output) is then determined as the most common class among this set of neighbours. Note that the knn_regression tool can be used to apply the k-NN method to the modelling of continuous data.

The user has the option to clip the training set data (clip). When this option is selected, each training pixel for which the estimated class value, based on the k-NN procedure, is not equal to the known class value, is removed from the training set before proceeding with labelling all grid cells. This has the effect of removing outlier points within the training set and often improves the overall classification accuracy.

The tool splits the training data into two sets, one for training the classifier and one for testing the classification. These test data are used to calculate the overall accuracy and Cohen's kappa index of agreement, as well as to estimate the variable importance. The test_proportion parameter is used to set the proportion of the input training data used in model testing. For example, if test_proportion = 0.2, 20% of the training data will be set aside for testing, and this subset will be selected randomly. As a result of this random selection of test data, the tool behaves stochastically, and will result in a different model each time it is run.

Note that the output image parameter (output) is optional. When unspecified, the tool will simply report the model accuracy statistics and variable importance, allowing the user to experiment with different parameter settings and input predictor raster combinations to optimize the model before applying it to classify the whole image data set.

Like all supervised classification methods, this technique relies heavily on proper selection of training data. Training sites are exemplar areas/points of known and representative class value (e.g. land cover type). The algorithm determines the feature signatures of the pixels within each training area. In selecting training sites, care should be taken to ensure that they cover the full range of variability within each class. Otherwise the classification accuracy will be impacted. If possible, multiple training sites should be selected for each class. It is also advisable to avoid areas near the edges of class objects (e.g. land-cover patches), where mixed pixels may impact the purity of training site values.

After selecting training sites, the feature value distributions of each class type can be assessed using the evaluate_training_sites tool. In particular, the distribution of class values should ideally be non-overlapping in at least one feature dimension.

The k-NN algorithm is based on the calculation of distances in multi-dimensional space. Feature scaling is essential to the application of k-NN modelling, especially when the ranges of the features are different, for example, if they are measured in different units. Without scaling, features with larger ranges will have greater influence in computing the distances between points. The tool offers three options for feature-scaling (scaling), including 'None', 'Normalize', and 'Standardize'. Normalization simply rescales each of the features onto a 0-1 range. This is a good option for most applications, but it is highly sensitive to outliers because it is determined by the range of the minimum and maximum values. Standardization rescales predictors using their means and standard deviations, transforming the data into z-scores. This is a better option than normalization when you know that the data contain outlier values; however, it does assume that the feature data are somewhat normally distributed, or are at least symmetrical in distribution.

Because the k-NN algorithm calculates distances in feature-space, like many other related algorithms, it suffers from the curse of dimensionality. Distances become less meaningful in high-dimensional space because the vastness of these spaces means that distances between points are less significant (more similar). As such, if the predictor list includes insignificant or highly correlated variables, it is advisable to exclude these features during the model-building phase, or to use a dimension reduction technique such as principal_component_analysis to transform the features into a smaller set of uncorrelated predictors.

For a video tutorial on how to use the knn_classification tool, see this YouTube video.

Memory Usage

The peak memory usage of this tool is approximately 8 bytes per grid cell × # predictors.

See Also

knn_regression, random_forest_classification, svm_classification, parallelepiped_classification, evaluate_training_sites

Function Signature

def knn_classification(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, scaling_method: str = "none", k: int = 5, test_proportion: float = 0.2, use_clipping: bool = False, create_output: bool = False) -> Optional[Raster]: ...
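
A minimal sketch (hypothetical file and attribute names) that builds a model, clips the training set, and produces a classified output raster:

import whitebox_workflows as wbw

wbe = wbw.WbEnvironment()  # a WbW-Pro license is required for this function
bands = [wbe.read_raster(f'band{i}.tif') for i in range(1, 5)]  # hypothetical predictors
training = wbe.read_vector('training_sites.shp')                # hypothetical training data

classified = wbe.knn_classification(bands, training, 'CLASS',
    scaling_method='normalize', k=8, test_proportion=0.2,
    use_clipping=True, create_output=True)
if classified is not None:
    wbe.write_raster(classified, 'knn_landcover.tif')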

knn_regression

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a supervised k-nearest neighbour (k-NN) regression analysis using multiple predictor rasters (inputs), or features, and training data (training). It can be used to model the spatial distribution of continuous data, such as soil properties (e.g. percent sand/silt/clay). The training data take the form of an input vector Shapefile containing a set of points, for which the known outcome information is contained within a field (field) of the attribute table. Each grid cell defines a stack of feature values (one value for each input raster), which serves as a point within the multi-dimensional feature space. The algorithm works by identifying a user-defined number (k) of feature-space neighbours from the training set for each grid cell. The value assigned to the grid cell in the output raster (output) is then determined as the mean of the outcome variable among this set of neighbours. The user may optionally choose to weight neighbour outcome values in the averaging calculation, with weights determined by the inverse distance function (weight). Note that the knn_classification tool can be used to apply the k-NN method to the modelling of categorical data.

The tool splits the training data into two sets, one for training the model and one for testing the prediction. These test data are used to calculate the regression accuracy statistics, as well as to estimate the variable importance. The test_proportion parameter is used to set the proportion of the input training data used in model testing. For example, if test_proportion = 0.2, 20% of the training data will be set aside for testing, and this subset will be selected randomly. As a result of this random selection of test data, the tool behaves stochastically, and will result in a different model each time it is run.

Note that the output image parameter (output) is optional. When unspecified, the tool will simply report the model accuracy statistics and variable importance, allowing the user to experiment with different parameter settings and input predictor raster combinations to optimize the model before applying it to model the outcome variable across the whole region defined by image data set.

The k-NN algorithm is based on the calculation of distances in multi-dimensional space. Feature scaling is essential to the application of k-NN modelling, especially when the ranges of the features are different, for example, if they are measured in different units. Without scaling, features with larger ranges will have greater influence in computing the distances between points. The tool offers three options for feature-scaling (scaling), including 'None', 'Normalize', and 'Standardize'. Normalization simply rescales each of the features onto a 0-1 range. This is a good option for most applications, but it is highly sensitive to outliers because it is determined by the range of the minimum and maximum values. Standardization rescales predictors using their means and standard deviations, transforming the data into z-scores. This is a better option than normalization when you know that the data contain outlier values; however, it does assume that the feature data are somewhat normally distributed, or are at least symmetrical in distribution.

Because the k-NN algorithm calculates distances in feature-space, like many other related algorithms, it suffers from the curse of dimensionality. Distances become less meaningful in high-dimensional space because the vastness of these spaces means that distances between points are less significant (more similar). As such, if the predictor list includes insignificant or highly correlated variables, it is advisable to exclude these features during the model-building phase, or to use a dimension reduction technique such as principal_component_analysis to transform the features into a smaller set of uncorrelated predictors.

Memory Usage

The peak memory usage of this tool is approximately 8 bytes per grid cell × # predictors.

See Also

knn_classification, random_forest_regression, svm_regression, principal_component_analysis

lidar_contour

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to create a contour (i.e. isolines of elevation values) vector coverage from an input LiDAR point data set (input). The tool works by first creating a triangulation of the input LiDAR points. The user must specify the contour interval (interval), or vertical spacing between contour lines. The smooth parameter can be used to increase or decrease the degree to which contours are smoothed. This parameter should be an odd integer value, or 0 for no smoothing (i.e. 0, 1, 3, 5...). The tool can interpolate contours based on the LiDAR point elevation values, intensity data, or the user data field (parameter), with 'elevation' as the default parameter. LiDAR points may be excluded from the contouring process based on a number of criteria, including their return value (returns, which may be 'all', 'last', 'first'), their class value (exclude_cls), and whether they fall outside of a user-specified elevation range (minz and maxz). The optional max_triangle_edge_length parameter can be used to exclude the output of contours within sparsely populated areas of the data set, where the triangles formed by the Delaunay triangulation are too large. This is often the case within bodies of water; long, narrow triangular facets can also occur within the concave portions of the hull (the polygon enclosing the points) when the data have an irregularly shaped extent. Setting this parameter can help alleviate the problem of contouring beyond the data footprint.

Like many of the LiDAR tools, both the input and output parameters are optional. If these parameters are not specified by the user, the tool will search for all LAS files contained within the current working directory. This feature can be useful when you need to contour a large number of LiDAR tiles. This batch processing mode enables parallel data processing, which can significantly improve efficiency for datasets with many LiDAR tiles. When run in this batch mode, the output file (output) also need not be specified; the tool will instead create an output file with the same name as each input LiDAR file, but with the .shp extension.

It is important to note that contouring is better suited to well-defined surfaces (e.g. the ground surface or building heights), rather than volume features, such as vegetation, which tend to produce extremely complex contour sets. It is advisable to use this tool with last-returns and/or ground-classified point returns. If the input data set does not contain ground classification, consider pre-processing with the lidar_ground_point_filter tool.

See Also

contours_from_points, contours_from_raster, lidar_ground_point_filter

lidar_eigenvalue_features

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to measure eigenvalue-based features that describe the characteristics of the local neighbourhood surrounding each point in an input LiDAR file (input). These features can then be used in point classification applications, or as the basis for point filtering (filter_lidar) or modifying point properties (modify_lidar).

The algorithm begins by using the x, y, z coordinates of the points within a local spherical neighbourhood to calculate a covariance matrix. The three eigenvalues λ1, λ2, λ3 are then derived from the covariance matrix decomposition such that λ1 > λ2 > λ3. The eigenvalues are then used to describe the extent to which the neighbouring points can be characterized by a linear, planar, or volumetric distribution, by calculating the following three features:

linearity = (λ1 - λ2) / λ1

planarity = (λ2 - λ3) / λ1

sphericity = λ3 / λ1

In the case of a neighbourhood containing a 1-dimensional line, the first of the three components will possess most of data variance, with very little contained within λ2 and λ3, and linearity will be nearly equal to 1.0. If the local neighbourhood contains a 2-dimensional plane, the first two components will possess most of the variance, with little variance within λ3, and planarity will be nearly equal to 1.0. Lastly, in the case of a 3-dimensional, random volumetric point distribution, each of the three components will be nearly equal in magnitude and sphericity will be nearly equal to 1.0.

Researchers in the field of LiDAR point classification also frequently define two additional eigenvalue-based features, the omnivariance (low values correspond to planar and linear regions and higher values occur for areas with a volumetric point distribution, such as vegetation), and the eigentropy, which is related to the Shannon entropy and is a measure of the unpredictability of the distribution of points in the neighbourhood:

omnivariance = (λ1 ⋅ λ2 ⋅ λ3)^(1/3)

eigentropy = -e1 ⋅ ln(e1) - e2 ⋅ ln(e2) - e3 ⋅ ln(e3)

where e1, e2, and e3 are the normalized eigenvalues.

In addition to the eigenvalues, the eigendecomposition of the symmetric covariance matrix also yields the three eigenvectors, which describe the transformation coefficients of the principal components. The first two eigenvectors represent the basis of the plane resulting from the orthogonal regression analysis, while the third eigenvector represents the plane normal. From this normal, it is possible to calculate the slope of the plane, as well as the orthogonal distance between each point and the neighbourhood fitted plane, i.e. the point residual.
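
The following NumPy sketch (an illustration of the formulas above, not the tool's internal implementation) computes these features for a single neighbourhood supplied as an n-by-3 array of x, y, z coordinates:

import numpy as np

def eigen_features(pts: np.ndarray) -> dict:
    # pts: n-by-3 array of x, y, z coordinates for one spherical neighbourhood
    cov = np.cov(pts, rowvar=False)                # 3x3 covariance matrix
    evals, evecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    l3, l2, l1 = evals                             # relabel so that l1 >= l2 >= l3
    e = np.clip(evals / evals.sum(), 1e-12, None)  # normalized eigenvalues
    normal = evecs[:, 0]                           # eigenvector of smallest eigenvalue = plane normal
    return {
        'linearity': (l1 - l2) / l1,
        'planarity': (l2 - l3) / l1,
        'sphericity': l3 / l1,
        'omnivariance': (l1 * l2 * l3) ** (1.0 / 3.0),
        'eigentropy': float(-np.sum(e * np.log(e))),
        'slope': float(np.degrees(np.arccos(abs(normal[2])))),  # slope of the fitted plane
    }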

This tool outputs a binary file (*.eigen; output) that contains the point_num (for reference) plus a total of 10 features for each point in the input file: lambda1, lambda2, lambda3, linearity, planarity, sphericity, omnivariance, eigentropy, slope, and residual. Users should bear in mind that most of these features describe the properties of the distribution of points within a spherical neighbourhood surrounding each point in the input file, rather than a characteristic of the point itself. The only one of the ten features that is a point property is the residual. Points for which the planarity value is high and the residual value is low may be assumed to be part of the plane that dominates the structure of their neighbourhoods. In addition to the binary data *.eigen file, the tool will also output a sidecar file, with a *.eigen.json extension, which describes the structure of the raw binary data file.

Local neighbourhoods are spherical in shape and the size of each neighbourhood is characterized by the num_neighbours and radius parameters. If the optional num_neighbours parameter is specified, the size of the neighbourhood will vary by point, increasing or decreasing to encompass the specified number of neighbours (note that this value does not include the point itself). If the optional radius parameter is specified in addition to a number of neighbours, the specified radius value will serve as an upper bound, and neighbouring points beyond this radial distance from the centre point will be excluded. If a radius search distance is specified but the num_neighbours parameter is not, then a constant search distance will be used for each point in the input file, resulting in a varying number of points within local neighbourhoods, depending on local point densities. If point density varies significantly in the input file, then use of the num_neighbours parameter may be advisable. Note that at least one of the two parameters must be specified. In cases where the number of neighbouring points is fewer than eight, each of the output feature values will be set to 0.0.

Note that if the user does not specify the optional input LiDAR file, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be useful for processing a large number of LiDAR files in batch mode.

The binary data file (*.eigen) can be used directly by the filter_lidar and modify_lidar tools, and will be automatically read by the tools when the *.eigen and *.eigen.json files are present in the same folder as the accompanying source LiDAR file. This allows users to apply data filters, or to modify point properties, using these point neighbourhood features. For example, the statement, rgb=(int(linearity*255), int(planarity*255), int(sphericity*255)), used with the modify_lidar tool, can render the point RGB colour values based on some of the eigenvalue features, allowing users to visually identify linear features (red), planar features (green), and volumetric regions (blue).
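
The following sketch shows how this might look in a script. Note that the calling conventions shown for lidar_eigenvalue_features and modify_lidar (in particular, whether they accept file names or in-memory Lidar objects, and the exact keyword names) are assumptions based on the parameter descriptions above and should be checked against the function signatures in your installed version:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

# Generate the tile.eigen and tile.eigen.json sidecar files (assumed call)
wbe.lidar_eigenvalue_features('tile.las', num_neighbours=24)

# Colour the points using the eigenvalue features; the sidecar files are read
# automatically when they sit beside the source LiDAR file
lidar = wbe.read_lidar('tile.las')
coloured = wbe.modify_lidar(lidar, statement='rgb=(int(linearity*255), int(planarity*255), int(sphericity*255))')
wbe.write_lidar(coloured, 'tile_coloured.las')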

Additionally, these feature data can also be readily incorporated into a Python-based point analysis or classification. As an example, the following script reads a *.eigen binary data file for direct manipulation and analysis:

import numpy as np

# Structure of each record in the *.eigen binary file, as described by
# the accompanying *.eigen.json sidecar file
dt = np.dtype([
    ('point_num', '<u8'),
    ('lambda1', '<f4'),
    ('lambda2', '<f4'),
    ('lambda3', '<f4'),
    ('linearity', '<f4'),
    ('planarity', '<f4'),
    ('sphericity', '<f4'),
    ('omnivariance', '<f4'),
    ('eigentropy', '<f4'),
    ('slope', '<f4'),
    ('resid', '<f4')
])

# Read the raw binary data and interpret it as a structured array
with open('/Users/johnlindsay/Documents/data/aaa2.eigen', 'rb') as f:
    b = f.read()

pt_features = np.frombuffer(b, dt)

# Print the first 100 point features to demonstrate
for i in range(100):
    print(f"{pt_features['point_num'][i]} {pt_features['linearity'][i]} {pt_features['planarity'][i]} {pt_features['sphericity'][i]}")

print("Done!")

References

Chehata, N., Guo, L., & Mallet, C. (2009). Airborne lidar feature selection for urban classification using random forests. In Laser Scanning IAPRS, Vol. XXXVIII, Part 3/W8 – Paris, France, September 1-2, 2009.

Gross, H., Jutzi, B., & Thoennessen, U. (2007). Segmentation of tree regions using data of a full-waveform laser. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36(part 3), W49A.

Niemeyer, J., Mallet, C., Rottensteiner, F., & Sörgel, U. (2012). Conditional Random Fields for the Classification of LIDAR Point Clouds. In XXII ISPRS Congress at Melbourne, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Vol. 3).

West, K. F., Webb, B. N., Lersch, J. R., Pothier, S., Triscari, J. M., & Iverson, A. E. (2004). Context-driven automated target detection in 3D data. In Automatic Target Recognition XIV (Vol. 5426, pp. 133-143). SPIE.

See Also

filter_lidar, modify_lidar, sort_lidar, split_lidar

lidar_point_return_analysis

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a quality control check on the return values of points in a LiDAR file. In particular, the tool will search for missing point returns, duplicate point returns, and points for which the return number (r) is larger than the encoded number of returns (n), all of which may be indicative of processing or encoding errors in the input file.

The user must specify the name of the input LiDAR file (input), and may optionally specify an output LiDAR file (output). If no output file is specified, only the text report is generated by the tool. If an output is specified, the tool will create an output LiDAR file in which missing returns are assigned class 13, duplicate returns are assigned class 14, points that are both part of a missing series and duplicate returns are assigned class 15, and all other non-problematic points are assigned class 1. Note that the points designated as missing in the output are not so much missing themselves as they are part of a sequence of points that contains missing returns. Missing points are apparent when the first point in a series does not have r = 1, when the last point does not have r = n, or when the series is non-sequential (e.g. 1/3, 3/3, but no 2/3). This condition may occur because returns are split between tiles. However, when sequences with missing points are not located near the edges of tiles, it is usually an indication either that point filtering has taken place during pre-processing or that there has been some kind of processing or encoding error.

Duplicate points are defined as points that share the same time, scanner channel, r, and n. Note that these points may have different x, y, z coordinates. Duplicate points are always an indication of a processing or encoding error. For example, it may indicate that the scanner channel information from a multi-channel LiDAR sensor has not been encoded when creating the file or has been lost.

No point should have r > n. This always indicates some kind of processing or encoding error when it occurs.
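
A minimal usage sketch is given below; the exact signature, including whether the tool accepts an in-memory Lidar object and returns the reclassified point cloud, is an assumption based on the description above:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

lidar = wbe.read_lidar('tile.las')

# Run the QC check; the text report is printed and the returned point cloud
# flags problem points using classes 13 (missing), 14 (duplicate), and 15 (both)
flagged = wbe.lidar_point_return_analysis(lidar)
wbe.write_lidar(flagged, 'tile_return_qc.las')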

The following is a sample output report generated by this tool:

***************************************
* Welcome to LidarPointReturnAnalysis *
***************************************
The Global Encoding for this file indicates that
the point returns are not synthetic.

Missing Returns:
2441636 (16.336 percent) points are missing

| r | n | Missing Pts |
|---|---|-------------|
| 1 | 2 |     1127770 |
| 2 | 2 |         817 |
| 1 | 3 |      823240 |
| 2 | 3 |         569 |
| 3 | 3 |         718 |
| 1 | 4 |      285695 |
| 2 | 4 |      142890 |
| 3 | 4 |         142 |
| 4 | 4 |         213 |
| 1 | 5 |       29772 |
| 2 | 5 |       19848 |
| 3 | 5 |        9928 |
| 4 | 5 |          18 |
| 5 | 5 |          16 |


Duplicate Returns:
4311021 (28.844 percent) points are duplicates

| r | n | Duplicates |
|---|---|------------|
| 1 | 1 |    2707083 |
| 1 | 2 |     332028 |
| 2 | 2 |     663717 |
| 1 | 3 |      70619 |
| 2 | 3 |     211834 |
| 3 | 3 |     282348 |
| 1 | 4 |       2856 |
| 2 | 4 |       8568 |
| 3 | 4 |      14280 |
| 4 | 4 |      17136 |
| 1 | 5 |         23 |
| 2 | 5 |         69 |
| 3 | 5 |        115 |
| 4 | 5 |        161 |
| 5 | 5 |        184 |


Return Greater Than Num. Returns:
0 (0.000 percent) points have r > n

Writing output LAS file...
Complete!
Elapsed Time (including I/O): 1.959s

lidar_sibson_interpolation

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool interpolates LiDAR files using Sibson's interpolation method, sometimes referred to as natural-neighbour interpolation (not to be confused with nearest-neighbour interpolation, lidar_nearest_neighbour_gridding). Sibson's method assigns weights to neighbouring points based on the area of their Voronoi cells that would be captured by inserting the interpolation grid point into the Voronoi tessellation of the input point set. The larger the captured area, the higher the weight assigned to the associated point. One of the main advantages of this natural-neighbour approach over similar techniques, such as inverse-distance weighting (IDW, lidar_idw_interpolation), is that there is no need to specify a search distance or other interpolation weighting parameters. Sibson's approach frequently provides a very suitable interpolation for LiDAR data. The method requires the calculation of a Delaunay triangulation, from which the Voronoi tessellation is derived.

The output grid can be based on any of the stored LiDAR point parameters (parameter), including elevation (in which case the output grid is a digital elevation model, DEM), intensity, class, return number, number of returns, scan angle values, and user data values. Similarly, the user may specify which point return values (returns) to include in the interpolation, including all points, last returns (including single return points), and first returns (including single return points).

The user must specify the grid resolution of the output raster (resolution), and optionally, the name of the input LiDAR file (input) and output raster (output). Note that if an input LiDAR file (input) is not specified by the user, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be useful when you need to interpolate a DEM for a large number of LiDAR files. This batch processing mode enables the tool to include a small buffer of points extending into adjacent tiles when interpolating an individual file. This can significantly reduce edge-effects when the output tiles are later mosaicked together. When run in this batch mode, the output file (output) also need not be specified; the tool will instead create an output file with the same name as each input LiDAR file, but with the .tif extension. This can provide a very efficient means for processing extremely large LiDAR data sets.

Users may exclude points from the interpolation based on point classification values, which follow the LAS classification scheme. Excluded classes are specified using the exclude_cls parameter. For example, to exclude all vegetation and building classified points from the interpolation, use --exclude_cls='3,4,5,6'. Users may also exclude points from the interpolation if they fall below the minimum (minz) or above the maximum (maxz) elevation thresholds. This can be a useful means of excluding anomalously high or low points. Note that points classified as low points (LAS class 7) or high noise (LAS class 18) are automatically excluded from the interpolation operation.
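
A usage sketch follows; the keyword names are taken from the parameter descriptions above and may differ slightly from the actual function signature in your installed version:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

lidar = wbe.read_lidar('tile.las')

# Interpolate a 1 m DEM from last-return elevations, excluding vegetation
# and building classes (the exclude_cls value format is assumed)
dem = wbe.lidar_sibson_interpolation(lidar, parameter='elevation', returns='last', resolution=1.0, exclude_cls='3,4,5,6')
wbe.write_raster(dem, 'dem_1m.tif', compress=True)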

See Also

lidar_tin_gridding, lidar_nearest_neighbour_gridding, lidar_idw_interpolation

local_hypsometric_analysis

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the hypsometric integral from the elevation distribution contained within the local neighbourhood surrounding each grid cell in an input (input) DEM. The hypsometric integral (HI) is the area under the hypsometric curve, a plot that relates elevation and area. This plot is a cumulative distribution function, with elevation expressed as a proportion of the maximum elevation and area expressed as the proportion of the total area lying above that elevation. Hypsometry, or area-altitude analysis, is commonly used by geomorphologists and geologists to characterize the erosional history of drainage basins. The HI, ranging between 0 and 1, expresses the volume of land that lies above the lowest point within an area, and thus has not been eroded. Relatively low HI values are indicative of more strongly eroded surfaces.

Some researchers (e.g. Pérez‐Peña et al., 2009) have demonstrated the usefulness of applying hypsometry in a spatially distributed fashion, rather than aggregated by basins as it is typically applied. While Pérez‐Peña et al. (2009) characterized spatial distributions of HI using coarse grids overlaid on a digital elevation model (DEM), this tool uses a filter-based approach instead. Each grid cell in the input DEM (input) has an individual HI calculation based on the elevation distribution within a moving kernel. HI values are calculated using the elevation-relief ratio method described by Pike and Wilson (1971).
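
The elevation-relief ratio estimate of HI is straightforward to express; the following NumPy lines illustrate the calculation for a single kernel of elevation values (illustration only, not the tool's implementation):

import numpy as np

# Elevation values within the kernel surrounding one grid cell
z = np.array([312.0, 305.5, 298.2, 310.1, 301.7, 295.3, 307.9, 299.4, 303.6])

# Elevation-relief ratio approximation of the hypsometric integral
# (Pike and Wilson, 1971): HI = (mean - min) / (max - min)
hi = (z.mean() - z.min()) / (z.max() - z.min())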

In actuality, the tool uses a multi-scale approach, much like many of the other tools within the Geomorphometric Analysis toolbox (e.g. max_elevation_deviation, multiscale_std_dev_normals), such that the neighbourhood size is varied according to a range defined by user-specified input parameters. The HI that is reported within each grid cell in the output raster is the minimum HI value measured for each of the tested scales, defined by lower (rL) and upper (rU) ranges.

HI_min = min{ HI(r) : r = r_L ... r_U }

In this way, it represents a heterogeneous, locally scale-optimized map of HI distributions. A nonlinear scale sampling interval is used by this tool to ensure that the scale sampling density is higher for short scale ranges, where there is often greater variability in HI values, and coarser at longer tested scales, such that:

r_i = r_L + [step × (i - r_L)]^p

where r_i is the filter radius for step i, p is the nonlinear scaling factor (step_nonlinearity), and step is the base step size (step).

This tool generates two outputs: the HI_min raster (out_mag) and the corresponding r_min scale raster (out_scale).

References

Pérez‐Peña, J. V., Azañón, J. M., Booth‐Rea, G., Azor, A., and Delgado, J. (2009). Differentiating geology and tectonics using a spatial autocorrelation technique for the hypsometric integral. Journal of Geophysical Research: Earth Surface, 114(F2).

Pike, R. J., and Wilson, S. E. (1971). Elevation-relief ratio, hypsometric integral, and geomorphic area-altitude analysis. Geological Society of America Bulletin, 82(4), 1079-1084.

See Also

hypsometric_analysis, max_elevation_deviation, multiscale_std_dev_normals

logistic_regression

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a logistic regression analysis using multiple predictor rasters (inputs), or features, and training data (training). Logistic regression is a type of linear statistical classifier that in its basic form uses a logistic function to model a binary outcome variable, although the implementation used by this tool can handle multi-class dependent variables. This tool can be used to model the spatial distribution of class data, such as land-cover type, soil class, or vegetation type.

The training data take the form of an input vector Shapefile containing a set of points or polygons, for which the known class information is contained within a field (field) of the attribute table. Each grid cell defines a stack of feature values (one value for each input raster), which serves as a point within the multi-dimensional feature space.

The tool splits the training data into two sets, one for training the model and one for testing the prediction. These test data are used to calculate the classification accuracy stats, as well as to estimate the variable importance. The test_proportion parameter is used to set the proportion of the input training data used in model testing. For example, if test_proportion = 0.2, 20% of the training data will be set aside for testing, and this subset will be selected randomly. As a result of this random selection of test data, the tool behaves stochastically, and will result in a different model each time it is run.

Note that the output image parameter (output) is optional. When unspecified, the tool will simply report the model accuracy statistics and variable importance, allowing the user to experiment with different parameter settings and input predictor raster combinations to optimize the model before applying it to model the outcome variable across the whole region defined by the image data set.

The user may opt for feature scaling, which can be important when the ranges of the features differ, for example, if they are measured in different units. Without scaling, features with larger ranges will have greater influence in computing the distances between points. The tool offers three options for feature scaling (scaling): 'None', 'Normalize', and 'Standardize'. Normalization simply rescales each of the features onto a 0-1 range. This is a good option for most applications, but it is highly sensitive to outliers because the rescaling is determined by the minimum and maximum values. Standardization rescales predictors using their means and standard deviations, transforming the data into z-scores. This is a better option than normalization when you know that the data contain outlier values; however, it does assume that the feature data are somewhat normally distributed, or are at least symmetrical in distribution.
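
The difference between the two scaling options can be illustrated with a few NumPy lines (illustration only):

import numpy as np

feature = np.array([4.2, 7.9, 5.5, 6.1, 150.0])  # note the outlier

# 'Normalize': rescale onto a 0-1 range; strongly influenced by the outlier
normalized = (feature - feature.min()) / (feature.max() - feature.min())

# 'Standardize': convert to z-scores using the mean and standard deviation
standardized = (feature - feature.mean()) / feature.std()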

Because the logistic regression calculates distances in feature-space, like many other related algorithms, it suffers from the curse of dimensionality. Distances become less meaningful in high-dimensional space because the vastness of these spaces means that distances between points are less significant (more similar). As such, if the predictor list includes insignificant or highly correlated variables, it is advisable to exclude these features during the model-building phase, or to use a dimension reduction technique such as principal_component_analysis to transform the features into a smaller set of uncorrelated predictors.

Memory Usage

The peak memory usage of this tool is approximately 8 bytes per grid cell × # predictors.

See Also

svm_classification, random_forest_classification, knn_classification, principal_component_analysis

low_points_on_headwater_divides

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool locates low points, or passes, on the drainage divides between subbasins that are situated on headwater divides. A subbasin is the catchment draining to a link in a stream network. A headwater catchment is the portion of a subbasin that drains to the channel head. Only first-order streams contain channel heads, and headwater catchments are sometimes referred to as zero-order basins. The lowest points along zero-order catchment divides are likely to coincide with mountain passes in alpine environments.

The user must input a depressionless DEM (i.e. a DEM that has been pre-processed to remove all topographic depressions) and a raster stream network. The tool will work best if the raster stream network, generally derived by thresholding a flow-accumulation raster, is processed to remove shorter headwater streams. You can use the remove_short_streams tool to remove shorter streams from the input raster; removing streams shorter than 2 or 3 grid cells in length is recommended. The algorithm proceeds by first deriving the D8 flow pointer from the input DEM. It then identifies all channel head cells in the input streams raster and the zero-order basins that drain to them. The stream network is then processed to assign a unique identifier to each segment, which is used to extract subbasins. Lastly, zero-order basin edge cells are identified and the location of the lowest grid cell for each pair of neighbouring basins is written to the output vector file.
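
A workflow sketch is shown below. The supporting function calls (d8_pointer, remove_short_streams) and their keyword names are assumptions based on the description above; the min_length value and its units in particular should be checked against your installed version:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

# A depressionless DEM and a raster stream network are the required inputs
dem = wbe.read_raster('dem_filled.tif')
streams = wbe.read_raster('streams.tif')

# Remove very short headwater streams before locating passes (assumed call)
pruned = wbe.remove_short_streams(wbe.d8_pointer(dem), streams, min_length=3.0)

passes = wbe.low_points_on_headwater_divides(dem, pruned)
wbe.write_vector(passes, 'passes.shp')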

See Also

remove_short_streams

min_dist_classification

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a supervised minimum-distance classification using training site polygons (polys) and multi-spectral images (inputs). This classification method uses the mean vectors for each class and calculates the Euclidean distance from each unknown pixel to the class mean vector. Unclassed pixels are then assigned to the nearest class mean. A threshold distance (threshold), expressed in number of z-scores, may optionally be used and pixels whose multi-spectral distance is greater than this threshold will not be assigned a class in the output image (output). When a threshold distance is unspecified, all pixels will be assigned to a class.

Like all supervised classification methods, this technique relies heavily on proper selection of training data. Training sites are exemplar areas of known and representative land cover type. The algorithm determines the spectral signature of the pixels within each training area, and uses this information to define the mean vector of each class. It is preferable that training sites are based on either field-collected data or fine-resolution reference imagery. In selecting training sites, care should be taken to ensure that they cover the full range of variability within each class. Otherwise the classification accuracy will be impacted. If possible, multiple training sites should be selected for each class. It is also advisable to avoid areas near the edges of land-cover patches, where mixed pixels may impact the purity of training site reflectance values.

After selecting training sites, the reflectance values of each land-cover type can be assessed using the evaluate_training_sites tool. In particular, the distribution of reflectance values should ideally be non-overlapping in at least one band of the multi-spectral data set.

See Also

evaluate_training_sites, parallelepiped_classification

Function Signature

def min_dist_classification(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, dist_threshold: float = float('inf')) -> Raster: ...
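
For example, using the signature above (file and field names are placeholders):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

# Multi-spectral bands and training-site polygons
bands = wbe.read_rasters('band2.tif', 'band3.tif', 'band4.tif', 'band5.tif')
training = wbe.read_vector('training_sites.shp')

# Classify, rejecting pixels more than 3 z-scores from every class mean
classified = wbe.min_dist_classification(bands, training, 'CLASS', dist_threshold=3.0)
wbe.write_raster(classified, 'min_dist_classes.tif', compress=True)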

modify_lidar

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

The modify_lidar tool can be used to alter the properties of points within a LiDAR point cloud. The user provides a statement (statement) containing one or more expressions, separated by semicolons (;). The expressions are evaluated for each point within the input LiDAR file (input). Expressions assign altered values to the properties of points in the output file (output), based on any mathematically defined expression that may include the various properties of individual points (e.g. coordinates, intensity, return attributes, etc.) or some file-level properties (e.g. min/max coordinates). As a basic example, the following statement:

x = x + 1000.0

could be used to translate the point cloud 1000 x-units (note, the increment operator could be used as a simpler equivalent, x += 1000.0).

Note that if the user does not specify the optional input LiDAR file, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be useful for processing a large number of LiDAR files in batch mode. When this batch mode is applied, the output file names will be the same as the input file names but with a '_modified' suffix added to the end.

Expressions may contain any of the following point-level or file-level variables:

| Variable Name | Description | Type |
|---|---|---|
| Point-level properties | | |
| x | The point x coordinate | float |
| y | The point y coordinate | float |
| z | The point z coordinate | float |
| xy | An x-y coordinate tuple, (x, y) | (float, float) |
| xyz | An x-y-z coordinate tuple, (x, y, z) | (float, float, float) |
| intensity | The point intensity value | int |
| ret | The point return number | int |
| nret | The point number of returns | int |
| is_only | True if the point is an only return (i.e. ret == nret == 1), otherwise false | Boolean |
| is_multiple | True if the point is a multiple return (i.e. nret > 1), otherwise false | Boolean |
| is_early | True if the point is an early return (i.e. ret == 1), otherwise false | Boolean |
| is_intermediate | True if the point is an intermediate return (i.e. ret > 1 && ret < nret), otherwise false | Boolean |
| is_late | True if the point is a late return (i.e. ret == nret), otherwise false | Boolean |
| is_first | True if the point is a first return (i.e. ret == 1 && nret > 1), otherwise false | Boolean |
| is_last | True if the point is a last return (i.e. ret == nret && nret > 1), otherwise false | Boolean |
| class | The class value in numeric form, e.g. 0 = Never classified, 1 = Unclassified, 2 = Ground, etc. | int |
| is_noise | True if the point is classified noise (i.e. class == 7 or class == 18), otherwise false | Boolean |
| is_synthetic | True if the point is synthetic, otherwise false | Boolean |
| is_keypoint | True if the point is a keypoint, otherwise false | Boolean |
| is_withheld | True if the point is withheld, otherwise false | Boolean |
| is_overlap | True if the point is an overlap point, otherwise false | Boolean |
| scan_angle | The point scan angle | int |
| scan_direction | True if the scanner is moving from the left towards the right, otherwise false | Boolean |
| is_flightline_edge | True if the point is situated along the flightline edge, otherwise false | Boolean |
| user_data | The point user data | int |
| point_source_id | The point source ID | int |
| scanner_channel | The point scanner channel | int |
| time | The point GPS time, if it exists, otherwise 0 | float |
| rgb | A red-green-blue tuple (r, g, b) if it exists, otherwise (0,0,0) | (int, int, int) |
| nir | The point near infrared value, if it exists, otherwise 0 | int |
| pt_num | The point number within the input file | int |
| File-level properties (invariant) | | |
| n_pts | The number of points within the file | int |
| min_x | The file minimum x value | float |
| mid_x | The file mid-point x value | float |
| max_x | The file maximum x value | float |
| min_y | The file minimum y value | float |
| mid_y | The file mid-point y value | float |
| max_y | The file maximum y value | float |
| min_z | The file minimum z value | float |
| mid_z | The file mid-point z value | float |
| max_z | The file maximum z value | float |
| x_scale_factor | The file x scale factor | float |
| y_scale_factor | The file y scale factor | float |
| z_scale_factor | The file z scale factor | float |
| x_offset | The file x offset | float |
| y_offset | The file y offset | float |
| z_offset | The file z offset | float |

Most of the point-level properties above are modifiable; however, some are not. The complete list of modifiable point attributes includes x, y, z, xy, xyz, intensity, ret, nret, class, user_data, point_source_id, scanner_channel, scan_angle, time, rgb, nir, is_synthetic, is_keypoint, is_withheld, and is_overlap. The immutable properties include is_only, is_multiple, is_early, is_intermediate, is_late, is_first, is_last, is_noise, and pt_num. Of the file-level properties, the modifiable ones include x_scale_factor, y_scale_factor, z_scale_factor, x_offset, y_offset, and z_offset.

In addition to the point properties defined above, if the user applies the lidar_eigenvalue_features tool to the input LiDAR file, the modify_lidar tool will automatically read in the additional *.eigen file, which includes the eigenvalue-based point neighbourhood measures, such as lambda1, lambda2, lambda3, linearity, planarity, sphericity, omnivariance, eigentropy, slope, and residual. See the lidar_eigenvalue_features documentation for details on each of these metrics describing the structure and distribution of points within the neighbourhood surrounding each point in the LiDAR file.

Expressions may use any of the standard mathematical operators, +, -, *, /, % (modulo), ^ (exponentiation), comparison operators, <, >, <=, >=, == (equality), != (inequality), and logical operators, && (Boolean AND), || (Boolean OR). Expressions must evaluate to an assignment operation, where the variable that is assigned to must be a modifiable point-level property (see table above). That is, expressions should take the form pt_variable = .... Other assignment operators are also possible (at least for numeric non-tuple properties), such as the increment (+=) operator (e.g. x += 1000.0) and the decrement (-=) operator (e.g. y -= 1000.0). Expressions may use a number of built-in mathematical functions, including:

| Function Name | Description | Example |
|---|---|---|
| if | Performs an if(CONDITION, TRUE, FALSE) operation, returning either the value of TRUE or FALSE depending on CONDITION | ret = if(ret==0, 1, ret) |
| abs | Returns the absolute value of the argument | value = abs(x - mid_x) |
| min | Returns the minimum of the arguments | value = min(x, y, z) |
| max | Returns the maximum of the arguments | value = max(x, y, z) |
| floor | Returns the largest integer less than or equal to a number | x = floor(x) |
| round | Returns the nearest integer to a number. Rounds half-way cases away from 0.0 | x = round(x) |
| ceil | Returns the smallest integer greater than or equal to a number | x = ceil(x) |
| clamp | Forces a value to fall within a specified range, defined by a minimum and maximum | z = clamp(min_z+10.0, z, max_z-20.0) |
| int | Returns the integer equivalent of a number | intensity = int(z) |
| float | Returns the float equivalent of a number | z = float(intensity) |
| to_radians | Converts a number in degrees to radians | val = to_radians(scan_angle) |
| to_degrees | Converts a number in radians to degrees | scan_angle = int(to_degrees(val)) |
| dist | Returns the distance between two points defined by two n-length tuples | d = dist(xy, (mid_x, mid_y)) or d = dist(xyz, (mid_x, mid_y, mid_z)) |
| rotate_pt | Rotates an x-y point by a certain angle, in degrees | xy = rotate_pt(xy, 45.0) or orig_pt = (1000.0, 1000.0); xy = rotate_pt(xy, 45.0, orig_pt) |
| math::ln | Returns the natural logarithm of the number | z = math::ln(z) |
| math::log | Returns the logarithm of the number with respect to an arbitrary base | z = math::log(z, 10) |
| math::log2 | Returns the base 2 logarithm of the number | z = math::log2(z) |
| math::log10 | Returns the base 10 logarithm of the number | z = math::log10(z) |
| math::exp | Returns e^(number), (the exponential function) | z = math::exp(z) |
| math::pow | Raises a number to the power of the other number | z = math::pow(z, 2.0) |
| math::sqrt | Returns the square root of a number. Returns NaN for a negative number | z = math::sqrt(z) |
| math::cos | Computes the cosine of a number (in radians) | z = math::cos(to_radians(z)) |
| math::sin | Computes the sine of a number (in radians) | z = math::sin(to_radians(z)) |
| math::tan | Computes the tangent of a number (in radians) | z = math::tan(to_radians(z)) |
| math::acos | Computes the arccosine of a number. The return value is in radians in the range [0, pi] or NaN if the number is outside the range [-1, 1] | z = math::acos(z) |
| math::asin | Computes the arcsine of a number. The return value is in radians in the range [-pi/2, pi/2] or NaN if the number is outside the range [-1, 1] | z = math::asin(z) |
| math::atan | Computes the arctangent of a number. The return value is in radians in the range [-pi/2, pi/2] | z = math::atan(z) |
| rand | Returns a random value between 0 and 1, with an optional seed value | rgb = (int(255.0 * rand()), int(255.0 * rand()), int(255.0 * rand())) |
| helmert_transformation | Performs a Helmert transformation on a point using a 7-parameter transform | xyz = helmert_transformation(xyz, −446.448, 125.157, −542.06, 20.4894, −0.1502, −0.247, −0.8421) |

The hyperbolic trigonometric functions are also available for use in expression building, as is math::atan2 and the mathematical constants pi and e.

You may use if operations within statements to implement conditional modification of point properties. For example, the following expression demonstrates how you could modify a point's RGB colour based on its classification, assigning ground points (class 2) in the output file a green colour:

rgb = if(class==2, (0,255,0), rgb)

To colour all points within 50 m of the tile mid-point red and all other points blue:

rgb = if(dist(xy, (mid_x, mid_y))<50.0, (255,0,0), (0,0,255))

if operations may also be nested to create more complex, compound conditional point modifications. For example, in the following statement, we assign first-return points a red (255,0,0) colour, last-return points a green (0,255,0) colour, and all other points (intermediate returns and only returns) white (255,255,255):

rgb = if(is_first, (255,0,0), if(is_last, (0,255,0), (255,255,255)))

Here we use an if expression to re-classify points above an elevation of 1000.0 as high noise (class 18):

class = if(z>1000.0, 18, class)

Expressions may be strung together within statements using semicolons (;), with each expression being evaluated individually. When this is the case, at least one of the expressions must assign a value to one of the modifiable point properties (see table above). The following statement demonstrates a multi-expression statement, in this case used to swap the x and y coordinates in a LiDAR file:

new_var = x; x = y; y = new_var

The rand function, used with the seeding option, can be useful when assigning colours to points based on common point properties. For example, to assign a point a random RGB colour based on its point_source_id (Note, for many point clouds, this operation will assign each flightline a unique colour; if flightline information is not stored in the file's point_source_id attribute, one could use the recover_flightline_info tool to calculate this data.):

rgb=(int(255 * rand(point_source_id)), int(255 * rand(point_source_id+1)), int(255 * rand(point_source_id+2)))

This expression-based approach to modifying point properties provides a great deal of flexibility and power to the processing of LiDAR point cloud data sets.
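
The following sketch shows how a statement might be applied from a script. The calling convention shown for modify_lidar (an in-memory Lidar object plus a statement string) is an assumption; check the function signature in your installed version:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

lidar = wbe.read_lidar('tile.las')

# Reclassify very high points as high noise, then translate the cloud 1000 x-units
modified = wbe.modify_lidar(lidar, statement='class = if(z>1000.0, 18, class); x += 1000.0')
wbe.write_lidar(modified, 'tile_modified.las')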

See Also

filter_lidar, sort_lidar, lidar_eigenvalue_features

multiscale_curvatures

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates several multiscale curvatures and curvature-based indices from an input DEM (dem). There are 18 curvature types (curv_type) available, including: accumulation curvature, curvedness, difference curvature, Gaussian curvature, generating function, horizontal excess curvature, maximal curvature, mean curvature, minimal curvature, plan curvature, profile curvature, ring curvature, rotor, shape index, tangential curvature, total curvature, unsphericity, and vertical excess curvature. Each of these curvatures can be measured in non-multiscale fashion using the corresponding tools available in either the WhiteboxTools open-core or the Whitebox extension.

Like many of the multi-scale land-surface parameter tools available in Whitebox, this tool can be run in two different modes: it can either be used to measure curvature at a single specific scale or to generate a curvature scale mosaic. To understand the difference between these two modes, we must first understand how curvatures are measured and how the non-multiscale curvature tools (e.g. profile_curvature) work. Curvatures are generally measured by fitting a mathematically defined surface to the elevation values within the local neighbourhood surrounding each grid cell in a DEM. The Whitebox curvature tools use the algorithms described by Florinsky (2016), which use the 25 elevations within a 5 x 5 local neighbourhood for projected DEMs, and the nine elevations within a 3 x 3 neighbourhood for DEMs in geographic coordinate systems. This is what determines the scale at which these land-surface parameters are calculated. Because they are calculated using small local neighbourhoods (kernels), these algorithms are heavily impacted by micro-topographic roughness and DEM noise. For example, in a fine-resolution DEM containing a great deal of micro-topographic roughness, the measured curvature value will be dominated by topographic variation at the scale of the roughness rather than the hillslopes on which that roughness is superimposed. This mismatched scaling can be a problem in many applications, e.g. in landform classification and slope failure modelling.

Using the multiscale_curvatures tool, the user can specify a desired scale, larger than that defined by the grid resolution and kernel size, over which a curvature should be characterized. The tool will then use a fast Gaussian scale-space method to remove the topographic variation in the DEM at scales less than the desired scale, and will then characterize the curvature in the usual way from this scaled DEM. To measure curvature at a single non-local scale, the user must specify a minimum search neighbourhood radius in grid cells (min_scale) greater than 0.0. Note that a minimum search neighbourhood of 0.0 will replicate the non-multiscale equivalent curvature tool, while any min_scale value > 0.0 will apply the Gaussian scale-space method to eliminate topographic variation at scales finer than the minimum search neighbourhood. The base step size (step), number of steps (num_steps), and step nonlinearity (step_nonlinearity) parameters should all be left at their default values of 1 in this case. The output curvature raster will be written to the output magnitude file (out_mag). The following animation shows several multiscale curvature rasters (tangential curvature) measured from a DEM across a range of spatial scales.

Alternatively, one can use this tool to create a curvature scale mosaic. In this case, the user specifies a range of spatial scales (i.e., a scale space) over which to measure curvature. The curvature scale-space is densely sampled and each grid cell is assigned the maximum absolute curvature value (for the specified curvature type) across the scale space. In this scale-mosaic mode, the user must also specify the output scale file name (out_scale), which is an output raster that, for each grid cell, specifies the scale at which the maximum absolute curvature was identified. The following is an example of a scale mosaic of unsphericity for an area in Pole Canyon, Utah (min_scale=1.0, step=1, num_steps=50, step_nonlinearity=1.0).

Scale mosaics are useful when modelling spatial distributions of land-surface parameters, like curvatures, in complex and heterogeneous landscapes that contain an abundance of topographic variation (micro-topography, landforms, etc.) at widely varying spatial scales, often associated with different geomorphic processes. Notice how in the image above, relatively strong curvature values are being characterized for both the landforms associated with the smaller-scale mass-movement processes as well as the broader-scale fluvial erosion (i.e. valley incision and hillslopes). It would be difficult, or impossible, to achieve this effect using a single, uniform scale. Each location in a land-surface parameter scale mosaic represents the parameter measured at a characteristic scale, given the unique topography of the site and surroundings.

The properties of the sampled scale space are determined using the min_scale, step, num_steps (greater than 1), and step_nonlinearity parameters. Experience with multiscale curvature scale spaces has shown that they are more highly variable at shorter spatial scales and change more gradually at broader scales. Therefore, a nonlinear scale sampling interval is used by this tool to ensure that the scale sampling density is higher for short scale ranges and coarser at longer tested scales, such that:

r_i = r_L + [step × (i - r_L)]^p

where r_i is the filter radius for step i, p is the nonlinear scaling factor (step_nonlinearity), and step is the base step size (step).

In scale-mosaic mode, the user must also decide whether or not to standardize the curvature values (standardize). When this parameter is used, the algorithm will convert each curvature raster associated with each sampled region of scale-space to z-scores (i.e. differenced from the raster-wide mean and divided by the raster-wide standard deviation). It is usually the case that curvature values measured at broader spatial scales become, on the whole, less strongly valued. Because the scale mosaic algorithm used in this tool assigns each grid cell the maximum absolute curvature observed within the sampled scale-space, this implies that curvature values associated with more local scale ranges are more likely to be selected for the final scale-mosaic raster. By standardizing each scaled curvature raster, there is greater opportunity for the final scale-mosaic to represent broader-scale topographic variation. Whether or not this is appropriate will depend on the application. However, it is important to stress that the sampled scale-space need not span the full range of possible scales, from the finest scale determined by the grid resolution up to the broadest scale possible, determined by the spatial extent of the input DEM. Often, a better approach is to use this tool to create multiple scale mosaics spanning this range, thereby capturing variation within broadly defined scale ranges. For example, one could create local-scale, meso-scale, and broad-scale curvature scale mosaics, each of which would capture topographic variation and landforms that are present in the landscape and reflective of processes operating at vastly different spatial scales. When this approach is used, it may not be necessary to standardize each scaled curvature raster, since the gradual decline in curvature values as scales increase is less pronounced within each of these broad scale ranges than across the entirety of possible scale-space. Again, however, this will depend on the application and on the characteristics of the landscape under study.

Raw curvedness values are often challenging to visualize given their range and magnitude, and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

See Also

gaussian_scale_space, accumulation_curvature, curvedness, difference_curvature, gaussian_curvature, generating_function, horizontal_excess_curvature, maximal_curvature, mean_curvature, minimal_curvature, plan_curvature, profile_curvature, ring_curvature, rotor, shape_index, tangential_curvature, total_curvature, unsphericity, vertical_excess_curvature

Function Signature

def multiscale_curvatures(self, dem: Raster, curv_type: str = 'profile', min_scale: int = 4, step_size: int = 1, num_steps: int = 10, step_nonlinearity: float = 1.0, log_transform: bool = True, standardize: bool = False) -> Tuple[Raster, Raster]: ...
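
For example, the signature above can be used to create an unsphericity scale mosaic similar to the Pole Canyon example described earlier (the exact curv_type string value is assumed to match the curvature name):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

dem = wbe.read_raster('dem.tif')

# Scale-mosaic mode: densely sample scale space and keep, for each cell, the
# maximum absolute curvature and the scale at which it occurred
curv_mag, curv_scale = wbe.multiscale_curvatures(dem, curv_type='unsphericity', min_scale=1, step_size=1, num_steps=50, step_nonlinearity=1.0, log_transform=True, standardize=True)

wbe.write_raster(curv_mag, 'unsphericity_mosaic.tif', compress=True)
wbe.write_raster(curv_scale, 'unsphericity_scale.tif', compress=True)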

nibble

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

The nibble function assigns areas within an input class map raster that are coincident with a mask the value of their nearest neighbour. Nibble is typically used to replace erroneous sections in a class map. Cells in the mask raster that are either NoData or zero values will be replaced in the input image with their nearest non-masked value. All input raster values in non-mask areas will be unmodified.

There are two input parameters that are related to how NoData cells in the input raster are handled during the nibble operation. The use_nodata Boolean determines whether or not input NoData cells, not contained within masked areas, are treated as ordinary values during the nibble. It is False by default, meaning that NoData cells in the input raster do not extend into nibbled areas. When the nibble_nodata parameter is True, any NoData cells in the input raster that are within the masked area are also NoData in the output raster; when nibble_nodata is False these cells will be nibbled.

See Also

sieve

Function Signature

def nibble(self, input_raster: Raster, mask: Raster, use_nodata: bool = False, nibble_nodata: bool = True) -> Raster: ...
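
For example, using the signature above (file names are placeholders):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

class_map = wbe.read_raster('landcover_classes.tif')
mask = wbe.read_raster('error_mask.tif')  # NoData or zero cells mark the areas to replace

# Replace masked cells in the class map with their nearest non-masked value
nibbled = wbe.nibble(class_map, mask, use_nodata=False, nibble_nodata=True)
wbe.write_raster(nibbled, 'landcover_nibbled.tif', compress=True)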

openness

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the Yokoyama et al. (2002) topographic openness index from an input DEM (input). Openness has two viewer perspectives, which correspond with positive and negative openness outputs (pos_output and neg_output). Positive values, expressing openness above the surface, are high for convex forms, whereas negative values describe this attribute below the surface and are high for concave forms. Openness is an angular value that is an average of the horizon angle in the eight cardinal directions to a maximum search distance (dist), measured in grid cells. Openness rasters are best visualized using a greyscale palette.

References

Yokoyama, R., Shirasawa, M., & Pike, R. J. (2002). Visualizing topography by openness: a new application of image processing to digital elevation models. Photogrammetric engineering and remote sensing, 68(3), 257-266.

See Also

viewshed, horizon_angle, time_in_daylight, hillshade

parallelepiped_classification

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a supervised parallelepiped classification using training site polygons (polys) and multi-spectral images (inputs). This classification method uses the minimum and maximum reflectance values for each class within the training data to characterize a set of parallelepipeds, i.e. multi-dimensional geometric shapes. The algorithm then assigns each unknown pixel in the image data set to the first class for which the pixel's spectral vector is contained within the corresponding class parallelepiped. Pixels with spectral vectors that are not contained within any class parallelepiped will not be assigned a class in the output image.

Like all supervised classification methods, this technique relies heavily on proper selection of training data. Training sites are exemplar areas of known and representative land cover type. The algorithm determines the spectral signature of the pixels within each training area, and uses this information to define the mean vector of each class. It is preferable that training sites are based on either field-collected data or fine-resolution reference imagery. In selecting training sites, care should be taken to ensure that they cover the full range of variability within each class. Otherwise the classification accuracy will be impacted. If possible, multiple training sites should be selected for each class. It is also advisable to avoid areas near the edges of land-cover patches, where mixed pixels may impact the purity of training site reflectance values.

After selecting training sites, the reflectance values of each land-cover type can be assessed using the evaluate_training_sites tool. In particular, the distribution of reflectance values should ideally be non-overlapping in at least one band of the multi-spectral data set.

See Also

evaluate_training_sites, min_dist_classification

Function Signature

def parallelepiped_classification(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str) -> Raster: ...
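
For example, using the signature above (file and field names are placeholders):

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

bands = wbe.read_rasters('band2.tif', 'band3.tif', 'band4.tif', 'band5.tif')
training = wbe.read_vector('training_sites.shp')

# Pixels falling outside every class parallelepiped are left unclassified
classified = wbe.parallelepiped_classification(bands, training, 'CLASS')
wbe.write_raster(classified, 'parallelepiped_classes.tif', compress=True)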

phi_coefficient

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a binary classification accuracy assessment, using the Phi coefficient. The Phi coefficient is a measure of association for two binary variables. Introduced by Karl Pearson, this measure is similar to the Pearson correlation coefficient in its interpretation and is related to the chi-squared statistic for a 2×2 contingency table. The user must specify the names of two input images (input1 and input2), containing categorical data.

piecewise_contrast_stretch

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to perform a piecewise contrast stretch on an input image (input). The input image can be either a single-band image or a colour composite, in which case the contrast stretch is performed on the intensity values of the hue-saturation-intensity (HSI) transform of the colour image. The user must also specify the name of the output image (output) and the break-points that define the piecewise function used to transfer brightness values from the input to the output image. The break-point values are specified as a string parameter (function), with each break-point taking the form of (input value, output proportion); (input value, output proportion); (input value, output proportion), etc. Piecewise functions can have as many break-points as desired, and each break-point should be separated by a semicolon (;). The input values are specified as brightness values in the same units as the input image (unless the input is a colour composite, in which case the intensity values range from 0 to 1). The output values must be specified as proportions (from 0 to 1) of the output value range, which is defined by the number of output greytones (greytones). The greytones parameter is ignored if the input image is a colour composite. Note that there is no need to specify the initial break-point of the piecewise function, as (input min value, 0.0) will be inserted automatically. Similarly, an upper bound of (input max value, 1.0) will also be inserted automatically.

Generally you want to set break-points by examining the image histogram. Typically it is desirable to map large, sparsely populated ranges of input brightness values onto relatively narrow ranges of output brightness values (i.e. a shallow-sloped segment of the piecewise function), and to map well-populated areas of the input histogram onto larger ranges of output brightness values (i.e. a steeper-sloped segment). This has the effect of reducing the number of tones used to display the under-populated tails of the distribution and spreading out the well-populated regions of the histogram, thereby improving the overall contrast and the visual interpretability of the output image. The flexibility of the piecewise contrast stretch can often provide a very suitable means of significantly improving image quality.
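
As an illustration, the sketch below compresses the sparsely populated tails of an 8-bit image's histogram and stretches the well-populated middle. The break-point values are hypothetical and the keyword names follow the parameter descriptions above; check them against the function signature in your installed version:

from whitebox_workflows import WbEnvironment

wbe = WbEnvironment()  # supply a WbW-Pro license ID here if required
wbe.working_directory = '/path/to/data'

image = wbe.read_raster('band4.tif')

# Break-points are (input value, output proportion) pairs separated by semicolons.
# Values below 30 and above 220 are compressed; values from 30 to 220 are stretched.
stretched = wbe.piecewise_contrast_stretch(image, function='(30, 0.05); (220, 0.95)', greytones=256)
wbe.write_raster(stretched, 'band4_stretched.tif', compress=True)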

See Also

raster_histogram, gaussian_contrast_stretch, min_max_contrast_stretch, standard_deviation_contrast_stretch

prune_vector_streams

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to prune the smallest branches of a vector stream network based on a threshold in link magnitude. The function automatically calculates the Shreve magnitude of each link in the input streams vector. This operation requires an input digital elevation model (DEM). The function is also capable of calculating the link magnitude from stream networks that have some minor topological errors (e.g., line overshoots or undershoots). This requires the input of a snap_distance parameter (default = 0.0).

See Also

vector_stream_network_analysis, repair_stream_vector_topology

random_forest_classification_fit

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a supervised random forest (RF) classification using multiple predictor rasters (inputs), or features, and training data (training). It can be used to model the spatial distribution of class data, such as land-cover type, soil class, or vegetation type. The training data take the form of an input vector Shapefile containing a set of points or polygons, for which the known class information is contained within a field (class_field_name) of the attribute table. Each grid cell defines a stack of feature values (one value for each input raster), which serves as a point within the multi-dimensional feature space. Random forest is an ensemble learning method that works by creating a large number (n_trees) of decision trees and using a majority vote to determine estimated class values. Individual trees are created using a random sub-set of predictors. This ensemble approach overcomes the tendency of individual decision trees to overfit the training data. As such, the RF method is a widely and successfully applied machine-learning method in many domains.

Note that this function is part of a set of two tools, including random_forest_classification_fit and random_forest_classification_predict. The random_forest_classification_fit tool should be used first to create the RF model and the random_forest_classification_predict tool can then be used to apply that model for prediction. The output of the fit tool is a byte array that is a binary representation of the RF model. This model can then be used as the input to the predict tool, along with a list of input raster predictors, which must be in the same order as those used in the fit tool. The output of the predict tool is a classified raster. The reason that the RF workflow is split in this way is that it is often the case that you need to experiment with various input predictor sets and parameter values to create an adequate model. There is no need to generate an output classified raster during this experimentation stage, and because prediction can often be the slowest part of the RF modelling process, it is generally only performed after the final model has been identified. The binary representation of the RF-based model can be serialized (i.e., saved to a file) and then later read back into memory to serve as the input for the prediction step of the workflow (see code example below).

Also note that this tool is for RF-based classification. A similar set of fit and predict tools is available for performing RF-based regression, including random_forest_regression_fit and random_forest_regression_predict. These tools are more appropriately applied to the modelling of continuous data, rather than categorical data.

The user must specify the splitting criteria (split_criterion) used in training the decision trees. Options for this parameter include 'Gini', 'Entropy', and 'ClassificationError'. The model can also be adjusted based on each of the number of trees (n_trees), the minimum number of samples required to be at a leaf node (min_samples_leaf), and the minimum number of samples required to split an internal node (min_samples_split) parameters.

The tool splits the training data into two sets, one for training the classifier and one for testing the model. These test data are used to calculate the overall accuracy and Cohen's kappa index of agreement, as well as to estimate the variable importance. The test_proportion parameter is used to set the proportion of the input training data used in model testing. For example, if test_proportion = 0.2, 20% of the training data will be set aside for testing, and this subset will be selected randomly. As a result of this random selection of test data, and the random selection of features used in decision tree creation, the tool is inherently stochastic, and will result in a different model each time it is run.

Like all supervised classification methods, this technique relies heavily on proper selection of training data. Training sites are exemplar areas/points of known and representative class value (e.g. land cover type). The algorithm determines the feature signatures of the pixels within each training area. In selecting training sites, care should be taken to ensure that they cover the full range of variability within each class. Otherwise the classification accuracy will be impacted. If possible, multiple training sites should be selected for each class. It is also advisable to avoid areas near the edges of class objects (e.g. land-cover patches), where mixed pixels may impact the purity of training site values.

After selecting training sites, the feature value distributions of each class type can be assessed using the evaluate_training_sites tool. In particular, the distribution of class values should ideally be non-overlapping in at least one feature dimension.

RF, like decision trees, does not require feature scaling. That is, unlike the k-NN algorithm and other methods that are based on the calculation of distances in multi-dimensional space, there is no need to rescale the predictors onto a common scale prior to RF analysis. Because individual trees do not use the full set of predictors, RF is also more robust against the curse of dimensionality than many other machine learning methods. Nonetheless, there is still debate about whether or not it is advisable to use a large number of predictors with RF analysis and it may be better to exclude predictors that are highly correlated with others, or that do not contribute significantly to the model during the model-building phase. A dimension reduction technique such as principal_component_analysis can be used to transform the features into a smaller set of uncorrelated predictors.

Example Code

import os
from whitebox_workflows import WbEnvironment

license_id = 'floating-license-id'
wbe = WbEnvironment(license_id)

try:
    wbe.verbose = True
    wbe.working_directory = "/path/to/data"
   
    # Read the input raster files into memory
    images = wbe.read_rasters(
        'LC09_L1TP_018030_20220614_20220615_02_T1_B2.TIF',
        'LC09_L1TP_018030_20220614_20220615_02_T1_B3.TIF',
        'LC09_L1TP_018030_20220614_20220615_02_T1_B4.TIF',
        'LC09_L1TP_018030_20220614_20220615_02_T1_B5.TIF'
    )

    # Read the input training polygons into memory
    training_data = wbe.read_vector('training_data.shp')

    # Train the model
    model = wbe.random_forest_classification_fit(
        images, 
        training_data, 
        class_field_name = 'CLASS', 
        split_criterion = "Gini", 
        n_trees = 50,  
        min_samples_leaf = 1, 
        min_samples_split = 2, 
        test_proportion = 0.2
    )

    ### Example of how to serialize the model, i.e., save the model, which is just binary data
    print('Saving the model to file...')
    file_path = os.path.join(wbe.working_directory, "rf_model.bin")
    with open(file_path, "wb") as file:
        file.write(bytearray(model))

    ### Example of how to deserialize the model, i.e. read the model
    model = []
    with open(file_path, mode='rb') as file:
        model = list(file.read())

    # Use the model to predict
    rf_class_image = wbe.random_forest_classification_predict(images, model)

    wbe.write_raster(rf_class_image, 'rf_classification.tif', compress=True)
   
    print('All done!')
except Exception as e:
    print("The error raised is: ", e)
finally:
    wbe.check_in_license(license_id)

See Also

random_forest_classification_predict, random_forest_regression_fit, random_forest_regression_predict, knn_classification, svm_classification, parallelepiped_classification, evaluate_training_sites

Function Signature

def random_forest_classification_fit(self, input_rasters: List[Raster], training_data: Vector, class_field_name: str, split_criterion: str = "gini", n_trees: int = 500, min_samples_leaf: int = 1, min_samples_split: int = 2, test_proportion: float = 0.2) -> List[int]: ...

random_forest_classification_predict

Note this tool is part of a WhiteboxTools extension product. Please visit Whitebox Geospatial Inc. for information about purchasing a license activation key (https://www.whiteboxgeo.com/extension-pricing/).

This tool applies a pre-built random forest (RF) classification model trained using multiple predictor rasters (input_rasters), or features, and training data to predict a spatial distribution. This function is part of a set of two tools, including random_forest_classification_fit and random_forest_classification_predict. The random_forest_classification_fit tool should be used first to create the RF model and random_forest_classification_predict can then be used to apply that model for prediction. The output of the fit tool is a byte array that is a binary representation of the RF model. This model can then be used as the input to the predict tool, along with a list of input raster predictors, which must be in the same order as those used in the fit tool (see below). The output of the predict tool is a classified raster. The RF workflow is split in this way because you often need to experiment with various input predictor sets and parameter values to create an adequate model. There is no need to generate an output classified raster during this experimentation stage, and because prediction can often be the slowest part of the RF modelling process, it is generally only performed after the final model has been identified. The binary representation of the RF model can be serialized (i.e., saved to a file) and then later read back into memory to serve as the input for the prediction step of the workflow (see code example below).

Note: it is very important that the order of feature rasters is the same for both fitting the model and using the model for prediction. It is possible to use a model fitted to one data set to make predictions for another data set; however, the set of feature rasters specified to the prediction tool must be input in the same sequence used for building the model. For example, one may train an RF classifier on one set of multi-spectral satellite imagery and then apply that model to classify a different imagery scene, but the image band sequence must be the same for the fit and predict tools, otherwise inaccurate predictions will result.

Example Code

import os
from whitebox_workflows import WbEnvironment

license_id = 'floating-license-id'
wbe = WbEnvironment(license_id)

try:
    wbe.verbose = True
    wbe.working_directory = "/path/to/data"
   
    # Read the input raster files into memory
    images = wbe.read_rasters(
        'LC09_L1TP_018030_20220614_20220615_02_T1_B2.TIF',
        'LC09_L1TP_018030_20220614_20220615_02_T1_B3.TIF',
        'LC09_L1TP_018030_20220614_20220615_02_T1_B4.TIF',
        'LC09_L1TP_018030_20220614_20220615_02_T1_B5.TIF'
    )

    # Read the input training polygons into memory
    training_data = wbe.read_vector('training_data.shp')

    # Train the model
    model = wbe.random_forest_classification_fit(
        images, 
        training_data, 
        class_field_name = 'CLASS', 
        split_criterion = "Gini", 
        n_trees = 50,  
        min_samples_leaf = 1, 
        min_samples_split = 2, 
        test_proportion = 0.2
    )

    ### Example of how to serialize the model, i.e., save the model, which is just binary data
    print('Saving the model to file...')
    file_path = os.path.join(wbe.working_directory, "rf_model.bin")
    with open(file_path, "wb") as file:
        file.write(bytearray(model))

    ### Example of how to deserialize the model, i.e. read the model
    model = []
    with open(file_path, mode='rb') as file:
        model = list(file.read())

    # Use the model to predict
    rf_class_image = wbe.random_forest_classification_predict(images, model)

    wbe.write_raster(rf_class_image, 'rf_classification.tif', compress=True)
   
    print('All done!')
except Exception as e:
    print("The error raised is: ", e)
finally:
    wbe.check_in_license(license_id)

See Also

random_forest_classification_fit, random_forest_regression_fit, random_forest_regression_predict, knn_classification, svm_classification, parallelepiped_classification, evaluate_training_sites

Function Signature

def random_forest_classification_predict(self, input_rasters: List[Raster], model_bytes: List[int]) -> Raster: ...

random_forest_regression_fit

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This function performs a supervised random forest (RF) regression analysis using multiple predictor rasters (input_rasters), or features, and training data (training_data). The training data take the form of an input vector Shapefile containing a set of points, for which the known outcome information is contained within a field (field_name) of the attribute table. Each grid cell defines a stack of feature values (one value for each input raster), which serves as a point within the multi-dimensional feature space.

Note that this function is part of a set of two tools, including random_forest_regression_fit and random_forest_regression_predict. The random_forest_regression_fit tool should be used first to create the RF model and random_forest_regression_predict can then be used to apply that model for prediction. The output of the fit tool is a byte array that is a binary representation of the RF model. This model can then be used as the input to the predict tool, along with a list of input raster predictors, which must be in the same order as those used in the fit tool. The output of the predict tool is a continuous raster. The RF workflow is split in this way because you often need to experiment with various input predictor sets and parameter values to create an adequate model. There is no need to generate an output raster during this experimentation stage. Because prediction can often be the slowest part of the RF modelling process, it is generally only performed after the final model has been identified. The binary representation of the RF model can be serialized (i.e., saved to a file) and then later read back into memory to serve as the input for the prediction step of the workflow (see code example below).

Also note that this tool is for RF-based regression analysis. There is a similar pair of fit and predict tools available for performing RF-based classification, including random_forest_classification_fit and random_forest_classification_predict. Those tools are more appropriately applied to the modelling of categorical data, rather than continuous data.

Note: it is very important that the order of feature rasters is the same for both fitting the model and using the model for prediction. It is possible to use a model fitted to one data set to make predictions for another data set; however, the set of feature rasters specified to the prediction tool must be input in the same sequence used for building the model. For example, one may train an RF regressor on one set of land-surface parameters and then apply that model to predict the spatial distribution of a soil property on a land-surface parameter stack derived for a different landscape, but the raster sequence must be the same for the fit and predict tools, otherwise inaccurate predictions will result.

Random forest is an ensemble learning method that works by creating a large number (n_trees) of decision trees and averaging the predictions of the individual trees to determine estimated outcome values. Individual trees are created using a random sub-set of predictors. This ensemble approach overcomes the tendency of individual decision trees to overfit the training data. As such, the RF method is a widely and successfully applied machine-learning method in many domains.

Users must specify the number of trees (n_trees), the minimum number of samples required to be at a leaf node (min_samples_leaf), and the minimum number of samples required to split an internal node (min_samples_split), which together determine the characteristics of the resulting model.

The function splits the training data into two sets, one for training the model and one for testing the prediction. These test data are used to calculate the regression accuracy statistics, as well as to estimate the variable importance. The test_proportion parameter is used to set the proportion of the input training data used in model testing. For example, if test_proportion = 0.2, 20% of the training data will be set aside for testing, and this subset will be selected randomly. As a result of this random selection of test data, as well as the randomness involved in establishing the individual decision trees, the tool is inherently stochastic, and will result in a different model each time it is run.

RF, like decision trees, does not require feature scaling. That is, unlike the k-NN algorithm and other methods that are based on the calculation of distances in multi-dimensional space, there is no need to rescale the predictors onto a common scale prior to RF analysis. Because individual trees do not use the full set of predictors, RF is also more robust against the curse of dimensionality than many other machine learning methods. Nonetheless, there is still debate about whether or not it is advisable to use a large number of predictors with RF analysis and it may be better to exclude predictors that are highly correlated with others, or that do not contribute significantly to the model during the model-building phase. A dimension reduction technique such as principal_component_analysis can be used to transform the features into a smaller set of uncorrelated predictors.

For a video tutorial on how to use the RandomForestRegression tool, see this YouTube video.

Code Example

import os
from whitebox_workflows import WbEnvironment

license_id = 'floating-license-id'
wbe = WbEnvironment(license_id)

try:
    wbe.verbose = True
    wbe.working_directory = "/path/to/data"

    # Read the input raster files into memory
    images = wbe.read_rasters(
        'DEV.tif',
        'profile_curv.tif',
        'tan_curv.tif',
        'slope.tif'
    )

    # Read the input training points into memory
    training_data = wbe.read_vector('Ottawa_soils_data.shp')

    # Train the model
    model = wbe.random_forest_regression_fit(
        images, 
        training_data, 
        field_name = 'Sand',
        n_trees = 50,  
        min_samples_leaf = 1, 
        min_samples_split = 2, 
        test_proportion = 0.2
    )

    ### Example of how to serialize the model, i.e., save the model, which is just binary data
    print('Saving the model to file...')
    file_path = os.path.join(wbe.working_directory, "rf_model.bin")
    with open(file_path, "wb") as file:
        file.write(bytearray(model))

    ### Example of how to deserialize the model, i.e. read the model
    model = []
    with open(file_path, mode='rb') as file:
        model = list(file.read())

    # Use the model to predict
    rf_image = wbe.random_forest_regression_predict(images, model)

    wbe.write_raster(rf_image, 'rf_regression.tif', compress=True)

    print('All done!')
except Exception as e:
    print("The error raised is: ", e)
finally:
    wbe.check_in_license(license_id)

See Also

random_forest_regression_predict, random_forest_classification_fit, random_forest_classification_predict, knn_classification, svm_classification, parallelepiped_classification, evaluate_training_sites

Function Signature

def random_forest_regression_fit(self, input_rasters: List[Raster], training_data: Vector, field_name: str, n_trees: int = 500, min_samples_leaf: int = 1, min_samples_split: int = 2, test_proportion: float = 0.2) -> List[int]: ...

random_forest_regression_predict

Note this tool is part of a WhiteboxTools extension product. Please visit Whitebox Geospatial Inc. for information about purchasing a license activation key (https://www.whiteboxgeo.com/extension-pricing/).

This tool applies a pre-built random forest (RF) regression model trained using multiple predictor rasters, or features (input_rasters), and training data to predict a spatial distribution. This function is part of a set of two tools, including random_forest_regression_fit and random_forest_regression_predict. The random_forest_regression_fit function should be used first to create the RF model and random_forest_regression_predict can then be used to apply that model for prediction. The output of the fit tool is a byte array that is a binary representation of the RF model. This model can then be used as the input to the predict tool, along with a list of input raster predictors, which must be in the same order as those used in the fit tool (see below). The output of the predict tool is a continuous raster. The RF workflow is split in this way because you often need to experiment with various input predictor sets and parameter values to create an adequate model. There is no need to generate an output raster during this experimentation stage, and because prediction can often be the slowest part of the RF modelling process, it is generally only performed after the final model has been identified. The binary representation of the RF model can be serialized (i.e., saved to a file) and then later read back into memory to serve as the input for the prediction step of the workflow (see code example below).

Note: it is very important that the order of feature rasters is the same for both fitting the model and using the model for prediction. It is possible to use a model fitted to one data set to make predictions for another data set; however, the set of feature rasters specified to the prediction tool must be input in the same sequence used for building the model. For example, one may train an RF regressor on one set of land-surface parameters and then apply that model to predict the spatial distribution of a soil property for a different landscape, but the raster sequence must be the same for the fit and predict tools, otherwise inaccurate predictions will result.

Code Example

import os
from whitebox_workflows import WbEnvironment

license_id = 'floating-license-id'
wbe = WbEnvironment(license_id)

try:
    wbe.verbose = True
    wbe.working_directory = "/path/to/data"

    # Read the input raster files into memory
    images = wbe.read_rasters(
        'DEV.tif',
        'profile_curv.tif',
        'tan_curv.tif',
        'slope.tif'
    )

    # Read the input training points into memory
    training_data = wbe.read_vector('Ottawa_soils_data.shp')

    # Train the model
    model = wbe.random_forest_regression_fit(
        images, 
        training_data, 
        field_name = 'Sand',
        n_trees = 50,  
        min_samples_leaf = 1, 
        min_samples_split = 2, 
        test_proportion = 0.2
    )

    ### Example of how to serialize the model, i.e., save the model, which is just binary data
    print('Saving the model to file...')
    file_path = os.path.join(wbe.working_directory, "rf_model.bin")
    with open(file_path, "wb") as file:
        file.write(bytearray(model))

    ### Example of how to deserialize the model, i.e. read the model
    model = []
    with open(file_path, mode='rb') as file:
        model = list(file.read())

    # Use the model to predict
    rf_image = wbe.random_forest_regression_predict(images, model)

    wbe.write_raster(rf_image, 'rf_regression.tif', compress=True)

    print('All done!')
except Exception as e:
    print("The error raised is: ", e)
finally:
    wbe.check_in_license(license_id)

See Also

random_forest_regression_fit, random_forest_classification_fit, random_forest_classification_predict, knn_classification, svm_classification, parallelepiped_classification, evaluate_training_sites

Function Signature

def random_forest_regression_predict(self, input_rasters: List[Raster], model_bytes: List[int]) -> Raster: ...

reconcile_multiple_headers

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to adjust the crop yield values for data sets collected with multiple headers or combines. When this situation occurs, the spatial pattern of in-field yield can be dominated by the impact of any miscalibration of the equipment among the individual headers. For example, notice how the areas collected by certain equipment (specified by the various random colours) in the leftmost panel (A) of the image below correspond with anomalously low or high yield values in the original yield map (middle panel, B). The goal of this tool is to calculate adjustment values to offset all of the yield data associated with each region in order to minimize the relative disparity among the various regions (rightmost panel, C).

The data collected by a single header defines a region, which is specified by the region field in the attribute table of the input vector file (input). The algorithm works by first locking the data associated with the most extensive region. All non-locked points are visited and neighbouring points within a specified radius (radius) are retrieved. The difference between the average of yield values (yield_field) within the same region as the non-locked point and the average of locked-point yield values is calculated. After visiting all non-locked points, the overall average difference value is calculated for each non-locked region that shares an edge with the locked region. This overall average difference value is then used to offset all of the yield values contained within each neighbouring region. Each adjusted region is then locked and the whole process is iterated until every region has been adjusted and locked. The adjusted yield values are then saved in the output file's (output) attribute table as a new field named ADJ_YIELD. The tool will also output a report that shows the offsets applied to each region to calculate the adjusted yield values.

The user may optionally specify minimum and maximum allowable yield values (min_yield and max_yield). Any points with yield values outside these bounds will not be included in the point neighbourhood analysis for calculating region adjustments and will also not be included in the output. The default values for the minimum and maximum yield parameters are the smallest non-zero value and positive infinity, respectively. Additionally, the user may optionally specify a mean overall yield tonnage (mean_tonnage) value. If specified, the output yield values will have one final adjustment to ensure that the overall mean yield value equals this parameter, which should also be between the minimum and maximum values, if specified. This parameter can be set by dividing the actual measured tonnage taken off the field by the field area.

This tool can be used as a pre-processing step prior to applying the yield_filter tool for fields collected with multiple headers. Note that some experimentation with the radius size may be necessary to achieve optimal results; this parameter should not be less than the spacing between passes, but may be substantially larger. Also, difficulties may be encountered when regions are composed of multiple separated areas that are joined by a path along the edge of the field. This is particularly problematic when there exists a strong spatial trend, in the form of a yield gradient, within the field. In such cases, it may be necessary to remove edge points from the data set using the remove_field_edge_points tool.
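
Code Example

The following is a short example sketch of how this function might be called from a WbW script. The exact function signature is not listed in this manual, so the keyword parameter names used below (region_field, yield_field, radius, min_yield, max_yield), the 'REGION' and 'YIELD' field names, and the assumption that the adjusted points are returned as a Vector are based on the parameter descriptions above; check help(wbe.reconcile_multiple_headers) for the definitive signature.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the input yield points; the region field identifies the header/combine
    yield_points = wbe.read_vector('yield_points.shp')

    # Reconcile the yield values collected by the multiple headers
    adjusted_points = wbe.reconcile_multiple_headers(
        yield_points,
        region_field = 'REGION',  # assumed field name
        yield_field = 'YIELD',    # assumed field name
        radius = 12.0,            # search radius; should not be less than the pass spacing
        min_yield = 0.1,          # optional lower bound on valid yield values
        max_yield = 25.0          # optional upper bound on valid yield values
    )

    # The output attribute table contains the new ADJ_YIELD field
    wbe.write_vector(adjusted_points, 'yield_points_adjusted.shp')

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)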

See Also

yield_filter, remove_field_edge_points, yield_map, recreate_pass_lines

recover_flightline_info

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

Raw airborne LiDAR data are collected along flightlines and multiple flightlines are typically merged into square tiles to simplify data handling and processing. Commonly the Point Source ID attribute is used to store information about the origin flightline of each point. However, sometimes this information is lost (e.g. during data format conversion) or is omitted from some data sets. This tool can be used to identify groups of points within a LiDAR file (input) that belong to the same flightline.

The tool works by sorting points based on their timestamp and then identifying points for which the time difference from the previous point is greater than a user-specified maximum time difference (max_time_diff), which are deemed to be the start of a different flightline. The operational assumption is that the time between consecutive points within a flightline is usually quite small (usually a fraction of a second), while the time between points in different flightlines is often relatively large (consider the aircraft turning time needed to take multiple passes of the study area). By default the maximum time difference is set to 5.0 seconds, although it may be necessary to increase this value depending on the characteristics of a particular data set.

The tool works on individual LiDAR tiles and the flightline identifiers will range from 0 to the number of flightlines detected within the tile, minus one. Therefore, the flightline identifier created by this tool will not extend beyond the boundaries of the tile and into adjacent tiles. That is, a flightline that extends across multiple adjacent LiDAR tiles may have different flightline identifiers used in each tile. The identifiers are intended to discriminate between flightlines within a single file. The flightline identifier value can be optionally assigned to the Point Source ID point attribute (pt_src_id), the User Data point attribute (user_data), and the red-green-blue point colour data (rgb) within the output file (output). At least one of these output options must be selected and it is possible to select multiple output options. Notice that if the input file contains any information within the selected output fields, the original information will be over-written, and therefore lost; of course, it will remain unaltered within the input file, which this tool does not modify. If the input file does not contain RGB colour data and the rgb output option is selected, the output file point format will be altered from the input file to accommodate the addition of RGB colour data. Flightlines are assigned random colours. The LAS User Data point attribute is stored as a single byte and, therefore, if this output option is selected and the input file contains more than 256 flightlines, the tool will assign the same flightline identifier to more than one flightline. This condition is very rare in a typical 1 km-square tile. The Point Source ID attribute is stored as a 16-bit integer and can therefore store 65,536 unique flightline identifiers.

Outputting flightline information within the colour data point attribute can be useful for visualizing areas of flightline overlap within a file. This can be an important quality assurance/quality control (QA/QC) step after acquiring a new LiDAR data set.

Please note that because this tool sorts points by their timestamps, the order of points in the output file may not match that of the input file.
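
Code Example

The following is a short example sketch. The exact function signature is not listed in this manual, so the keyword parameter names below (max_time_diff, pt_src_id, user_data, rgb), the assumption that the function returns a new LiDAR object, and the use of the standard WbW read_lidar/write_lidar helpers are based on the description above; check help(wbe.recover_flightline_info) for the definitive signature.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the input LiDAR tile
    tile = wbe.read_lidar('tile.las')

    # Recover the flightline groupings; assign the identifiers to the
    # Point Source ID attribute and also colour the points by flightline
    flightlines = wbe.recover_flightline_info(
        tile,
        max_time_diff = 5.0,  # seconds between consecutive points
        pt_src_id = True,
        user_data = False,
        rgb = True
    )

    # Note: points are sorted by timestamp in the output
    wbe.write_lidar(flightlines, 'tile_flightlines.las')

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)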

See Also

flightline_overlap, find_flightline_edge_points, LidarSortByTime

recreate_pass_lines

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to approximate the combine harvester swath pass lines from yield points. It is sometimes the case that either pass-line information is not stored in the point data created during harvesting, or that this information is lost. The yield_filter and yield_map tools, however, require information about the associated swath path for each point in the dataset. This tool can therefore serve as a pre-processing operation before running either of those more advanced mapping tools. It works by examining the geometry of nearby points and associating points with line features whose change in direction does not exceed a maximum angular value (max_change_in_heading). The tool creates two output vectors, including a pass-line vector (output) and a points vector (output_points). The points output contains a PASS_NUM field within its attribute table that indicates the unique pass-line identifier associated with each point. The line vector output contains an AVGYIELD attribute field, which provides the pass-line average of the input yield values (yield_field_name).

For a video tutorial on how to use the recreate_pass_lines, yield_filter and yield_map tools, see this YouTube video. There is also a blog that describes the usage of this tool on the WhiteboxTools homepage.
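
Code Example

The following is a short example sketch. The exact function signature is not listed in this manual, so the keyword parameter names below (yield_field_name, max_change_in_heading), the 'YIELD' field name, and the assumption that the function returns the pass-line vector and the points vector as a tuple are based on the description above.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the raw yield points collected during harvest
    yield_points = wbe.read_vector('yield_points.shp')

    # Approximate the combine pass lines from the point geometry
    pass_lines, points_with_pass = wbe.recreate_pass_lines(
        yield_points,
        yield_field_name = 'YIELD',    # assumed field name
        max_change_in_heading = 25.0   # degrees
    )

    # The line output has an AVGYIELD field; the point output has PASS_NUM
    wbe.write_vector(pass_lines, 'pass_lines.shp')
    wbe.write_vector(points_with_pass, 'yield_points_with_pass.shp')

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)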

See Also

yield_filter, yield_map, reconcile_multiple_headers, remove_field_edge_points, yield_normalization

remove_field_edge_points

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to remove, or flag, most of the points along the edges from a crop yield data set. It is frequently the case that yield data collected along the edges of fields are anomalous in value compared with interior values. There are many reasons for this phenomenon, but one of the most common is that the header may be only partially full.

The user must specify the name of the input vector yield points data set (input), the output points vector (output), the average distance between passes (dist), in meters, and the maximum angular change in direction (max_change_in_heading), which is used to map pass lines (see also recreate_pass_lines).

For a video tutorial on how to use the remove_field_edge_points tool, see this YouTube video.
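
Code Example

The following is a short example sketch. The exact function signature is not listed in this manual, so the keyword parameter names below (dist, max_change_in_heading) and the assumption that the function returns the cleaned points vector are based on the description above.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the input yield points
    yield_points = wbe.read_vector('yield_points.shp')

    # Remove the anomalous points along the field edges
    interior_points = wbe.remove_field_edge_points(
        yield_points,
        dist = 12.0,                  # average distance between passes, in metres
        max_change_in_heading = 25.0  # degrees; used to map the pass lines
    )

    wbe.write_vector(interior_points, 'yield_points_no_edges.shp')

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)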

See Also

yield_filter, reconcile_multiple_headers, yield_map, recreate_pass_lines, yield_normalization

remove_raster_polygon_holes

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to remove holes in raster polygons. Holes are areas of background values (either zero or NoData), completely surrounded by foreground values (any value other than zero or NoData). Therefore, this tool can be considered the raster equivalent of the vector-based remove_polygon_holes tool. Users may optionally remove holes less than a specified threshold size (threshold_size), measured in grid cells. Hole size is determined using a clumping operation, similar to that used by the clump tool. Users may also optionally specify whether or not to include 8-cell diagonal connectedness during the clumping operation (use_diagonals).

Some GIS professionals have previously used a closing operation to lessen the extent of polygon holes in raster data. A closing is a mathematical morphology operation that involves expanding the raster polygons using a dilation filter (maximum_filter), followed by an erosion filter (minimum_filter) on the resulting image. While this common image processing technique can be helpful for reducing the prevalence of polygon holes, it can also have considerable impact on non-hole features within the image. The remove_raster_polygon_holes tool, by comparison, will only affect hole features and does not impact the boundaries of other polygons at all. The following image compares the removal of polygon holes (islands in a lake polygon) using a closing operation (middle) calculated using an 11x11 convolution filter and the output of the remove_raster_polygon_holes tool. Notice how the convolution operation impacts the edges of the polygon, particularly in convex regions, compared with the remove_raster_polygon_holes output.

Here is a video that demonstrates how to apply this tool to a classified Sentinel-2 multi-spectral satellite imagery data set.
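
Code Example

The following short example, based on the function signature listed below, removes small island holes from a Boolean water raster. The file names and threshold value are placeholders.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read a Boolean water raster (1 = water, 0/NoData = not water)
    water = wbe.read_raster('water_class.tif')

    # Remove island holes smaller than 80 grid cells, using diagonal connectedness
    filled = wbe.remove_raster_polygon_holes(water, threshold_size = 80, use_diagonals = True)

    wbe.write_raster(filled, 'water_class_filled.tif', compress=True)

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)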

See Also

closing, remove_polygon_holes, clump, generalize_classified_raster

Function Signature

def remove_raster_polygon_holes(self, input: Raster, threshold_size: int = sys.maxsize, use_diagonals: bool = False) -> Raster: ...

ridge_and_valley_vectors

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

This function can be used to extract ridge and channel vectors from an input digital elevation model (DEM). The function works by first calculating elevation percentile (EP) from an input DEM using a neighbourhood size set by the user-specified filter_size parameter. Increasing the value of filter_size can result in more continuous mapped ridge and valley bottom networks. A thresholding operation is then applied to identify cells that have an EP less than the user-specified ep_threshold (valley bottom regions) and a second thresholding operation maps regions where EP is greater than 100 - ep_threshold (ridges). Each of these ridge and valley region maps is also multiplied by a slope mask created by identifying all cells with a slope greater than the user-specified slope_threshold value, which is set to zero by default. This second thresholding can be helpful if the input DEM contains extensive flat areas, which can otherwise be confused for valleys. The filter_size and ep_threshold parameters are somewhat dependent on one another. Increasing the filter_size parameter generally requires also increasing the value of the ep_threshold. The ep_threshold can take values between 5.0 and 50.0, where larger values will generally result in more extensive and continuous mapped ridge and valley bottom networks. For many DEMs, a value on the higher end of the scale tends to work best.

After applying the thresholding operations, the function then applies specialized shape generalization, line thinning, and vectorization algorithms to produce the final ridge and valley vectors. The user must also specify the value of the min_length parameter, which determines the minimum size, in grid cells, of a mapped line feature. The function outputs a tuple of two vectors, the first being the ridge network and the second being the valley-bottom network.

Code Example

from whitebox_workflows import WbEnvironment

# Set up the WbW environment
license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the input DEM
    dem = wbe.read_raster('DEM.tif')

    # Run the operation
    ridges, valleys = wbe.ridge_and_valley_vectors(dem, filter_size=21, ep_threshold=45.0, slope_threshold=1.0, min_length=25)
    wbe.write_vector(ridges, 'ridges_lines.shp')
    wbe.write_vector(valleys, 'valley_lines.shp')

    print('Done!')
except Exception as e:
  print("Error: ", e)
finally:
    wbe.check_in_license(license_id)

See Also

extract_valleys

Function Signature

def ridge_and_valley_vectors(self, dem: Raster, filter_size: int = 11, ep_threshold: float = 30.0, slope_threshold: float = 0.0, min_length: int = 20) -> Tuple[Vector, Vector]: ...

ring_curvature

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the ring curvature, which is the product of horizontal excess and vertical excess curvatures (Shary, 1995), from a digital elevation model (DEM). Like rotor, ring curvature is used to measure flow line twisting. Ring curvature has values equal to or greater than zero and is measured in units of m^-2.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).
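
Code Example

The following is a short example sketch. The description above refers to a log-transform option and a Z conversion factor; the keyword parameter names used below (log_transform, z_factor) are assumptions, so check help(wbe.ring_curvature) for the definitive signature.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    dem = wbe.read_raster('DEM.tif')

    # Calculate ring curvature; the log-transform helps with the small value range
    ring_curv = wbe.ring_curvature(dem, log_transform = True, z_factor = 1.0)

    wbe.write_raster(ring_curv, 'ring_curvature.tif', compress=True)

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)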

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary PA (1995) Land surface in gravity points classification by a complete system of curvatures. Mathematical Geology 27: 373–390.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

rotor, minimal_curvature, maximal_curvature, mean_curvature, gaussian_curvature, profile_curvature, tangential_curvature

river_centerlines

Note this tool is part of a WhiteboxTools extension product. Please visit Whitebox Geospatial Inc. for information about purchasing a license activation key (https://www.whiteboxgeo.com/extension-pricing/).

This tool can map river centerlines, or medial-lines, from input river rasters (input). The input river (or water) raster is often derived from an image classification performed on multispectral satellite imagery. The river raster must be Boolean (1 for water, 0/NoData for not-water) and can be derived either by reclassifying the classification output, or using a 1-class classification procedure. For example, using the parallelepiped_classification tool, it is possible to train the classifier using water training polygons, and all other land classes will simply be left unclassified. It may be necessary to perform some pre-processing on the water Boolean raster before input to the centerlines tool. For example, you may need to remove smaller water polygons associated with small lakes and ponds, and you may want to remove small islands from the remaining water features. This tool will often create a bifurcating vector path around islands within rivers, even if those islands are a single cell in size. The remove_raster_polygon_holes tool can be used to remove islands in the water raster that are smaller than a user-specified size. The user must also specify the minimum line length (min_length), which determines the level of detail in the final rivers map. For example, in the first image below, a value of 30 grid cells was used for the min_length parameter, while a value of 5 was used in the second image, which possesses far more (arguably too much) detail.

Lastly, the user must specify the search radius parameter (search_radius). At times, the tool will be able to connect distant water polygons that are part of the same feature, and this parameter determines the size of the search radius used to identify separated line end-nodes that are candidates for connection. It is advisable that this value not be set too high, or else unexpected connections may be made between unrelated water features; typically, a value between 1 and 5 produces satisfactory results. Experimentation may be needed to find an appropriate value for any one data set, however. The image below provides an example of this characteristic of the tool, where the resulting vector stream centerline passes through disconnected raster water polygons in the underlying input image in four locations.

Here is a video that demonstrates how to apply this tool to map river center-lines taken from a water raster created by classifying a Sentinel-2 multi-spectral satellite imagery data set.
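
Code Example

The following short example, based on the function signature listed below, extracts centerlines from a Boolean water raster, with an optional hole-removal pre-processing step. The file names and parameter values are placeholders.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license activation key
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read a Boolean water raster (1 = water, 0/NoData = not water)
    water = wbe.read_raster('water_class.tif')

    # Optional pre-processing: remove small islands before extracting centerlines
    water = wbe.remove_raster_polygon_holes(water, threshold_size = 80)

    # Extract the river centerlines
    centerlines = wbe.river_centerlines(water, min_length = 30, search_radius = 3)

    wbe.write_vector(centerlines, 'river_centerlines.shp')

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)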

See Also

parallelepiped_classification, remove_raster_polygon_holes

Function Signature

def river_centerlines(self, raster: Raster, min_length: int = 3, search_radius: int = 9) -> Vector: ...

rotor

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the spatial pattern of rotor, which describes the degree to which a flow line twists (Shary, 1991), from a digital elevation model (DEM). Rotor has an unbounded range, with positive values indicating that a flow line turns clockwise and negative values indicating flow lines that turn counter clockwise (Florinsky, 2017). Rotor is measured in units of m^-1.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).
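
Code Example

The following is a short example sketch. As with ring_curvature, the keyword parameter names used below (log_transform, z_factor) are assumptions based on the description above; check help(wbe.rotor) for the definitive signature.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    dem = wbe.read_raster('DEM.tif')

    # Calculate rotor; the log-transform helps with the small value range
    rot = wbe.rotor(dem, log_transform = True, z_factor = 1.0)

    wbe.write_raster(rot, 'rotor.tif', compress=True)

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)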

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary PA (1991) The second derivative topographic method. In: Stepanov IN (ed) The Geometry of the Earth Surface Structures. Pushchino, USSR: Pushchino Research Centre Press, 30–60 (in Russian).

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

ring_curvature, profile_curvature, tangential_curvature, plan_curvature, mean_curvature, gaussian_curvature, minimal_curvature, maximal_curvature

shadow_animation

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool creates an interactive animated GIF of shadows based on an input digital surface model (DSM). The shadow model is based on the modelled positions of the sun throughout a user-specified date (date), sampling at a regular interval (interval), in minutes. Similar to the time_in_daylight tool, this tool uses calculated horizon angle (horizon_angle) values and a solar position model to determine which grid cells are located in shadow areas due to distant obstacles. The calculation of horizon angle requires the user to input a maximum search distance parameter (max_dist).

The location parameter (location) should take the form Lat/Long/UTC-offset, e.g. 43.5448/-80.2482/-4/. Note, the location need only be approximate; the position of the central location of the input DSM raster should suffice.

The output (output) of this tool is an HTML file, containing the interactive GIF animation. Users are able to zoom and pan around the displayed animation. The DSM may be rendered in one of several available palettes (palette) suitable for visualizing topography. The user must also specify the image height (height) in the output file, the time delay (delay, in milliseconds) used in the GIF animation, and an optional label (label), which will appear in the upper left-hand corner. Note that the output is simply HTML, CSS, JavaScript code, and a GIF file, which can be readily embedded in other documents.

Users should be aware that the output GIF can be very large in size, depending on the size of the input DSM file. To reduce the file size of the output, it may be desirable to coarsen the input DSM resolution using image resampling (resample).

The following is an example of what the output of this tool looks like. Click the image for an interactive example.

For more information about this tool, see this blog on the WhiteboxTools homepage.
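
Code Example

The following is a short example sketch. The exact function signature is not listed in this manual; the keyword parameter names below follow the parameter descriptions above, and the placement of the output HTML file name as the second argument is an assumption. Check help(wbe.shadow_animation) for the definitive signature.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the digital surface model
    dsm = wbe.read_raster('lidar_DSM.tif')

    # Create the animated shadow GIF, embedded in an HTML file
    wbe.shadow_animation(
        dsm,
        'shadow_animation.html',  # assumed: output HTML file given as the second argument
        date = '21/06/2023',
        interval = 15,            # minutes between modelled solar positions
        location = '43.5448/-80.2482/-4',
        max_dist = 500.0,         # maximum search distance for horizon angle
        palette = 'atlas',
        height = 600,
        delay = 250,              # milliseconds between animation frames
        label = 'Shadows, June 21'
    )

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)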

See Also

shadow_image, time_in_daylight, horizon_angle, lidar_digital_surface_model

shadow_image

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool generates a raster of shadow areas based on an input digital surface model (DSM). This shadow model is based on the calculated positions of the sun throughout a user-specified date (date), sampling at a regular interval (interval), in minutes. Similar to the time_in_daylight tool, this tool uses calculated horizon angle (horizon_angle) values and a solar position model to determine which grid cells are located in shadow areas due to distant obstacles. The calculation of horizon angle requires the user to input a maximum search distance parameter (max_dist).

The user must specify the date (date), time (time), and location (location) of the input DSM. The date should have the format DD/MM/YYYY, e.g. 27/11/1976. The time should have the format HH:MM, e.g. 03:15AM or 14:30. The location parameter should take the form Lat/Long/UTC-offset, e.g. 43.5448/-80.2482/-4/. Note, the location need only be approximate; the position of the central location of the input DSM raster should suffice.

The output (output) of this tool is a raster. If a palette (palette) is chosen, then the output raster will be a colour composite image, containing a hypsometrically tinted (i.e. elevation coloured) shadow model. The DSM may be rendered in one of several available palettes (palette) suitable for visualizing topography. If the palette is set to 'none', the output image will not be a colour composite, but rather will be a 16-bit integer raster, and should be displayed using a grey-scale palette.

The following is an example of what the output of this tool looks like.

For more information about this tool, see this blog on the WhiteboxTools homepage.
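
Code Example

The following is a short example sketch. The exact function signature is not listed in this manual, so the keyword parameter names below and the assumption that the function returns the shadow raster are based on the description above.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the digital surface model
    dsm = wbe.read_raster('lidar_DSM.tif')

    # Model the shadow pattern for mid-afternoon on the summer solstice
    shadows = wbe.shadow_image(
        dsm,
        date = '21/06/2023',
        time = '14:30',
        location = '43.5448/-80.2482/-4',
        max_dist = 500.0,
        palette = 'atlas'
    )

    wbe.write_raster(shadows, 'shadow_image.tif', compress=True)

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)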

See Also

shadow_animation, time_in_daylight, horizon_angle, lidar_digital_surface_model, hypsometrically_tinted_hillshade

shape_index

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the shape index (Koenderink and van Doorn, 1992) from a digital elevation model (DEM). This variable ranges from -1 to 1, with positive values indicative of convex landforms and negative values corresponding to concave landforms (Florinsky, 2017). Absolute values from 0.5 to 1.0 are associated with elliptic surfaces (hills and closed depressions), while absolute values from 0.0 to 0.5 are typical of hyperbolic surface forms (saddles). Shape index is a dimensionless variable and has utility in landform classification applications.

Koenderink and van Doorn (1992) make the following observations about the shape index:

  • Two shapes for which the shape index differs merely by sign represent complementary pairs that will fit together as ‘stamp’ and ‘mould’ when suitably scaled;

  • The shape for which the shape index vanishes - and consequently has indeterminate sign - represents the objects which are congruent to their own moulds;

  • Convexities and concavities find their places on opposite sides of the shape scale. These basic shapes are separated by those shapes which are neither convex nor concave, that is, the saddle-like objects. The transitional shapes that divide the convexities/concavities from the saddle-shapes are the cylindrical ridge and the cylindrical rut.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).
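
Code Example

The following is a short example sketch. The z_factor keyword name used below is an assumption based on the Z conversion factor described above; check help(wbe.shape_index) for the definitive signature.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    dem = wbe.read_raster('DEM.tif')

    # Calculate the shape index, which ranges from -1 (concave) to 1 (convex)
    si = wbe.shape_index(dem, z_factor = 1.0)

    wbe.write_raster(si, 'shape_index.tif', compress=True)

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)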

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Koenderink, J. J., and Van Doorn, A. J. (1992). Surface shape and curvature scales. Image and vision computing, 10(8), 557-564.

See Also

curvedness, minimal_curvature, maximal_curvature, tangential_curvature, profile_curvature, mean_curvature, gaussian_curvature

sieve

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

The sieve function removes individual objects in a class map that are less than a threshold area, in grid cells. Pixels contained within the removed small polygons will be replaced with the nearest remaining class value. This operation is common when generalizing class maps, e.g. those derived from an image classification. Thus, this tool provides a similar function to the generalize_classified_raster and generalize_with_similarity functions.
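
Code Example

The following short example, based on the function signature listed below, removes class-map objects smaller than eight grid cells. The file names and threshold value are placeholders.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the classified raster, e.g. the output of an image classification
    class_map = wbe.read_raster('landcover_class.tif')

    # Remove objects smaller than 8 grid cells; small polygons are replaced
    # with the nearest remaining class value
    generalized = wbe.sieve(class_map, threshold = 8, zero_background = False)

    wbe.write_raster(generalized, 'landcover_sieved.tif', compress=True)

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)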

See Also

generalize_classified_raster, generalize_with_similarity

Function Signature

def sieve(self, input_raster: Raster, threshold: int = 1, zero_background: bool = False) -> Raster: ...

sky_view_factor

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

This tool calculates the sky-view factor (SVF) from an input digital elevation model (DEM) or digital surface model (DSM). The SVF is the proportion of the celestial hemisphere above a point on the earth's surface that is not obstructed by the surrounding land surface. It is often used to model the diffuse light that is received at the surface and has also been applied as a relief-shading technique (Böhner et al., 2009; Zakšek et al., 2011).

The user must specify an input DEM (dem), the azimuth fraction (az_fraction), the maximum search distance (max_dist), and the height offset of the observer (observer_hgt_offset). The input DEM should usually be a digital surface model (DSM) that contains significant off-terrain objects. Such a model, for example, could be created using the first-return points of a LiDAR data set, or using the lidar_digital_surface_model tool. The azimuth fraction should evenly divide 360 degrees and must be between 1 and 45 degrees.

The tool operates by calculating horizon angle (see horizon_angle) rasters from the DSM based on the user-specified azimuth fraction (az_fraction). For example, if an azimuth fraction of 15-degrees is specified, horizon angle rasters would be calculated for the solar azimuths 0, 15, 30, 45... A horizon angle raster evaluates the vertical angle between each grid cell in a DSM and a distant obstacle (e.g. a mountain ridge, building, tree, etc.) that obscures the view in a specified direction. In calculating horizon angle, the user must specify the maximum search distance (max_dist), in map units, beyond which the query for higher, more distant objects will cease. This parameter strongly impacts the performance of the function, with larger values resulting in significantly longer processing-times.

This tool uses the method described by Zakšek et al. (2011) to calculate SVF, which differs slightly from the method described by Böhner et al. (2009), as implemented in the Saga software. Most notably the Whitebox implementation does not involve local surface slope gradient and is closer in definition to the Saga 'Visible Sky' index.

There are other significant differences between the Whitebox and Saga implementations of SVF. For a given maximum search distance, the Whitebox SVF will be substantially faster to calculate. Furthermore, the Whitebox implementation has the ability to specify a height offset of the observer from the ground surface, using the observer_hgt_offset parameter. For example, the following image shows the spatial pattern derived from a LiDAR DSM using observer_hgt_offset = 0.0:

Notice that there are several places, particularly on the flatter rooftops, where the local noise in the LiDAR DEM, associated with the individual scan lines, has resulted in a somewhat noisy pattern in the output. By adding a small height offset of the scale of this noise variation (0.15 m), we see that most of this noisy pattern is removed in the output below:

This feature makes the function more robust against DEM noise. As another example of the usefulness of this additional parameter, in the image below, the observer_hgt_offset parameter has been used to measure the pattern of the index at a typical human height (1.7 m):

Notice how overall visibility increases at this height.
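
Code Example

The following short example, based on the function signature listed below, calculates SVF from a LiDAR DSM using a small observer height offset to suppress scan-line noise. The file names and parameter values are placeholders.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the digital surface model
    dsm = wbe.read_raster('lidar_DSM.tif')

    # Calculate the sky-view factor; note that larger max_dist values
    # significantly increase the processing time
    svf = wbe.sky_view_factor(dsm, az_fraction = 15.0, max_dist = 500.0, observer_hgt_offset = 0.15)

    wbe.write_raster(svf, 'sky_view_factor.tif', compress=True)

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)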

References

Böhner, J. and Antonić, O., 2009. Land-surface parameters specific to topo-climatology. Developments in soil science, 33, pp.195-226.

Zakšek, K., Oštir, K. and Kokalj, Ž., 2011. Sky-view factor as a relief visualization technique. Remote sensing, 3(2), pp.398-415.

See Also

average_horizon_distance, horizon_area, openness, lidar_digital_surface_model, horizon_angle

Function Signature

def sky_view_factor(self, dem: Raster, az_fraction: float = 5.0, max_dist: float = float('inf'), observer_hgt_offset: float = 0.0) -> Raster: ...

skyline_analysis

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

This function performs a skyline analysis for one or more observation points based on the terrain of an underlying digital elevation model (DEM). The analysis produces two outputs: an HTML report and a vector containing the horizon polygons associated with each observation point. The analysis report includes a summary of key characteristics of the skyline for each point, including the average zenith angle, the average horizon distance, the horizon area, the average skyline elevation, the standard deviation of skyline elevation, and the sky-view factor. The report will also include two radial charts, including the zenith angle plot and the horizon distance plot, for each observation point.

The horizon area vector output traces the skyline and is saved as a PolygonZ shapetype, with z-values taken from the input DEM surface and measures (M-values) derived from the zenith angle values. This can be thought of as an approximate vector viewshed for the observation points, except that a viewshed may well contain internal occlusions that the horizon polygon does not. Note that it is best to use an input digital surface model, rather than a bare-earth DEM, for this function.

The user must specify the input DEM and vector points file, the name of the output HTML report (which will be automatically displayed if verbose=True), the maximum distance (max_dist), the observer height (observer_hgt_offset), whether the output horizon polygon should be of the PolygonZ ShapeType (if set to False the output will be of the PolyLineZ ShapeType), and the azimuth fraction (az_fraction), which determines the angular resolution of the analysis, with a default value of 1.0.

Note that the input DEM should use a projected spatial referencing system.
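
Code Example

The following short example, based on the function signature listed below, runs the analysis for a set of observation points at a typical eye height. The file names and parameter values are placeholders.

from whitebox_workflows import WbEnvironment

license_id = 'my-license-id' # Note, this tool requires a license for WbW-Pro
wbe = WbEnvironment(license_id)
try:
    wbe.verbose = True
    wbe.working_directory = '/path/to/data'

    # Read the digital surface model and the observation points
    dsm = wbe.read_raster('lidar_DSM.tif')
    stations = wbe.read_vector('observation_points.shp')

    # Perform the skyline analysis; the HTML report is displayed when verbose=True
    horizon_polys = wbe.skyline_analysis(
        dsm,
        stations,
        'skyline_report.html',
        max_dist = 1000.0,
        observer_hgt_offset = 1.7,
        output_as_polygons = True,
        az_fraction = 1.0
    )

    wbe.write_vector(horizon_polys, 'horizon_polygons.shp')

    print('Done!')
except Exception as e:
    print("Error: ", e)
finally:
    wbe.check_in_license(license_id)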

See Also

sky_view_factor, average_horizon_distance, openness, lidar_digital_surface_model, horizon_angle

Function Signature

def skyline_analysis(self, dem: Raster, points: Vector, output_html_file: str, max_dist: float = float('inf'), observer_hgt_offset: float = 0.0, output_as_polygons: bool = True, az_fraction: float = 1.0) -> Vector: ...

slope_vs_aspect_plot

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool creates a slope vs. aspect plot for an input digital elevation model, or DEM (input). Similar to a slope vs. elevation analysis (SlopeVsElevationPlot), the slope-aspect relation can reveal the basic topographic character of a site. The output of this analysis is an HTML document (output) that contains the slope-aspect chart, which is a radial line plot. The plot displays the median and interquartile range of slope values for the range of aspect values from 0 - 360 degrees. In reality, the aspect range is binned and the user must specify the bin size (bin_size). As slopes become quite shallow, the numerical instability in aspect becomes apparent, due to the relatively small signal-to-noise ratio in these areas of the input DEM. These shallow-gradient grid cells can have an out-sized impact on the shape of the slope-aspect relation. Therefore, users can specify to ignore slopes less than a certain threshold minimum slope (min_slope).

In interpreting the slope-aspect plots output by this tool, users should take note of asymmetries in polygonal paths taken by the percentile slope values, asymmetries in the range of slopes (i.e. the interquartile range), and anisotropy patterns (i.e. non-circularity or oval-shaped patterns). For example, asymmetries in the patterns may be indicative of landscape processes of interest, such as the differential energy balances experienced by north- and south-facing slopes at high latitudes. Increased rates of weathering on slopes with more direct sunlight at higher latitudes can result in flatter hillslopes. Asymmetry in the slope-aspect relation may also be indicative of DEM error and can be used as a quality control procedure, particularly for InSAR DEMs. Anisotropy in the slope-aspect relation may indicate a characteristic of the bedrock geology or the drainage structure of the landscape. The tool will also output the elongation ratio, a measure of anisotropy, of the mapped percentile polygons in a table.

The following are some examples of the output plots. In actuality, the outputs of the tool are interactive plots.

You may wish to smooth your DEM prior to analysis with this tool, in order to emphasize longer-scale patterns in the landscape. We recommend using a method such as the feature_preserving_smoothing tool for this purpose.

The Z conversion factor (z_factor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. If the DEM is in the geographic coordinate system (latitude and longitude), the following equation is used:

zfactor = 1.0 / (111320.0 × cos(mid_lat))

where mid_lat is the latitude of the centre of each raster row, in radians.

See Also

SlopeVsElevationPlot, feature_preserving_smoothing
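
No function signature is listed for this tool above; the following minimal sketch therefore assumes keyword names matching the parameters described (bin_size, min_slope), along with a hypothetical input file, and should be checked against the installed signature:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # hypothetical input DEM

# Optionally smooth the DEM first (default parameters) to emphasize broader-scale patterns
smoothed = wbe.feature_preserving_smoothing(dem)

# Keyword names below are assumed from the description above
wbe.slope_vs_aspect_plot(smoothed, 'slope_aspect_plot.html',
    bin_size=2.0, min_slope=0.1)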

smooth_vegetation_residual

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can smooth the roughness due to residual vegetation cover in LiDAR digital elevation models (DEMs). Sometimes when LiDAR data are collected under heavy forest cover, particularly conifer species, the DEM will contain substantial roughness, even if it is interpolated using last-return points only. This tool can be used to reduce the roughness of the ground surface under these conditions. It works by identifying grid cells that possess deviation from mean elevation (DEV, DevFromMeanElev) values that are higher than a specified threshold value (dev_threshold) for tested scales less than a specified threshold (scale_threshold). DEV is measured for the input DEM (input) using filter radii from 1 to a user-specified maximum (max_scale). The identified grid cells are then masked out and their elevations are re-interpolated using the surrounding, non-masked values.

This method can work well under some conditions, and will further benefit from multiple passes of the tool, i.e. run the tool using one set of parameters and then use the output (output) as the input for the second pass. Alternative approaches include use of the remove_off_terrain_objects tool, using low-pass filters such as the feature_preserving_smoothing tool, or, if the point-cloud source data are available, classifying the ground points using lidar_ground_point_filter and excluding non-ground points from the interpolation.

The following image shows a DEM that is badly affected by heavy forest cover, with obvious residual vegetation roughness.

The next image shows the result of two passes of the smooth_vegetation_residual tool.

See Also

remove_off_terrain_objects, feature_preserving_smoothing, lidar_ground_point_filter, DevFromMeanElev
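
A sketch of the two-pass approach suggested above is shown below; the keyword names are taken from the parameters named in the description (max_scale, dev_threshold, scale_threshold) and the file names and values are illustrative assumptions only:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('lidar_dem.tif')  # hypothetical last-return DEM

# First pass (keyword names and values are assumed/illustrative)
pass1 = wbe.smooth_vegetation_residual(dem, max_scale=15,
    dev_threshold=1.0, scale_threshold=5)

# Second pass, using the first output as the new input
pass2 = wbe.smooth_vegetation_residual(pass1, max_scale=15,
    dev_threshold=1.0, scale_threshold=5)

wbe.write_raster(pass2, 'smoothed_dem.tif')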

sort_lidar

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to sort the points in an input LiDAR file (input) based on their properties with respect to one or more sorting criteria (criteria). The sorting criteria may include: the x, y or z coordinates (x, y, z), the intensity data (intensity), the point class value (class), the point user data field (user_data), the return number (ret_num), the point source ID (point_source_id), the point scan angle data (scan_angle), the scanner channel (scanner_channel; LAS 1.4 datasets only), and the acquisition time (time). The following is an example of a complex sorting criteria statement that includes multiple criteria:

x 100.0, y 100.0, z 10.0, scan_angle

Criteria should be separated by a comma, semicolon, or pipe (|). Each criterion may have an associated bin value. In the example above, point x values are sorted into bins of 100 m, which are then sorted by y values into bins of 100 m, then by point z values into bins of 10 m, and finally by their scan_angle.

Sorting point values can have a significant impact on the compression rate when using certain compressed LiDAR data formats (e.g. LAZ, zLidar). Sorting values can also improve the visualization speed in some rendering software.

Note that if the user does not specify the optional input LiDAR file, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be useful for processing a large number of LiDAR files in batch mode. When this batch mode is applied, the output file names will be the same as the input file names but with a '_sorted' suffix added to the end.

See Also

LasToLaz, split_lidar, filter_lidar, modify_lidar
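
The sketch below assumes that the function accepts a Lidar object and a criteria string and returns a sorted Lidar object, as is typical of WbW LiDAR functions; the exact signature and keyword names should be confirmed in your installation:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
lidar = wbe.read_lidar('tile_1.las')  # hypothetical input tile

# Sort into 100 m x/y bins and 10 m z bins, then by scan angle
sorted_pts = wbe.sort_lidar(lidar, criteria='x 100.0, y 100.0, z 10.0, scan_angle')

wbe.write_lidar(sorted_pts, 'tile_1_sorted.laz')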

split_lidar

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to split an input LiDAR file (input) into a series of output files, placing points into each output based on their properties with respect to a grouping criterion (criterion). Points can be grouped based on a specified number of points in the output file (num_pts; note the last file may contain fewer points), the x, y or z coordinates (x, y, z), the intensity data (intensity), the point class value (class), the point user data field (user_data), the point source ID (point_source_id), the point scan angle data (scan_angle), and the acquisition time (time). Points are binned into groupings based on a user-specified interval value (interval). For example, if an interval of 50.0 is used with the z criterion, a series of files will be output that are elevation bands of 50 m. The user may also optionally specify the minimum number of points needed before a particular grouping file is saved (min_pts). The interval value is not used for the class and point_source_id criteria.

With this tool, a single input file can generate many output files. The names of the output files will reflect the point attribute used for the grouping and the bin. For example, running the tool on an input file named my_file.las using the intensity criterion and an interval of 1000 may produce the following files:

  • my_file_intensity0.las
  • my_file_intensity1000.las
  • my_file_intensity2000.las
  • my_file_intensity3000.las
  • my_file_intensity4000.las

The number after the attribute (intensity, in this case) reflects the lower boundary of the bin. Thus, the first file contains all of the input points with intensity values from 0 to just less than 1000.

Note that if the user does not specify the optional input LiDAR file, the tool will search for all valid LiDAR (*.las, *.laz, *.zlidar) files contained within the current working directory. This feature can be useful for processing a large number of LiDAR files in batch mode. When this batch mode is applied, the output file names will be the same as the input file names but with a suffix added to the end reflective of the split criterion and value (see above).

See Also

sort_lidar, filter_lidar, modify_lidar, lidar_elevation_slice
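
The following sketch assumes keyword names based on the parameters described above (criterion, interval, min_pts); because the tool writes the series of output files itself, no return value is assumed:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/lidar'  # hypothetical directory

lidar = wbe.read_lidar('my_file.las')

# Split into 50 m elevation bands, keeping only groups with at least 1000 points
wbe.split_lidar(lidar, criterion='z', interval=50.0, min_pts=1000)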

svm_classification

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a support vector machine (SVM) binary classification using multiple predictor rasters (inputs), or features, and training data (training). SVMs are a common class of supervised learning algorithms widely applied in many problem domains. This tool can be used to model the spatial distribution of class data, such as land-cover type, soil class, or vegetation type. The training data take the form of an input vector Shapefile containing a set of points or polygons, for which the known class information is contained within a field (field) of the attribute table. Each grid cell defines a stack of feature values (one value for each input raster), which serves as a point within the multi-dimensional feature space. Note that the svm_regression tool can be used to apply the SVM method to the modelling of continuous data.

The user must specify the values of three parameters used in the development of the model: the c-value (c), gamma (gamma), and the tolerance (tolerance). The c-value is the regularization parameter used in model optimization. The gamma parameter defines the radial basis function (Gaussian) kernel parameter. The tolerance parameter controls the stopping condition used during model optimization.

The tool splits the training data into two sets, one for training the classifier and one for testing the classification. These test data are used to calculate the overall accuracy and Matthew correlation coefficient (MCC). The test_proportion parameter is used to set the proportion of the input training data used in model testing. For example, if test_proportion = 0.2, 20% of the training data will be set aside for testing, and this subset will be selected randomly. As a result of this random selection of test data, the tool behaves stochastically, and will result in a different model each time it is run.

Note that the output image parameter (output) is optional. When unspecified, the tool will simply report the model accuracy statistics, allowing the user to experiment with different parameter settings and input predictor raster combinations to optimize the model before applying it to classify the whole image data set.

Like all supervised classification methods, this technique relies heavily on proper selection of training data. Training sites are exemplar areas/points of known and representative class value (e.g. land cover type). The algorithm determines the feature signatures of the pixels within each training area. In selecting training sites, care should be taken to ensure that they cover the full range of variability within each class. Otherwise the classification accuracy will be impacted. If possible, multiple training sites should be selected for each class. It is also advisable to avoid areas near the edges of class objects (e.g. land-cover patches), where mixed pixels may impact the purity of training site values.

After selecting training sites, the feature value distributions of each class type can be assessed using the evaluate_training_sites tool. In particular, the distribution of class values should ideally be non-overlapping in at least one feature dimension.

The SVM algorithm is based on the calculation of distances in multi-dimensional space. Feature scaling is essential to the application of SVM-based modelling, especially when the ranges of the features are different, for example, if they are measured in different units. Without scaling, features with larger ranges will have greater influence in computing the distances between points. The tool offers three options for feature-scaling (scaling), including 'None', 'Normalize', and 'Standardize'. Normalization simply rescales each of the features onto a 0-1 range. This is a good option for most applications, but it is highly sensitive to outliers because it is determined by the range of the minimum and maximum values. Standardization rescales predictors using their means and standard deviations, transforming the data into z-scores. This is a better option than normalization when you know that the data contain outlier values; however, it does assume that the feature data are somewhat normally distributed, or are at least symmetrical in distribution.

Because the SVM algorithm calculates distances in feature-space, like many other related algorithms, it suffers from the curse of dimensionality. Distances become less meaningful in high-dimensional space because the vastness of these spaces means that distances between points are less significant (more similar). As such, if the predictor list includes insignificant or highly correlated variables, it is advisable to exclude these features during the model-building phase, or to use a dimension reduction technique such as principal_component_analysis to transform the features into a smaller set of uncorrelated predictors.

Memory Usage

The peak memory usage of this tool is approximately 8 bytes per grid cell × # predictors.

See Also

random_forest_classification, knn_classification, parallelepiped_classification, evaluate_training_sites, principal_component_analysis
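
A hedged sketch of a classification run follows; the keyword names mirror the parameters named in the description (field, scaling, c, gamma, tolerance, test_proportion), and the file names and values are hypothetical:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
wbe.working_directory = '/path/to/data'  # hypothetical directory

# Predictor rasters (features) and training site polygons
predictors = [wbe.read_raster(f) for f in ('band1.tif', 'band2.tif', 'band3.tif')]
training = wbe.read_vector('training_sites.shp')

# Keyword names below are assumed from the description above
classified = wbe.svm_classification(predictors, training, field='CLASS',
    scaling='Normalize', c=50.0, gamma=0.5, tolerance=0.1,
    test_proportion=0.2)

wbe.write_raster(classified, 'landcover.tif')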

svm_regression

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a supervised support vector machine (SVM) regression analysis using multiple predictor rasters (inputs), or features, and training data (training). SVMs are a common class of supervised learning algorithms widely applied in many problem domains. This tool can be used to model the spatial distribution of continuous data, such as soil properties (e.g. percent sand/silt/clay). The training data take the form of an input vector Shapefile containing a set of points for which the known outcome data is contained within a field (field) of the attribute table. Each grid cell defines a stack of feature values (one value for each input raster), which serves as a point within the multi-dimensional feature space. Note that the svm_classification tool can be used to apply the SVM method to the modelling of categorical data.

The user must specify the c-value (c), the regularization parameter used in model optimization, the epsilon-value (eps), used in the development of the epsilon-SVM regression model, and the gamma-value (gamma), which defines the radial basis function (Gaussian) kernel parameter.

The tool splits the training data into two sets, one for training the model and one for testing the prediction. These test data are used to calculate the regression accuracy statistics, as well as to estimate the variable importance. The test_proportion parameter is used to set the proportion of the input training data used in model testing. For example, if test_proportion = 0.2, 20% of the training data will be set aside for testing, and this subset will be selected randomly. As a result of this random selection of test data, the tool behaves stochastically, and will result in a different model each time it is run.

Note that the output image parameter (output) is optional. When unspecified, the tool will simply report the model accuracy statistics and variable importance, allowing the user to experiment with different parameter settings and input predictor raster combinations to optimize the model before applying it to model the outcome variable across the whole region defined by the image data set.

The SVM algorithm is based on the calculation of distances in multi-dimensional space. Feature scaling is essential to the application of SVM modelling, especially when the ranges of the features are different, for example, if they are measured in different units. Without scaling, features with larger ranges will have greater influence in computing the distances between points. The tool offers three options for feature-scaling (scaling), including 'None', 'Normalize', and 'Standardize'. Normalization simply rescales each of the features onto a 0-1 range. This is a good option for most applications, but it is highly sensitive to outliers because it is determined by the range of the minimum and maximum values. Standardization rescales predictors using their means and standard deviations, transforming the data into z-scores. This is a better option than normalization when you know that the data contain outlier values; however, it does assume that the feature data are somewhat normally distributed, or are at least symmetrical in distribution.

Because the SVM algorithm calculates distances in feature-space, like many other related algorithms, it suffers from the curse of dimensionality. Distances become less meaningful in high-dimensional space because the vastness of these spaces means that distances between points are less significant (more similar). As such, if the predictor list includes insignificant or highly correlated variables, it is advisable to exclude these features during the model-building phase, or to use a dimension reduction technique such as principal_component_analysis to transform the features into a smaller set of uncorrelated predictors.

Memory Usage

The peak memory usage of this tool is approximately 8 bytes per grid cell × # predictors.

See Also

svm_classification, random_forest_regression, knn_regression, principal_component_analysis
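
A brief sketch, analogous to the classification example above, with keyword names assumed from the parameters described (field, c, eps, gamma, scaling, test_proportion) and hypothetical inputs:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()

predictors = [wbe.read_raster(f) for f in ('elev.tif', 'slope.tif', 'ndvi.tif')]
samples = wbe.read_vector('soil_samples.shp')  # hypothetical points with a PCT_SAND field

# Keyword names below are assumed, not confirmed
sand_pct = wbe.svm_regression(predictors, samples, field='PCT_SAND',
    c=50.0, eps=10.0, gamma=0.5, scaling='Standardize', test_proportion=0.2)

wbe.write_raster(sand_pct, 'percent_sand.tif')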

topo_render

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool is used to create a pseudo-3D rendering from an input DEM, for the purpose of effective topographic visualization. The tool simulates direct radiation, diffuse radiation, and light attenuation to achieve this effect. The user must specify the input digital elevation model (dem) and output (output) file names. One of several named palettes (palette) may also be chosen, including 'atlas', 'high_relief', 'arid', 'soft', 'earthtones', 'muted', 'light_quant', 'purple', 'viridi', 'gn_yl', 'pi_y_g', 'bl_yl_rd', 'deep', 'imhof', and 'white'. The user may optionally reverse the palette (rev_palette), although this will generally not be required since the palettes are designed to work well with topographic data as they are.

The user must also specify a number of parameters related to the lighting of the surface. These include the light source direction (az; 0-360) and altitude (alt; 0-90), both of which describe the 3D light source location in decimal degrees. The light attenuation (attenuation) describes the rate at which the light dims away from the source, effectively applying a gradient across the image. Values of this parameter range from 0-1, with appropriate values in the 0.0 (no attenuation) to 0.7 range. The ambient light parameter (ambient_light) is used to describe how much background (diffuse) light there is, which allows for details to be discernible within shadow areas. Values of this parameter also range from 0-1, although generally much lower values (~0.2) produce good results. Experimentation with each of the lighting parameter values may be needed to create a final map.

The resulting output image will have shadows cast beyond the original input DEM's grid, further creating the illusion of a 3D surface suspended above a background plane (see examples below). The user may accentuate this effect by setting the vertical distance between the topographic surface and the plane (background_hgt_offset). Larger values of this parameter will result in a greater distance, and the parameter values are in the z-units of the input DEM. If the DEM contains NoData values, these sites will appear to cut through to the background plane. In fact, the user may optionally include a clipping polygon (polygon) and only the parts of the DEM that are within this polygon will be displayed. This is useful if, for example, you wish to render an individual watershed. The user may specify the colour of the background plane (background_clr), as a string of RGB or RGBA values, e.g. '[255, 240, 200, 255]'. The default colour is white, which may appear slightly greyed if a non-zero light attenuation value is specified.

Lastly, the user must specify an elevation multiplier (z_factor) parameter, with a default of 1.0. This can be useful for applying a vertical exaggeration (values greater than 1.0) to the elevation surface for enhanced topographic relief. This may be important when applying this tool in relatively low-to-moderate relief locations, or when applying it to very large spatial extents. Please note, this tool is suitable for application to DEMs in either geographic coordinates (latitude and longitude) or projected coordinate systems.

The image that is created by this tool is a GeoTiff and can be opened in a GIS. This means that it is possible to overlay other layers on top. For example, it is possible to use the 'white' palette to create a rendered topography and then to transparently overlay a satellite image or air photo within a GIS. In the case of a fine-resolution image, however, it is important to remember that such imagery typically contains its own shadows, which may conflict with those generated by the rendered topography and is therefore not ideal for visualization.

The following examples demonstrate how the output of this tool may appear.

See Also

shadow_image, shadow_animation, time_in_daylight, horizon_angle, hypsometrically_tinted_hillshade
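
The sketch below uses keyword names taken from the parameters described above (palette, az, alt, attenuation, ambient_light, background_hgt_offset, z_factor); these names and the chosen values are assumptions to be verified against the installed signature:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # hypothetical input DEM

# Lighting and palette values are illustrative starting points only
wbe.topo_render(dem, 'rendered_topo.tif', palette='soft',
    az=315.0, alt=30.0, attenuation=0.4, ambient_light=0.2,
    background_hgt_offset=10.0, z_factor=1.0)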

topographic_position_animation

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool creates an interactive animation that demonstrates the variation in deviation from mean elevation (DEV, DevFromMeanElev) as scale increases across a range for an input digital elevation model (input). DEV is calculated as the difference between the elevation of each grid cell and the mean elevation of a local neighbourhood centred on the cell, normalized by the standard deviation, and is a measure of local topographic position. DEV is useful for highlighting locally prominent (either elevated or low-lying) locations within a landscape. Topographic position animations are extremely useful for interpreting landscape geomorphic structure across a range of scales.

The set of scales for which DEV is measured (using varying filter sizes) is determined by the three user-specified parameters, including min_scale, num_steps, and step_nonlinearity. Experience with DEV scale signatures has shown that it is highly variable at shorter scales and changes more gradually at broader scales. Therefore, a nonlinear scale sampling interval is used by this tool to ensure that the scale sampling density is higher for short scale ranges and coarser at longer tested scales, such that:

r_i = r_L + (i - r_L)^p

where r_i is the filter radius for step i, r_L is the lower range of filter sizes (min_scale), and p is the nonlinear scaling factor (step_nonlinearity).

The tool can be run in one of two modes: using regular DEV calculations, or using DEVmax (max_elevation_deviation), a multiscale version of DEV that outputs the maximum absolute value of DEV encountered across a range of tested scales. Use the dev_max flag to run the tool in DEVmax mode.

The output (output) of this tool is an HTML file, containing the interactive GIF animation. Users are able to zoom and pan around the displayed DEV animation. The DEV images may be rendered in one of several available palettes (palette) suitable for visualizing DEV. The output DEV/DEVmax animation will also be hillshaded to further enhance topographic interpretation. The user must also specify the image height (height) in the output file, the time delay (delay, in milliseconds) used in the GIF animation, and an optional label (label), which will appear in the upper left-hand corner. Note that the output is simply HTML, CSS, JavaScript code, and a GIF file, which can be readily embedded in other documents.

Users should be aware that the output GIF can be very large in size, depending on the size of the input DEM file. To reduce the file size of the output, it may be desirable to coarsen the input DEM resolution using image resampling (resample).

The following is an example of what the output of this tool looks like. Click the image for an interactive example.

For more information about this tool and example outputs, see this blog on the WhiteboxTools homepage.

See Also

DevFromMeanElev, max_elevation_deviation
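
A minimal sketch follows, assuming keyword names matching the parameters described above (min_scale, num_steps, step_nonlinearity, palette, height, delay, label, dev_max) and a hypothetical input DEM; the values are illustrative only:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # consider resampling very large DEMs first

# Keyword names and values below are assumed/illustrative
wbe.topographic_position_animation(dem, 'dev_animation.html',
    min_scale=3, num_steps=30, step_nonlinearity=1.5,
    palette='bl_yl_rd', height=600, delay=250,
    label='DEV animation', dev_max=False)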

topological_breach_burn

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool performs a specialized form of stream burning, i.e. the practice of forcing the surface flow paths modelled from a digital elevation model (DEM) to match the pathway of a mapped vector stream network. Stream burning is a common flow enforcement technique used to correct surface drainage patterns derived from DEMs. The technique involves adjusting the elevations of grid cells that are coincident with the features of a vector hydrography layer, usually simply by lowering stream cell elevations by a constant offset value. This simple approach is used by the fill_burn tool, which suffers greatly from topological errors resulting from the mismatched scales of the hydrography and DEM data sets. These topological errors, which occur during the rasterization of the stream vector, result in inappropriate stream cell adjacencies (where two stream links appear to be beside one another in the stream raster with no space between) and collisions (where two stream links occupy the same cell in the stream raster). The topological_breach_burn method uses total upstream channel length (TUCL) to prune the vector hydrography layer to a level of detail that matches the raster DEM grid resolution. Network pruning reduces the occurrence of erroneous stream piracy caused by the rasterization of multiple stream links to the same DEM grid cell. The algorithm also restricts flow, during the calculation of the D8 flow pointer raster output, within individual stream reaches, further reducing erroneous stream piracy. In situations where two vector stream features occupy the same grid cell, the new tool ensures that the larger stream, designated by higher TUCL, is given priority. TUCL-based priority minimizes the impact of the topological errors that occur during the stream rasterization process on modelled regional drainage patterns. Lindsay (2016) demonstrated that the topological_breach_burn method produces highly accurate and scale-insensitive drainage patterns and watershed boundaries compared with fill_burn.

The tool requires two input layers, including the DEM (dem) and mapped vector stream network (streams). Note that these two inputs must share the same map projection. The tool also produces four output rasters, including:

  1. A rasterized version of the pruned stream network (out_streams). Network pruning is based on a TUCL threshold that is calculated as the optimal value that satisfies the combined objectives of maximizing the length of maintained streams and minimizing the number of collisions/adjacencies in the rasterized network. This optimization process is carried out using TUCL and stream length data calculated for each tributary in the network. A tributary connects a channel head and a downstream confluence/outlet in the stream network. Tributaries are often composed of multiple stream links (lengths of streams between upstream-downstream heads/confluences) and can have tributaries of their own. At each confluence in the stream network, the tributary identifier that carries on downstream is associated with the upstream link with the larger TUCL value (a surrogate for stream/drainage area size). The output streams raster shows stream cells remaining after the pruning process along with their unique tributary identifier value. Lower tributary IDs are associated with larger streams, with the lowest valued tributary in a network associated with the main-stem, based on the TUCL criterion for modelling stream size. The main functions of this output are for the user to examine the extent of network pruning used by the tool and to evaluate the network structure described by the tributary IDs. Notice that pruning will be more extensive with a greater mismatch between the scales of the input mapped stream network and the DEM.

  2. The stream-burned DEM (out_dem). This DEM will have constantly decreasing elevation values (i.e. breached) along stream tributaries from their channel heads all the way to their eventual outlet points. The tool does not use a constant elevation decrement value. Additionally, all topographic depressions that are located on the hillslopes will be filled; you may pre-process the input DEM with a length-restricted run of the breach_depressions_least_cost tool if you do not wish to fill depressions. This output DEM is probably the least useful of the four output rasters produced by this tool. It is created and output simply because other stream-burning tools produce a burned-in DEM. As indicated above, one of the mechanisms by which this tool improves the topological representation of flow through the rasterized stream network is to ensure that preferential flow path connections are made among stream cells of the same tributary ID and, where there are collisions, to ensure that larger tributaries (lower ID values) are preferred. However, this cannot be represented merely with the elevations contained within this stream-burned DEM. If, for example, you were to run a flow pointer/accumulation operation on the produced DEM, you would not get exactly the same outputs as the D8 pointer and flow accumulation rasters produced by this tool, since the D8 tools cannot account for the within-tributary flow enforcement used by topological_breach_burn using the elevation values contained within the DEM alone.

  3. The D8 flow pointer raster (out_dir). This raster output contains the D8-style pointer values (see the d8_pointer tool for an explanation of pointer value interpretation) and can be used as an input to many of the other hydrological tools in Whitebox. It does capture the topological flow-enforcement within tributaries described above.

  4. The D8 flow accumulation raster (out_fa). This raster can be optionally output from the tool if the user specifies a value for this parameter. When specified, the tool will run the standard D8 flow accumulation operation using the flow pointer raster above as input. Note that this raster will be exactly the same as what would be produced if you input the D8 flow pointer produced by this tool to the D8FlowAccumulation tool (thus this output is optional).

The user must lastly specify the snap distance value, in meters. This parameter allows the tool to identify the linkage between stream segments when their end nodes are not perfectly aligned. One may also choose to run the repair_stream_vector_topology tool prior to this tool to resolve any misalignment in the input streams vector.

Reference

Lindsay JB. 2016. The practice of DEM stream burning revisited. Earth Surface Processes and Landforms, 41(5): 658–668. DOI: 10.1002/esp.3888

Saunders, W. 1999. Preparation of DEMs for use in environmental modeling analysis, in: ESRI User Conference. pp. 24-30.

See Also

fill_burn, breach_depressions_least_cost, prune_vector_streams, repair_stream_vector_topology, vector_stream_network_analysis
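
The sketch below assumes that the function accepts the DEM and streams objects plus a snap distance, and that it returns the four output rasters described above as a tuple; this return form and the keyword name snap are assumptions to verify against the installed signature:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')
streams = wbe.read_vector('streams.shp')  # must share the DEM's map projection

# A tuple of four output rasters is assumed; consider repair_stream_vector_topology first
streams_ras, burned_dem, d8_pointer, flow_accum = wbe.topological_breach_burn(
    dem, streams, snap=1.0)

wbe.write_raster(d8_pointer, 'd8_pointer.tif')
wbe.write_raster(flow_accum, 'flow_accum.tif')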

unsphericity

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the spatial pattern of unsphericity curvature, which describes the degree to which the shape of the topographic surface is nonspherical at a given point (Shary, 1995), from a digital elevation model (DEM). It is calculated as half the difference between the maximal_curvature and the minimal_curvature. Unsphericity has values equal to or greater than zero and is measured in units of m^-1. Larger values indicate locations that are less spherical in form.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary PA (1995) Land surface in gravity points classification by a complete system of curvatures. Mathematical Geology 27: 373–390.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

minimal_curvature, maximal_curvature, mean_curvature, gaussian_curvature, profile_curvature, tangential_curvature
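
A minimal sketch, assuming a keyword name matching the log parameter described above, a hypothetical input file, and that the function returns the output Raster, as is typical of WbW surface-derivative functions:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # hypothetical input DEM

# Log-transform the output because curvature values are typically very small
unsph = wbe.unsphericity(dem, log=True)

wbe.write_raster(unsph, 'unsphericity.tif')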

vertical_excess_curvature

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool calculates the vertical excess curvature from a digital elevation model (DEM). Vertical excess curvature is the difference of profile (vertical) and minimal curvatures at a location (Shary, 1995). This variable has non-negative values (zero or greater). Florinsky (2017) states that vertical excess curvature measures the extent to which the bending of a normal section having a common tangent line with a slope line is larger than the minimal bending at a given point of the surface. Vertical excess curvature is measured in units of m^-1.

The user must specify the name of the input DEM (dem) and the output raster (output). The Z conversion factor (zfactor) is only important when the vertical and horizontal units are not the same in the DEM. When this is the case, the algorithm will multiply each elevation in the DEM by the Z conversion factor. Curvature values are often very small and as such the user may opt to log-transform the output raster (log). Transforming the values applies the equation by Shary et al. (2002):

Θ' = sign(Θ) ln(1 + 10^n |Θ|)

where Θ is the parameter value and n is dependent on the grid cell size.

For DEMs in projected coordinate systems, the tool uses the 3rd-order bivariate Taylor polynomial method described by Florinsky (2016). Based on a polynomial fit of the elevations within the 5x5 neighbourhood surrounding each cell, this method is considered more robust against outlier elevations (noise) than other methods. For DEMs in geographic coordinate systems (i.e. angular units), the tool uses the 3x3 polynomial fitting method for equal angle grids also described by Florinsky (2016).

References

Florinsky, I. (2016). Digital terrain analysis in soil science and geology. Academic Press.

Florinsky, I. V. (2017). An illustrated introduction to general geomorphometry. Progress in Physical Geography, 41(6), 723-752.

Shary PA (1995) Land surface in gravity points classification by a complete system of curvatures. Mathematical Geology 27: 373–390.

Shary P. A., Sharaya L. S. and Mitusov A. V. (2002) Fundamental quantitative methods of land surface analysis. Geoderma 107: 1–32.

See Also

tangential_curvature, profile_curvature, minimal_curvature, maximal_curvature, mean_curvature, gaussian_curvature

yield_filter

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to filter the crop yield values associated with point data derived from commercial combine harvester yield monitors. Crop yield data often suffer from high levels of noise due to the nature of how these data are collected. Commercial crop yield monitors on combine harvesters are prone to erroneous data for several reasons. Where harvested rows overlap, lower than expected crop yields may be associated with the second overlapping swath because the head of the harvesting equipment is only partially filled. The edges of fields are particularly susceptible to being harvested without a full swath of crop, resulting in anomalous crop yields. The starts of new swaths are also prone to errors, because of the misalignment between the time when the monitor begins recording and the time when grain begins flowing. Sudden changes in harvester speed, either speeding up or slowing down, can also result in anomalous yield measurements.

The yield_filter tool can smooth yield point patterns, particularly accounting for differences among adjacent swath lines. The user must specify the name of the input points shapefile (input), the name of the yield attribute (yield_field), the pass number attribute (pass_field_name), the output file (output), the swath width (combine head length, width), the threshold value (z_score_threshold), and optionally, minimum and maximum yield values (min_yield and max_yield). If the input vector does not contain a field indicating a unique identifier associated with each swath pass for points, users may use the recreate_pass_lines tool to estimate swath line structures within the yield points. The threshold value, measured in standardized z-scores, is used by the tool to determine when a point is replaced by the mean value of nearby points in adjacent swaths. The output vector will contain the smoothed yield data in the attribute table in a field named AVGYIELD.

The following images show before and after examples of applying yield_filter:

For a video tutorial on how to use the recreate_pass_lines, yield_filter and yield_map tools, see this YouTube video. There is also a blog that describes the usage of this tool on the WhiteboxTools homepage.

See Also

recreate_pass_lines, yield_map, reconcile_multiple_headers, remove_field_edge_points, yield_normalization
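
A hedged usage sketch, with keyword names taken from the parameters listed above (yield_field, pass_field_name, width, z_score_threshold, min_yield, max_yield) and hypothetical file names and values:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
points = wbe.read_vector('harvester_points.shp')  # hypothetical yield monitor points

# Keyword names below are assumed from the description above
filtered = wbe.yield_filter(points, yield_field='YIELD', pass_field_name='PASS_NUM',
    width=9.0, z_score_threshold=2.5, min_yield=0.0, max_yield=55.0)

wbe.write_vector(filtered, 'yield_filtered.shp')  # smoothed values in AVGYIELD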

yield_map

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to create a segmented-vector polygon yield map from a set of harvester points. The user must specify the name of the input points shapefile (input), the pass number attribute (passFieldName), the output file (output), the swath width (combine head length, width), and the maximum angular change in direction (maxChangeInHeading). If the input vector does not contain a field indicating a unique identifier associated with each swath pass for points, users may use the recreate_pass_lines tool to estimate swath line structures within the yield points.

For a video tutorial on how to use the recreate_pass_lines, yield_filter and yield_map tools, see this YouTube video. There is also a blog that describes the usage of this tool on the WhiteboxTools homepage.

See Also

recreate_pass_lines, yield_filter, reconcile_multiple_headers, remove_field_edge_points, yield_normalization

yield_normalization

License Information

Use of this function requires a license for Whitebox Workflows for Python Professional (WbW-Pro). Please visit www.whiteboxgeo.com to purchase a license.

Description

This tool can be used to normalize the crop yield values (yield_field) in a coverage of vector points (input) derived from a combine harvester for a single agricultural field. Normalization is the process of modifying the numerical range of a set of values. Normalizing crop yield values is a common pre-processing procedure prior to analyzing crop data in either a statistical model or machine learning based analysis. The tool re-scales the crop yield values to a 0.0-1.0 range based on the minimum and maximum values, storing the rescaled yield data in an attribute field (named NORM_YIELD) in the output vector file (output). The user may also specify custom minimum and maximum yield values (min_yield and max_yield); any crop yield values less than this minimum or larger than the specified maximum will be assigned the boundary values, which will subsequently define the 0.0-1.0 range.

The user may also optionally choose to standardize (standardize), rather than normalize the data. See here for a detailed description of the difference between these two data re-scaling methods. With this option, the output yield values (stored in the STD_YIELD field of the output vector attribute table) will be z-scores, based on differences from the mean and scaled by the standard deviation.

Lastly, the user may optionally specify a search radius (radius), in meters. Without this optional parameter, the normalization of the data will be based on field-scale values (min/max, or mean/std. dev.). However, when a radius value larger than zero is specified, the tool will perform a regional analysis based on the points contained within a local neighbourhood. The radius value should be large enough to ensure that at least three point measurements are contained within the neighbourhood surrounding each point. Warnings will be issued for points for which this condition is not met, and their output values will be set to -99.0. When this warning occurs frequently, you should consider choosing a larger search radius. The following images demonstrate the difference between field-scale and localized normalization of a sample yield data set.

Like many other tools in the Precision Agriculture toolbox, this tool will work with input vector points files in geographic coordinates (i.e. lat/long), although it is preferable to use a projected coordinate system.

See Also

yield_map, yield_filter, recreate_pass_lines, reconcile_multiple_headers, remove_field_edge_points
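
A short sketch, with keyword names assumed from the parameters described above (yield_field, standardize, radius, min_yield, max_yield) and hypothetical inputs and values:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
points = wbe.read_vector('harvester_points.shp')  # hypothetical yield monitor points

# Localized normalization using a 25 m search radius (values are illustrative)
normalized = wbe.yield_normalization(points, yield_field='YIELD',
    standardize=False, radius=25.0, min_yield=0.0, max_yield=55.0)

wbe.write_vector(normalized, 'yield_normalized.shp')  # values in NORM_YIELD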

Raster class documentation

Each of the following functions is a method of the Raster class.

  1. __abs__
  2. __add__
  3. __eq__
  4. __floordiv__
  5. __ge__
  6. __getitem__
  7. __getstate__
  8. __gt__
  9. __iadd__
  10. __idiv__
  11. __imul__
  12. __init__
  13. __isub__
  14. __le__
  15. __lt__
  16. __mod__
  17. __mul__
  18. __ne__
  19. __neg__
  20. __pow__
  21. __setattr__
  22. __setitem__
  23. __sub__
  24. __truediv__
  25. acos
  26. acosh
  27. asin
  28. asinh
  29. atan
  30. atan2
  31. atanh
  32. calculate_clip_values
  33. calculate_mean
  34. calculate_mean_and_stdev
  35. ceil
  36. con
  37. configs
  38. cos
  39. cosh
  40. decrement
  41. decrement_row_data
  42. deep_copy
  43. exp
  44. exp2
  45. file_mode
  46. file_name
  47. floor
  48. get_column_from_x
  49. get_data_size_in_bytes
  50. get_row_data
  51. get_row_from_y
  52. get_value
  53. get_value_as_hsi
  54. get_value_as_rgba
  55. get_x_from_column
  56. get_y_from_row
  57. increment
  58. increment_row_data
  59. is_nodata
  60. ln
  61. log10
  62. log2
  63. max
  64. min
  65. new_from_other
  66. normalize
  67. num_cells
  68. num_valid_cells
  69. raster_type
  70. reinitialize_values
  71. set_data_from_raster
  72. set_row_data
  73. set_value
  74. set_value_from_rgba
  75. signum
  76. sin
  77. sinh
  78. size_of
  79. sqrt
  80. square
  81. tan
  82. tanh
  83. to_degrees
  84. to_radians
  85. trunc
  86. update_display_min_max
  87. update_min_max

__abs__

abs(self)

Function Signature

def __abs__(self) -> Raster: ...

__add__

Return self+value.

Function Signature

def __add__(self, other: Union[Raster, float]) -> Raster: ...

__eq__

Return self==value.

Function Signature

def __eq__(self, other: Union[Raster, float]) -> Raster: ...

__floordiv__

Return self//value.

Function Signature

def __floordiv__(self, other: Union[Raster, float]) -> Raster: ...

__ge__

Return self>=value.

Function Signature

def __ge__(self, other: Union[Raster, float]) -> Raster: ...

__getitem__

Return self[key].

Function Signature

def __getitem__(self, row_column: Tuple[int, int]) -> float: ...

__getstate__

Helper for pickle.

__gt__

Return self>value.

Function Signature

def __gt__(self, other: Union[Raster, float]) -> Raster: ...

__iadd__

Return self+=value.

Function Signature

def __iadd__(self, other: Union[Raster, float]) -> None: ...

__idiv__

No documentation found.

Function Signature

def __idiv__(self, other: Union[Raster, float]) -> None: ...

__imul__

Return self*=value.

Function Signature

def __imul__(self, other: Union[Raster, float]) -> None: ...

__init__

Initialize self. See help(type(self)) for accurate signature.

Function Signature

def __init__(self) -> None: ...

__isub__

Return self-=value.

Function Signature

def __isub__(self, other: Union[Raster, float]) -> None: ...

__le__

Return self<=value.

Function Signature

def __le__(self, other: Union[Raster, float]) -> Raster: ...

__lt__

Return self<value.

Function Signature

def __lt__(self, other: Union[Raster, float]) -> Raster: ...

__mod__

Return self%value.

Function Signature

def __mod__(self, other: Union[Raster, float]) -> Raster: ...

__mul__

Return self*value.

Function Signature

def __mul__(self, other: Union[Raster, float]) -> Raster: ...

__ne__

Return self!=value.

Function Signature

def __ne__(self, other: Union[Raster, float]) -> Raster: ...

__neg__

-self

Function Signature

def __neg__(self) -> Raster: ...

__pow__

Return pow(self, value, mod).

Function Signature

def __pow__(self, other: Union[Raster, float], modulo: Optional[float] = None) -> Raster: ...

__setattr__

Implement setattr(self, name, value).

__setitem__

Set self[key] to value.

Function Signature

def __setitem__(self, row_column: Tuple[int, int], value: float) -> None: ...

__sub__

Return self-value.

Function Signature

def __sub__(self, other: Union[Raster, float]) -> Raster: ...

__truediv__

Return self/value.

Function Signature

def __truediv__(self, other: Union[Raster, float]) -> Raster: ...

acos

No documentation found.

Function Signature

def acos(self) -> Raster: ...

acosh

No documentation found.

Function Signature

def acosh(self) -> Raster: ...

asin

No documentation found.

Function Signature

def asin(self) -> Raster: ...

asinh

No documentation found.

Function Signature

def asinh(self) -> Raster: ...

atan

No documentation found.

Function Signature

def atan(self) -> Raster: ...

atan2

No documentation found.

Function Signature

def atan2(self, other: Union[Raster, float]) -> Raster: ...

atanh

No documentation found.

Function Signature

def atanh(self) -> Raster: ...

calculate_clip_values

No documentation found.

Function Signature

def calculate_clip_values(self, percent: float) -> Tuple[float, float]: ...

calculate_mean

No documentation found.

Function Signature

def calculate_mean(self) -> float: ...

calculate_mean_and_stdev

No documentation found.

Function Signature

def calculate_mean_and_stdev(self) -> Tuple[float, float]: ...

ceil

No documentation found.

Function Signature

def ceil(self) -> Raster: ...

con

This method can be used to perform an if-then-else style conditional evaluation on a raster image on a cell-by-cell basis. The user specifies a conditional statement. The grid cell values in the output image will be determined by the TRUE and FALSE values and the conditional statement. The conditional statement is a logical expression that must evaluate to a Boolean, i.e. TRUE or FALSE. Then, depending on how this statement evaluates for each grid cell, the TRUE or FALSE value will be assigned to the corresponding grid cell of the output raster. The TRUE or FALSE values may take the form of either a constant numerical value, an existing raster image (which may be the same image as the input), or any of the strings 'null', 'nodata', or 'value'.

The conditional statement is a single-line logical condition. In addition to the common comparison and logical operators, i.e. < > <= >= == (EQUAL TO) != (NOT EQUAL TO) || (OR) && (AND) (Note: or, OR, and, and AND are also valid operators), conditional statements may contain a number of valid mathematical functions. For example:

 * log(base=10, val) -- Logarithm with optional 'base' as first argument.
 If not provided, 'base' defaults to '10'.
 Example: log(100) + log(e(), 100)

 * e()  -- Euler's number (2.718281828459045)
 * pi() -- π (3.141592653589793)

 * int(val)
 * ceil(val)
 * floor(val)
 * round(modulus=1, val) -- Round with optional 'modulus' as first argument.
     Example: round(1.23456) == 1 && round(0.001, 1.23456) == 1.235

 * abs(val)
 * sign(val)

 * min(val, ...) -- Example: min(1, -2, 3, -4) == -4
 * max(val, ...) -- Example: max(1, -2, 3, -4) == 3

 * sin(radians)    * asin(val)
 * cos(radians)    * acos(val)
 * tan(radians)    * atan(val)
 * sinh(val)       * asinh(val)
 * cosh(val)       * acosh(val)
 * tanh(val)       * atanh(val)

Notice that the constants Pi and e must be specified as functions, pi() and e(). A number of global variables are also available to build conditional statements. These include the following:

Special Variable Names For Use In Conditional Statements:

Name         Description
value        The grid cell value.
nodata       The input raster's NoData value.
null         Same as nodata.
minvalue     The input raster's minimum value.
maxvalue     The input raster's maximum value.
rows         The input raster's number of rows.
columns      The input raster's number of columns.
row          The grid cell's row number.
column       The grid cell's column number.
rowy         The row's y-coordinate.
columnx      The column's x-coordinate.
north        The input raster's northern coordinate.
south        The input raster's southern coordinate.
east         The input raster's eastern coordinate.
west         The input raster's western coordinate.
cellsizex    The input raster's grid resolution in the x-direction.
cellsizey    The input raster's grid resolution in the y-direction.
cellsize     The input raster's average grid resolution.

The special variable names are case-sensitive. Each of the special variable names can also be used as valid TRUE or FALSE constant values.

The following are examples of valid conditional statements:

value != 300.0

row > (rows / 2)

value >= (minvalue + 35.0)

(value >= 25.0) && (value <= 75.0)

tan(value * pi() / 180.0) > 1.0

value == nodata

Any grid cell in the input raster containing the NoData value will be assigned NoData in the output raster, unless a NoData grid cell value allows the conditional statement to evaluate to True (i.e. the conditional statement includes the NoData value), in which case the True value will be assigned to the output.

Function Signature

def con(self, con_statement: str, true_raster_or_float: Union[Raster, float, str], false_raster_or_float: Union[Raster, float, str]) -> Raster: ...
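
For example, the following sketch (using a hypothetical input raster) applies the documented signature to reclassify low-lying cells and to mask negative values with NoData:

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # hypothetical input raster

# Cells below 100.0 become 0.0; all other cells keep their original value
reclassed = dem.con('value < 100.0', 0.0, dem)

# Assign NoData to negative cells and keep the original value elsewhere
masked = dem.con('value < 0.0', 'nodata', 'value')

wbe.write_raster(masked, 'masked_dem.tif')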

configs

Function Signature

def configs(self) -> RasterConfigs: ...

cos

No documentation found.

Function Signature

def cos(self) -> Raster: ...

cosh

No documentation found.

Function Signature

def cosh(self) -> Raster: ...

decrement

No documentation found.

Function Signature

def decrement(self, row: int, column: int, value: float) -> None: ...

decrement_row_data

No documentation found.

Function Signature

def decrement_row_data(self, row: int, values: List[float]) -> None: ...

deep_copy

Makes a deep copy of a Raster, returning the new Raster object.

Function Signature

def deep_copy(self) -> Raster: ...

exp

No documentation found.

Function Signature

def exp(self) -> Raster: ...

exp2

No documentation found.

Function Signature

def exp2(self) -> Raster: ...

file_mode

Function Signature

def file_mode(self) -> str: ...

file_name

Function Signature

def file_name(self, value: str) -> None: ...

floor

No documentation found.

Function Signature

def floor(self) -> Raster: ...

get_column_from_x

No documentation found.

Function Signature

def get_column_from_x(self, x: float) -> int: ...

get_data_size_in_bytes

Returns the size of the pixel data in bytes.

Function Signature

def get_data_size_in_bytes(self) -> int: ...

get_row_data

No documentation found.

Function Signature

def get_row_data(self, row: int) -> List[float]: ...

get_row_from_y

No documentation found.

Function Signature

def get_row_from_y(self, y: float) -> int: ...

get_value

Returns the value contained within a grid cell specified by row and column.

Function Signature

def get_value(self, row: int, column: int) -> float: ...
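
As a brief illustration of cell-level access using this method and the related helpers documented below (the file name, indices, and coordinates are illustrative only):

import whitebox_workflows

wbe = whitebox_workflows.WbEnvironment()
dem = wbe.read_raster('dem.tif')  # hypothetical input raster

# Read the value at row 100, column 250, then write back a modified value
val = dem.get_value(100, 250)
dem.set_value(100, 250, val + 1.0)

# Convert map coordinates to grid indices before reading a value
col = dem.get_column_from_x(500000.0)
row = dem.get_row_from_y(4650000.0)
val = dem.get_value(row, col)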

get_value_as_hsi

Returns the hue, saturation, and intensity equivalent of a grid cell. This assumes that the grid cell contains red, green, blue data, i.e. that the DataType is RGB.

Function Signature

def get_value_as_hsi(self, row: int, column: int) -> Tuple[float, float, float]: ...

get_value_as_rgba

Returns the red, green, blue, and opacity values from a grid cell, assuming that the cell contains colour data, i.e. that the DataType is RGB.

Function Signature

def get_value_as_rgba(self, row: int, column: int) -> Tuple[int, int, int, int]: ...

get_x_from_column

No documentation found.

Function Signature

def get_x_from_column(self, column: int) -> float: ...

get_y_from_row

No documentation found.

Function Signature

def get_y_from_row(self, row: int) -> float: ...

increment

No documentation found.

Function Signature

def increment(self, row: int, column: int, value: float) -> None: ...

increment_row_data

No documentation found.

Function Signature

def increment_row_data(self, row: int, values: List[float]) -> None: ...

is_nodata

No documentation found.

Function Signature

def is_nodata(self) -> Raster: ...

ln

No documentation found.

Function Signature

def ln(self) -> Raster: ...

log10

No documentation found.

Function Signature

def log10(self) -> Raster: ...

log2

No documentation found.

Function Signature

def log2(self) -> Raster: ...

max

No documentation found.

Function Signature

def max(self, other: Union[Raster, float]) -> Raster: ...

min

No documentation found.

Function Signature

def min(self, other: Union[Raster, float]) -> Raster: ...

new_from_other

Creates a new in-memory Raster object with the same grid extent and location as an existing Raster object (other), optionally specifying the data type of the new raster (data_type).

Function Signature

def new_from_other(other: Raster, data_type: Optional[RasterDataType]) -> Raster: ...

normalize

No documentation found.

Function Signature

def normalize(self) -> Raster: ...

num_cells

No documentation found.

Function Signature

def num_cells(self) -> int: ...

num_valid_cells

No documentation found.

Function Signature

def num_valid_cells(self) -> int: ...

raster_type

Function Signature

def raster_type(self) -> RasterType: ...

reinitialize_values

No documentation found.

Function Signature

def reinitialize_values(self, value: float) -> None: ...

set_data_from_raster

No documentation found.

Function Signature

def set_data_from_raster(self, other: Raster) -> Optional[str]: ...

set_row_data

No documentation found.

Function Signature

def set_row_data(self, row: int, values: List[float]) -> None: ...

set_value

No documentation found.

Function Signature

def set_value(self, row: int, column: int, value: float) -> None: ...

set_value_from_rgba

No documentation found.

Function Signature

def set_value_from_rgba(self, row: int, column: int, rgba: Tuple[int, int, int, int]) -> None: ...

signum

Returns a raster in which each cell is assigned a number representing the sign of the corresponding grid cell in the source raster. The transformation follows the rules below:

  • 1.0 if the number is positive, +0.0 or INFINITY
  • -1.0 if the number is negative, -0.0 or NEG_INFINITY
  • NaN if the number is NaN

Function Signature

def signum(self) -> Raster: ...

sin

No documentation found.

Function Signature

def sin(self) -> Raster: ...

sinh

No documentation found.

Function Signature

def sinh(self) -> Raster: ...

size_of

No documentation found.

Function Signature

def size_of(self) -> int: ...

sqrt

No documentation found.

Function Signature

def sqrt(self) -> Raster: ...

square

No documentation found.

Function Signature

def square(self) -> Raster: ...

tan

No documentation found.

Function Signature

def tan(self) -> Raster: ...

tanh

No documentation found.

Function Signature

def tanh(self) -> Raster: ...

to_degrees

No documentation found.

Function Signature

def to_degrees(self) -> Raster: ...

to_radians

No documentation found.

Function Signature

def to_radians(self) -> Raster: ...

trunc

No documentation found.

Function Signature

def trunc(self) -> Raster: ...

update_display_min_max

No documentation found.

update_min_max

No documentation found.

Function Signature

def update_min_max(self) -> None: ...