
10 ways conservation tech shifted into auto in 2018

Sue Palminteri | Jan 03, 2019


This article was originally published by Mongabay and is republished with permission.

  • Conservation scientists are increasingly automating their research and monitoring work, to make their analyses faster and more consistent; moreover, machine learning algorithms and neural networks constantly improve as they process additional information.
  • Pattern recognition detects species by their appearance or calls, quantifies changes in vegetation from satellite images, and tracks the movements of fishing ships on the high seas.
  • Automating even part of the analysis process, such as eliminating images with no animals, substantially reduces processing time and cost.
  • Automated recognition of target objects requires a reference database: the species and objects used to create the algorithm determine the universe of species and objects the system will then be able to identify.

You can now speak into your mobile phone and have it produce written text. Then have Google translate your words into Japanese or Hindi. In two seconds. These examples of artificial intelligence automate tasks that used to require experts and numerous hours to complete.

Our 2018 review of tech stories highlights how automation is transforming how we study and monitor nature, aggregating and analyzing multiple data sources, and communicating the results as management and conservation recommendations. Artificial intelligence (AI), including various types of pattern recognition algorithms and neural networks, is changing how the research and conservation communities do their work. We highlight just some examples below.

1. Image pattern recognition for detecting species

Pattern recognition includes the automated detection of target objects in photos – taken by people, drones, satellites, or cameras traps – as well as in sound files collected with acoustic monitoring devices. As image datasets grow, the availability of pattern recognition algorithms becomes more critical.

For example, one study showed that deep neural networks, a type of machine learning that recognizes numerical patterns in stages, can review and categorize image data far faster than humans can, and nearly as accurately. These networks automatically counted, identified, and described the behaviors of 48 animal species in the 3.2-million-image Snapshot Serengeti camera trap dataset, which would translate to saving more than 17,000 hours, or nearly 8.5 years, of human labeling effort.
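The "8.5 years" figure follows from simple arithmetic. This sketch assumes a 40-hour work week and 50 working weeks per year, figures not stated in the study itself:

```python
# Rough arithmetic behind the "8.5 years" figure: labeling hours the
# network replaces, expressed in full-time human work years.
hours_saved = 17_000           # labeling hours replaced by the network
hours_per_work_year = 40 * 50  # assumed: 40-hour weeks, 50 weeks/year

work_years = hours_saved / hours_per_work_year
print(f"{work_years:.1f} work years")  # → 8.5 work years
```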

A large leopard crosses between a pair of camera traps at sunset. The Karnataka study surveyed leopards in a variety of vegetation and land use types. Image copyright of Nature Conservation Foundation.

In India, an AI-based study identified 363 different leopards from 1.5 million images collected over a six-year statewide study, based on the unique rosette patterns on their bodies. The trained recognition algorithm first removed "noise" (images without animals) and then identified, from the remaining hundreds of thousands of images, those containing the target species, faster than humans could.
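The filter-then-classify workflow described above can be sketched as a two-stage pipeline. The detector and classifier here are toy stand-ins for the trained models, and all names and sample data are hypothetical:

```python
from typing import Callable

def two_stage_id(images, has_animal: Callable, classify: Callable,
                 target: str):
    """Stage 1: discard 'noise' images with no animal.
    Stage 2: keep only images the classifier labels as the target."""
    with_animals = [img for img in images if has_animal(img)]
    return [img for img in with_animals if classify(img) == target]

# Toy stand-ins for the trained models (hypothetical):
images = [{"id": 1, "content": None},       # empty frame ("noise")
          {"id": 2, "content": "leopard"},  # target species
          {"id": 3, "content": "deer"}]     # other species
hits = two_stage_id(images,
                    has_animal=lambda im: im["content"] is not None,
                    classify=lambda im: im["content"],
                    target="leopard")
print([im["id"] for im in hits])  # → [2]
```

The benefit of the staged design is that the cheap "empty or not" filter runs on every image, while the more expensive species classifier only sees the survivors.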

Volunteers, such as those using wpsWatch to scour camera trap photos for signs of poachers, and citizen scientists documenting sightings in species databases such as iNaturalist and eBird, continue to help conservation projects, but they are increasingly being assisted by new machine learning tools. For example, iNaturalist uses computer vision to help its app users identify local species by offering them several suggestions based on what is in the photo. Wildbook uses computer vision to determine, based on an animal’s unique markings, whether it is a new individual or an animal already in its database.

The United Nations Food and Agriculture Organization (FAO) developed an AI-based app, called Nuru, which helps farmers photograph and identify damage to their plants from fall armyworm and share that information with authorities who can map the distribution and spread of the pest.

Farmers in Busia, Kenya, using the Nuru app, which automates detection of fall armyworm damage to maize or cassava plants. Image by Harun Murithi, courtesy of the International Institute of Tropical Agriculture (IITA).

Automated recognition of humans in infrared images, combined with near-real-time transmission, can turn a remotely positioned camera into a tool for species protection: ranger teams can quickly respond to potential poachers detected moving along trails. Such recognition algorithms still face challenges, such as identifying poachers whose human silhouettes are distorted by meat carried on their shoulders.

2. Detecting species at night, with help from astronomy

Much wildlife movement, as well as illegal human activity, in nature occurs at night, when taking optical images would require a bright flash. The heat-detection software and associated machine-learning algorithms that astronomers use to find stars lend themselves well to automatically detecting animals and humans in thermal imagery on the ground.

Eland in South Africa appear in this thermal-infrared image as warm objects against the cooler water. Their horns and legs are cooler than their heads and necks. Heat energy can be reflected and absorbed, just like light energy, so you can see the heat from the animals reflected off the water’s surface as they approach it to drink. Image courtesy of Liverpool John Moores University in partnership with the Endangered Wildlife Trust.

One collaboration between astrophysicist and biologist neighbors produced a system that detected warm, living objects in drone-derived thermal video footage. Once they trained the machine learning algorithm to recognize people and animals instead of galaxies, they found that animal species have unique thermal fingerprints, which the algorithm could distinguish more efficiently than humans analyzing the infrared footage visually.
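The core idea of finding warm bodies in a thermal frame can be illustrated with a minimal threshold-and-cluster sketch. The real systems use astronomical source-detection software and trained classifiers; the temperatures, frame, and 4-connected flood fill below are illustrative assumptions:

```python
import numpy as np

def warm_regions(frame: np.ndarray, threshold: float) -> int:
    """Count connected clusters of pixels warmer than `threshold`
    (4-connectivity) — a crude stand-in for detecting warm bodies
    in a thermal frame."""
    warm = frame > threshold
    seen = np.zeros_like(warm, dtype=bool)
    count = 0
    for i in range(warm.shape[0]):
        for j in range(warm.shape[1]):
            if warm[i, j] and not seen[i, j]:
                count += 1                    # new warm object found
                stack = [(i, j)]
                while stack:                  # flood-fill its pixels
                    y, x = stack.pop()
                    if (0 <= y < warm.shape[0] and 0 <= x < warm.shape[1]
                            and warm[y, x] and not seen[y, x]):
                        seen[y, x] = True
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return count

# Two warm blobs (animals) on a cool 15 °C background:
frame = np.full((6, 6), 15.0)
frame[1:3, 1:3] = 35.0   # animal 1
frame[4, 4] = 34.0       # animal 2
print(warm_regions(frame, threshold=30.0))  # → 2
```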

3. Interpreting aerial imagery and combining aerial data types

As satellites, planes, and now drones carry an array of sensors, automated processing of these data has become standard practice. Image interpretation involves training software to automatically distinguish reflectance patterns. For example, scientists in California used ground-truthed aerial surveys to develop predictive models to identify potential wading bird habitat.

A gathering of marbled godwits, dowitchers, willets, and other shorebirds at Arrowhead Marsh, Oakland, California. Satellite imagery, combined with avid birdwatchers and artificial intelligence, predicted bird habitat to inform a habitat restoration program. Image by Ingrid Taylar, CC 2.0.

Automated image processing already enables practitioners to quantify changes in vegetation cover over time as a basis for online forest monitoring platforms such as Global Forest Watch. These systems will likely play an increasing role in detecting and interpreting forest loss and allowing the identification of major deforestation drivers. A team of researchers recently developed automated algorithms to predict timber extraction from commercial LiDAR data.
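A common building block for quantifying vegetation change from satellite reflectance is the Normalized Difference Vegetation Index (NDVI). Platforms like Global Forest Watch use far more sophisticated change-detection pipelines; this is a minimal NDVI-difference sketch with made-up band values and an assumed drop threshold:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red).
    Healthy vegetation reflects strongly in near-infrared (high NDVI)."""
    return (nir - red) / (nir + red)

def loss_mask(ndvi_before, ndvi_after, drop=0.3):
    """Flag pixels whose NDVI fell by more than `drop` between two
    dates — a crude proxy for vegetation loss."""
    return (ndvi_before - ndvi_after) > drop

# Illustrative 2x2 scenes: one pixel loses its vegetation signal.
before = ndvi(np.array([[0.8, 0.8], [0.7, 0.2]]),   # NIR, date 1
              np.array([[0.1, 0.1], [0.1, 0.1]]))   # Red, date 1
after = ndvi(np.array([[0.8, 0.2], [0.7, 0.2]]),    # NIR, date 2
             np.array([[0.1, 0.1], [0.1, 0.1]]))    # Red, date 2
print(loss_mask(before, after))  # only the top-right pixel is flagged
```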

Amazon deforestation in southwestern Brazil follows the building of roads, as shown in this satellite image. Image courtesy of NASA Goddard Space Flight Center.

Automated combining of data from various sensors, known as data fusion, allows new types of analyses of the extent and health of vegetation and of plant species distributions. Radar works day and night and can determine the structure of objects even under cloud cover; radar data added to Brazil’s monthly Amazon deforestation monitoring suggested that prior deforestation rates were likely underestimates. By combining radar with optical imagery, Kenya’s forest agency has recently improved its mapping of coastal and mangrove forests, which frequently sit under persistent cloud cover.

Researchers in Australia have built AI algorithms to recognize the unique hyperspectral signatures of underwater objects, such as sand, algae, and corals. The scientists can use drone-mounted cameras to accurately survey reefs and assess the composition and health of coral reefs over time without having to dive.

Queensland University of Technology’s unmanned aerial vehicle-mounted hyperspectral camera in action over the Ningaloo Reef in Australia. As artificial intelligence software for identifying spectral signatures improves, the analysis of hyperspectral data has the potential to become largely automated. Image credit: Felipe Gonzalez.

Concurrently, the Reefscape project is compiling a reference database using imaging spectroscopy, which measures the spectral patterns of sunlight scattered and absorbed by corals or other objects. These patterns differ based on an object’s unique chemical makeup, and so can be incorporated into automated algorithms to remotely assess changes in reef health over time.

4. Automating drone-based wildlife surveys

The Great Elephant Census, completed by observers in small planes flying over great expanses of Africa, may be the last study of its kind. Now, researchers are moving to drones and automated identification to monitor terrestrial (and even marine) animals more easily, cheaply, and safely. They are also using automated machine learning algorithms to make sense of the flood of image and video data drone-based studies generate.

A doctoral study creatively combined drones, computer-vision algorithms, and rubber ducks to demonstrate that counting seabirds from aerial imagery can be more accurate than traditional ground survey methods.

A large replica tern colony seen by the UAV camera. The researchers set up 10 colonies with between 463 and 1,017 birds in each; counts by ground counters were compared to those by volunteers counting birds in the UAV-derived photos. Image copyright of Jarrod Hodgson.

Researchers in Namibia used convolutional neural networks, a type of artificial intelligence, to survey savanna wildlife by analyzing thousands of images captured during drone flights.

5. Decoding bat language and other sounds

Passive acoustic monitoring devices can listen 24/7 for sounds in nature and produce sound recordings that trained algorithms can decipher. As these devices become more effective at detecting sounds of humans or other target species, more researchers are using them to detect the presence and behavior of animals that vocalize, such as birds and bats.

Brazilian free-tailed bats (a.k.a. Mexican free-tailed bats) roost in large numbers in caves at a few sites, making the species vulnerable to population decline. Bats emit ultrasonic pulses that automated algorithms can decipher given sufficient reference sound data. Image by Nick Hristov, National Park Service.

“Windmill farms are a menace for birds…and bats,” Uruguayan researcher Germán Botto told Mongabay. Botto and colleagues designed a machine-learning algorithm that predicts bat species present near wind farms based on recordings of their ultrasound pulses, to inform wind farm environmental impact studies. Automated analysis of the soundwave signals allowed the team to characterize diversity, abundance and habits, even when they could not see the animals.

Botto and colleagues had to create their own bat sound reference library because local bat dialects differed from existing recordings. Collecting sounds to build the database requires a lot of nocturnal legwork and identification. Once the database is in place, though, the algorithm can identify the callers increasingly faster and more reliably. The scientists are collaborating with wind farm companies to help expand the reference library and improve the algorithm’s accuracy and speed.
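The role of a reference library can be illustrated with a nearest-neighbor matcher over simple call features. The team's actual algorithm and the feature values below are not described in the article; the species names, frequencies, and durations here are purely illustrative assumptions:

```python
import math

# Hypothetical reference library: species -> (peak frequency in kHz,
# call duration in ms). Values are illustrative, not real measurements.
REFERENCE = {
    "species_a": (25.0, 10.0),
    "species_b": (45.0, 4.0),
}

def identify_call(peak_khz: float, duration_ms: float) -> str:
    """Match a recorded ultrasonic pulse to the nearest reference
    entry in feature space — a minimal stand-in for the trained
    classifier, which only knows species present in the library."""
    return min(REFERENCE,
               key=lambda sp: math.dist(REFERENCE[sp],
                                        (peak_khz, duration_ms)))

print(identify_call(43.0, 5.0))  # → species_b
```

Note the limitation the article raises: a call from a species absent from the reference library will still be forced onto the nearest known species, which is why expanding the library matters.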

6. Species recognition from movement patterns

Pattern recognition can even help ensure you have a great day at the beach.

An AI-integrated float system called Clever Buoy combines sonar to detect a large object in the water, artificial intelligence to determine that the object is a shark close enough to threaten beach-goers, and automated SMS alerts to lifeguards that enable them to take action.

The virtual net created by two Clever Buoy automated shark detection and alert units each connected to three sonar transducers at City Beach, near Perth, Australia. Image courtesy of Smart Marine Systems.

The AI on the solar-powered device automates the recognition of large sharks by their size, speed, and signature swim patterns, which together produce a sonar signature different from those of smaller sharks and other marine animals.
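The size-speed-swim-pattern logic can be sketched as a simple rule-based classifier over sonar track features. Clever Buoy's actual model and thresholds are proprietary and not given in the article; every number and name below is an illustrative assumption:

```python
def classify_track(length_m: float, speed_ms: float,
                   tailbeat_hz: float) -> str:
    """Rule-of-thumb sonar-track classifier: a large, slow-moving
    object with a shark-like tail-beat rhythm triggers an alert.
    Thresholds are illustrative assumptions."""
    if length_m >= 2.0 and speed_ms <= 3.0 and 0.5 <= tailbeat_hz <= 2.0:
        return "possible shark - alert lifeguards"
    return "other marine object"

print(classify_track(length_m=2.8, speed_ms=1.5, tailbeat_hz=1.0))
# → possible shark - alert lifeguards
```

In a deployed system the alert branch would feed the automated SMS notification to lifeguards that the article describes.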

7. Filling in the gaps: Managing endangered species on the high seas

Researchers can already monitor animal movements by automating the collection of animal location data. Programming tracking tags to collect data on a regular, predetermined schedule automates the job of physically going out and locating the animal.

Programming tracking tags to receive GPS transmissions on a planned “duty” cycle that turns the tag on and off saves battery life and allows multi-year tracking of marine animals. GPS doesn’t work below the ocean’s surface, however, creating gaps in data transmission when an animal dives for a long time.
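The battery-life payoff of a duty cycle is easy to quantify. The current draws and battery capacity below are assumed example figures, not specifications of any real tag:

```python
def tracking_days(battery_mah: float, active_ma: float,
                  sleep_ma: float, duty_fraction: float) -> float:
    """Estimated deployment length for a tag that is 'on' only a
    fraction of the time. All figures here are assumed examples."""
    avg_ma = duty_fraction * active_ma + (1 - duty_fraction) * sleep_ma
    return battery_mah / avg_ma / 24  # hours of runtime -> days

# Always-on vs. a 5% duty cycle (illustrative numbers):
print(round(tracking_days(2000, 50, 0.1, 1.00), 1))  # → 1.7 days
print(round(tracking_days(2000, 50, 0.1, 0.05), 1))  # → 32.1 days
```

Under these assumptions, spending 95% of the time asleep stretches the same battery from under two days to about a month, which is how multi-year marine tracking becomes feasible.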

Male elephant seal C49 with two tags on its head. The satellite tag allows researchers to track the animal globally and transmits when the animal surfaces. The smaller radio transmitter tag allows the instrument to be recovered when the animal is on the beach. With their deep dives and rapid surface intervals, elephant seals are difficult to locate visually. Image by Daniel Costa, University of California, Santa Cruz. CC BY-NC-SA 2.0.

“Northern elephant seals may dive for 20 minutes and then surface just a minute or two,” said researcher Autumn-Lynn Harrison. “So unlike whales, that you can see and report to a website, when you’re in the high seas, you may never see an elephant seal. It’s just their brown head that pops up against a vast ocean.” Marine researchers have applied modeling techniques to fill in the information gaps about how marine animals move through the oceans.

8. Technological breakthroughs changing how researchers observe the world’s fishing fleet

On the vast ocean, most of what fishing vessels do goes unrecorded, and catch data reported by ship captains are incomplete and potentially biased.

Now, more and more fishing vessels are equipped with a device that regularly sends out an automatic identification system, or AIS, signal. The signal carries information about the vessel, including its name, country of origin, speed and position.

Hundreds of fishing vessels (blue lights) plying the waters between Taiwan (center foreground) and China are seen from a satellite orbiting the earth. Seeing lights where no Automated Identification System (AIS) signal is transmitting could suggest a vessel acting illegally. Groups monitoring commercial fishing have combined the AIS signal data with other information, such as high-resolution satellite images, which show them which vessels are broadcasting AIS and which are not. Image courtesy of NASA Johnson/Flickr.

A growing number of satellites gather these signals, and together they have provided authorities and researchers monitoring the global fishing industry with a fairly complete picture of where ships are and where they’ve been, at any given time, encapsulated in billions of individual AIS signals.

The Global Fishing Watch collaboration has created algorithms to process the masses of AIS data to show a ship’s fishing activity and travel path. Advances in AI have enabled the researchers to see patterns in this flood of data, such as fishing intensity, fishing within protected areas, or transshipment events.
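One widely used heuristic in this kind of analysis is that vessels move slowly while actively fishing. This sketch flags low-speed AIS points and checks them against a protected-area boundary; the speed threshold, coordinates, and bounding-box geometry are illustrative assumptions, not Global Fishing Watch's actual algorithms:

```python
# Minimal sketch: flag likely fishing from AIS points by low speed,
# then check whether flagged points fall inside a protected area.
# Thresholds and coordinates are illustrative assumptions.

PROTECTED = {"lat": (-2.0, -1.0), "lon": (10.0, 11.0)}  # bounding box

def likely_fishing(point, max_speed_knots=4.0):
    """Slow movement is a crude proxy for active fishing."""
    return point["speed"] <= max_speed_knots

def in_protected(point, box=PROTECTED):
    return (box["lat"][0] <= point["lat"] <= box["lat"][1]
            and box["lon"][0] <= point["lon"] <= box["lon"][1])

track = [
    {"lat": -1.5, "lon": 10.5, "speed": 2.0},   # slow, inside the box
    {"lat": -3.0, "lon": 12.0, "speed": 12.0},  # fast transit, outside
]
alerts = [p for p in track if likely_fishing(p) and in_protected(p)]
print(len(alerts))  # → 1
```

Real systems replace the speed rule with trained models over full movement tracks and the bounding box with actual marine protected area polygons, but the flag-and-intersect structure is the same.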

9. Building of structures through 3D printing

Programming the automated construction of artificial coral structures could assist reef restoration efforts by offering dispersing coral larvae an unexposed place to develop, buffered from predators and water currents. The structure would have predesigned contours and shapes that mimic natural reefs, so that larvae can easily attach themselves and foster new coral growth.

With conservation’s frequent need for customized equipment, programmable 3D printing could increasingly allow teams to build custom toolkits, such as that of a portable mosquito identification toolbox to help monitor the spread of disease-carrying mosquito species.

10. Harvesting volunteer interest and effort

Citizen science platforms rely on the curiosity and legwork of dedicated volunteers to provide photos or information to public online platforms that compile huge amounts of data on species locations. Crowdsourced image identification platforms, such as iNaturalist and eBird, have begun to automate the processing of thousands of these citizen science records, which has sped up the identification of more common species.

These machine learning methods rely on large photo databases and are only as good as the data they have seen before. “The model only works as well as it has been trained,” iNaturalist lead Scott Loarie told Mongabay. Models also lack an expert’s sense of context and gut feeling.

Fiddler crabs congregating. Image by Andrea Westmoreland via Wikimedia Commons, CC 2.0.

An observation earlier this year by an amateur naturalist of a fiddler crab species far outside its known range challenged both iNaturalist’s computer vision and human expertise in mapping species distributions. Expert input via iNaturalist correctly identified the specimen after the platform’s computer vision algorithms failed to recognize the species outside its documented range, showing how crowdsourced observations can help improve automated models.

Prerequisites for applying automated methods

Automated recognition of species via sound or sight relies on a reference database to which new data are compared: the species and objects used to build the algorithm determine which species and objects the system will be able to identify.

Automation often requires large amounts of data collected and labeled by humans to train algorithms to recognize objects or patterns of interest. For example, more than 50,000 Snapshot Serengeti volunteers reviewed millions of images and recorded the number, species, and behavior of the animals in them.

A 2018 study found that deep neural networks (DNNs), a cutting-edge form of artificial intelligence, can successfully identify, count, and describe animals in camera-trap images. DNN model output is often interpreted as a probability that the image belongs to a certain class. The three plots below the image, from left to right, show the neural network’s prediction for the species, number, and behavior of the animals in the image. The horizontal color bars indicate how confident the neural network is about its predictions. Image by Norouzzadeh et al. 2018. PNAS.

Some studies require specialists to program acoustic devices or pilot a drone to collect data, build algorithms for project-specific pattern recognition, and integrate the models into analysis software. Independent human input and expertise are also still needed to avoid circular data input, such as building algorithms using photo verification or other human input that is itself based on AI.

Finally, automated processing of millions of images or billions of AIS vessel location signals requires large amounts of computing power that is not universally available, though cloud computing may help reduce this hurdle.
