

MIT and Penn State join forces with AWS to help classify images of flood disaster zones

2020-12-29 | Editor: houxue2018
Category: News

Abstract

Researchers at MIT Lincoln Laboratory and students at the Penn State College of Information Sciences and Technology have been working on artificial intelligence computer models that use disaster scene images to inform responders about flooding.

Content

Researchers at MIT Lincoln Laboratory and students at the Penn State College of Information Sciences and Technology have been working on artificial intelligence computer models that use disaster scene images to inform responders about flooding.

For humans, this process is relatively easy. But when a dataset is made up of more than 100,000 aerial images that vary in altitude, cloud cover, context and area, and must be processed in a matter of days or hours, computers become a necessity. That's when researchers turned to Amazon Web Services Inc. to use its cloud services.

Students at Penn State began with a project that analyzed imagery from the Low Altitude Disaster Imagery, or LADI, dataset, a collection of aerial images taken above disaster scenes since 2015, to train the computer vision algorithm.

AWS does most of the heavy lifting by providing the compute resources to train computer vision algorithms to understand the difference between lakes – which are clearly not flood zones – and actual flooding. In this manner, when a disaster happens, the machine learning algorithm is fed aerial images and can quickly flag flood zones for rapid responders, who can look over the photos to see where they may be needed.

Making a guess about the difference between an image being a flood zone or not could be as easy as asking, “Is there a clear shoreline with discernible sand” or “Are there visible trees sticking out of water?”

Although that might seem easy for humans, it’s not that easy for computers. For example, in 2019 a leading computer vision benchmark mislabeled a flooded region as a “toilet” and a highway surrounded by flooding as a “runway.” When the computer is less confident about a label, the solution is to add a human.

Augmenting AI with human intelligence

Thus, the machine learning and LADI dataset portion of the project is only half of the puzzle. The other part is humans from Amazon’s Mechanical Turk who come into play when the machine learning algorithm is not confident about an image being a flood zone.

MTurk, as it’s often called for short, is a crowdsourcing marketplace where individuals and businesses outsource tasks to a virtual workforce – in this case, image classification. In this manner, MTurk workers review and label images to shore up any gaps in the algorithm, adding a human element.
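The human-in-the-loop pattern described above – accept the classifier's answer when it is confident, and queue the image for human review when it is not – can be sketched in a few lines. This is an illustrative toy, not the project's actual code; the function names, the label set and the 0.8 confidence threshold are all assumptions.

```python
# Hypothetical sketch of confidence-based routing: high-confidence
# predictions are accepted automatically, while low-confidence ones
# are sent to a human review queue (the role MTurk workers play here).
# The 0.8 threshold and all names are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.8

def route_prediction(image_id, label, confidence, threshold=CONFIDENCE_THRESHOLD):
    """Return ('auto', label) when confident, ('human_review', label) otherwise."""
    if confidence >= threshold:
        return ("auto", label)
    return ("human_review", label)

# Example predictions from a flood/not-flood classifier (made-up values).
predictions = [
    ("img_001", "flood", 0.95),      # clearly flooded: accepted automatically
    ("img_002", "not_flood", 0.55),  # ambiguous: routed to a human reviewer
    ("img_003", "flood", 0.81),
]

for image_id, label, confidence in predictions:
    route, _ = route_prediction(image_id, label, confidence)
    print(image_id, route)
```

The key design choice is the threshold: set it too high and humans are flooded with easy cases; too low and the classifier's mistakes slip through unreviewed.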

“We met with the MIT Lincoln Laboratory team in June 2019 and recognized shared goals around improving annotation models for satellite and LADI objects, as we’ve been developing similar computer vision solutions here at AWS,” said Kumar Chellapilla, general manager of Human-in-the-Loop Machine Learning Services at AWS. “We connected the team with the AWS Machine Learning Research Awards, now part of the Amazon Research Awards program, and the AWS Open Data Program and funded MTurk credits for the development of MIT Lincoln Laboratory’s ground truth dataset.”

According to Penn State, this work has led to a trained model with an expected accuracy of 79%. The students’ code and models are now being integrated into the LADI project as an open-source baseline classifier and tutorial.

“During a disaster, a lot of data can be collected very quickly,” said Andrew Weinert, a staff research associate at Lincoln Laboratory who helped facilitate the project with the College of IST. “But collecting data and actually putting information together for decision-makers is a very different thing.”

Amazon also supported the development of a user interface for urban search and rescue teams, which enabled MIT Lincoln Laboratory to pilot real-time Civilian Air Patrol image annotation during Hurricane Dorian.

This fall, the same MIT team will build a pipeline for CAP data using Amazon Augmented AI, or A2I, to route low-confidence results to MTurk for human review.

“A2I is like ‘phone a friend’ for the model,” said Weinert. “It helps us route the images that can’t confidently be labeled by the classifier to MTurk Workers for review. Ultimately, developing the tools that can be used by first responders to get help to those that need it.”

Sources:

SiliconANGLE News

https://siliconangle.com/2020/12/17/mit-penn-state-join-forces-aws-help-classify-images-flood-disaster-zones/

Provided by the IKCEST Disaster Risk Reduction Knowledge Service System
