Humanitarian teams in Turkey and Syria are using machine learning to quickly scope out earthquake damage and prioritize rescue efforts. xView2, an open-source project launched in 2019 and sponsored by the Pentagon's Defense Innovation Unit and Carnegie Mellon University's Software Engineering Institute, has been developed with many research partners, including Microsoft and the University of California, Berkeley. It uses machine-learning algorithms in conjunction with satellite imagery from outside providers to identify building and infrastructure damage in a disaster area and categorize its severity much faster than current methods allow.
xView2 has also been deployed elsewhere in the disaster zone, where workers on the ground were "able to find areas that were damaged that they were unaware of," Gupta says. He notes that Turkey's Disaster and Emergency Management Presidency, the World Bank, the International Federation of the Red Cross, and the United Nations World Food Programme have all used the platform in response to the earthquake.
How AI can help
The algorithms employ a technique similar to object recognition called "semantic segmentation," which evaluates each pixel of an image and its relationship to adjacent pixels to draw conclusions.
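In spirit, semantic segmentation assigns a class to every pixel based on the pixel itself and its local context. The sketch below is a deliberately tiny illustration of that per-pixel idea, not xView2's actual model (which uses deep neural networks); the function name, threshold, and toy image are all invented for illustration.

```python
import numpy as np

def segment_damage(image, threshold=0.5):
    """Toy per-pixel classifier: label each pixel by checking both its
    own value and the mean of its four neighbors against a threshold.
    Real segmentation models learn these relationships with deep nets."""
    padded = np.pad(image, 1, mode="edge")
    # Mean of the four adjacent pixels (up, down, left, right).
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    # A pixel is "damaged" (1) only when it and its neighborhood
    # both exceed the threshold; otherwise "intact" (0).
    return ((image > threshold) & (neigh > threshold)).astype(np.uint8)

# Toy 3x3 "image": high values cluster in the top-left corner.
img = np.array([[0.9, 0.8, 0.1],
                [0.7, 0.9, 0.2],
                [0.1, 0.2, 0.1]])
mask = segment_damage(img, threshold=0.4)
```

Here the bright top-left cluster is labeled damaged while isolated values are not, which is the sense in which neighboring pixels influence each pixel's classification.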
Below, you can see snapshots of how this looks on the platform, with satellite images of the damage on the left and the model’s assessment on the right—the darker the red, the worse the wreckage. Atishay Abbhi, a disaster risk management specialist at the World Bank, tells me that this same degree of assessment would typically take weeks and now takes hours or minutes.
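The "darker red means worse wreckage" rendering can be pictured as mapping each pixel's damage class to a point on a red ramp. A minimal sketch, assuming a hypothetical ordinal 0 to 3 damage scale; the function and scale below are illustrative, not xView2's actual API or color scheme.

```python
import numpy as np

def severity_to_rgb(classes):
    """Map a 2-D array of damage classes (hypothetical scale:
    0 = no damage ... 3 = destroyed) to an RGB overlay where
    deeper reds mean worse damage."""
    classes = np.asarray(classes, dtype=float)
    peak = classes.max()
    intensity = classes / peak if peak else classes
    rgb = np.zeros(classes.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = 255                                        # red fixed
    rgb[..., 1] = (255 * (1 - intensity)).astype(np.uint8)   # fade green
    rgb[..., 2] = (255 * (1 - intensity)).astype(np.uint8)   # fade blue
    return rgb

# An undamaged pixel renders white, a destroyed one pure red.
overlay = severity_to_rgb([[0, 3]])
```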
This is an improvement over more traditional disaster assessment systems, in which rescue and emergency responders rely on eyewitness reports and calls to quickly identify where help is needed. In some more recent cases, fixed-wing aircraft such as drones have flown over disaster areas with cameras and sensors, providing data for humans to review, but this can still take days, if not longer. The typical response is further slowed by the fact that responding organizations often keep their own siloed data catalogues, making it hard to build a standardized, shared picture of which areas need help. xView2 can create a shared map of the affected area in minutes, which helps organizations coordinate and prioritize responses, saving time and lives.
This technology, of course, is far from a cure-all for disaster response. xView2 faces several big challenges that consume much of Gupta's research attention. First and most important is the model's reliance on satellite imagery, which delivers clear photos only during the day, when there is no cloud cover, and when a satellite is overhead. The first usable images out of Turkey didn't come until February 9, three days after the first quake. And far fewer satellite images are taken of remote and less economically developed areas, such as just across the border in Syria. To address this, Gupta is researching new imaging techniques like synthetic aperture radar, which creates images using microwave pulses rather than light waves.
Second, while the xView2 model is up to 85 or 90% accurate in evaluating damage and its severity, it cannot spot damage on the sides of buildings, since satellite images have an aerial perspective.
Lastly, Gupta says getting on-the-ground organizations to use and trust an AI solution has been difficult. “First responders are very traditional,” he says. “When you start telling them about this fancy AI model, which isn’t even on the ground and it’s looking at pixels from like 120 miles in space, they’re not gonna trust it whatsoever.”
MIT Technology Review