
Dell’s deep learning model to support coral reef conservation

Dell Technologies has developed a deep learning model to speed up the labelling and analysis of images of Australia’s Great Barrier Reef in a move to support coral reef conservation

Dell Technologies has developed a deep learning model to speed up the labelling and analysis of coral reef images taken by Citizens of the Great Barrier Reef, an Australia-based conservation organisation.

The underwater images, captured by divers and snorkellers who go out to sea on dive boats and other vessels, are used in the annual Great Reef Census (GRC) to ascertain the health of the Great Barrier Reef, which stretches 2,400km along the east coast of Australia.

In the first year of the census, some 13,000 images were collected from 240 reefs, making it one of the largest citizen science projects in the world, according to Andy Ridley, founding CEO of Citizens of the Great Barrier Reef.

Making sense of the images, however, can be time-consuming. In the first census, citizen scientists took 1,516 hours to analyse all of the images, with each volunteer spending about seven minutes per image. There is also the issue of accuracy, as citizen scientists tend to be less accurate in identifying reefs than professional scientists.

That’s where deep learning steps in: Dell’s deep learning model can classify a reef’s borders using semantic segmentation in less than 10 seconds, and a citizen scientist would then verify the accuracy of the labelling.
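In broad terms, semantic segmentation assigns every pixel of an image to a class, after which the boundary between reef and non-reef pixels can be traced. The sketch below illustrates the idea with hard-coded toy scores and two classes (water and reef); it is not Dell's model, and a real system would obtain the per-pixel scores from a trained neural network.

```python
import numpy as np

# Toy per-pixel class scores for a 4x4 image with 2 classes:
# 0 = water, 1 = reef. In practice a trained segmentation
# network would produce these scores; here they are hard-coded.
scores = np.zeros((4, 4, 2))
scores[..., 0] = 1.0          # default: every pixel leans "water"
scores[1:3, 1:3, 1] = 2.0     # central 2x2 block scores higher as "reef"

# Semantic segmentation: assign each pixel its highest-scoring class.
mask = scores.argmax(axis=-1)

def border_pixels(mask, cls=1):
    """Return reef pixels that touch at least one non-reef pixel."""
    border = set()
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] != cls:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or mask[ny, nx] != cls:
                    border.add((y, x))
    return border

print(sorted(border_pixels(mask)))  # the 2x2 reef block is all border here
```

The traced border is what a citizen scientist would then eyeball against the photograph to confirm or correct the model's labelling.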

“Dell Technologies is working through the human-machine partnership to take what was close to 144 different categories of reef organisms and divide them into subcategories,” said Danny Elmarji, vice-president of presales at Dell Technologies Asia-Pacific and Japan.

“And we were able to take a shortlist of 13 categories, and repeatedly refine them until there were only five critical categories, making it really easy for humans to verify what is a reef or not,” he added.
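The consolidation Elmarji describes amounts to a many-to-one mapping from fine-grained labels to a handful of critical categories. The article does not name the 13 shortlisted or five critical categories, so the labels below are invented purely for illustration:

```python
# Hypothetical mapping from fine-grained reef labels to five critical
# categories. The category names are invented for illustration; the
# article does not list the actual categories used.
CRITICAL = {
    "branching coral": "hard coral",
    "boulder coral":   "hard coral",
    "plate coral":     "hard coral",
    "soft coral":      "soft coral",
    "macroalgae":      "algae",
    "rubble":          "substrate",
    "sand":            "substrate",
    "other":           "other",
}

def simplify(label: str) -> str:
    """Collapse a fine-grained label into one of the critical categories."""
    return CRITICAL.get(label, "other")

print(simplify("plate coral"))  # prints "hard coral"
```

Reducing the label space this way lowers the cognitive load on volunteers: verifying one of five coarse categories is far quicker than choosing among 144 fine-grained ones.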

Aruna Kolluru, chief technologist for artificial intelligence at Dell Technologies Asia-Pacific and Japan, said the deep learning model will get more accurate with more training, and that work is being done to enhance accuracy by working with new model architectures and datasets.

For now, the model has achieved an accuracy of 67%, which Kolluru said was a “good model” as it is hard to achieve high accuracy for objects in nature. It is expected to be fully deployed in November 2022 and is currently undergoing further testing.

The Australian Institute of Marine Science (Aims) has also been monitoring the health of coral reefs in Australia and other marine ecosystems, using computer vision through a partnership with Accenture.

Using an Aims database of 6,000 images from six different ocean regions, the technology helps to automate the analysis of coral reef images, so that researchers can understand the response of specific coral species to stressful scenarios such as bleaching events, among other things.

These images were previously analysed and labelled manually, limiting the scale to which images could be analysed and ultimately slowing down preservation efforts, said Richard McNiff, rapid innovation director at Accenture’s The Dock, a research and development facility in Dublin, Ireland.

“Using computer vision to automate this process will provide the teams with the means to label images much quicker and with more detail,” said McNiff. “This will also allow for more advanced image analysis – and at scale.”
