Red blood cell (RBC) segmentation and classification from microscopic images is a crucial step in the diagnosis of sickle cell disease (SCD). In this work, we adopt a deep-learning-based semantic segmentation framework to solve the RBC classification task. A major challenge for robust segmentation and classification is the large variation in the size, shape, and viewpoint of the cells, combined with the low image quality caused by noise and artifacts. To address these challenges, we add deformable convolution layers to the classic U-Net structure, yielding the deformable U-Net (dU-Net). The U-Net architecture has been shown to offer accurate localization for semantic image segmentation, while deformable convolution enables free-form deformation of the feature-learning process, making the network more robust to varied cell morphologies and imaging settings. dU-Net is tested on microscopic red blood cell images from patients with sickle cell disease. Results show that dU-Net achieves the highest accuracy on both binary segmentation and multi-class semantic segmentation tasks, compared with both unsupervised and state-of-the-art supervised deep-learning segmentation methods. A detailed investigation of the segmentation results further indicates that the performance improvement stems mainly from the deformable convolution layers, which better separate touching cells, discriminate background noise, and predict correct cell shapes without any shape priors.
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Electrical and Electronic Engineering
- Health Information Management
Keywords
- automated semantic segmentation
- deformable convolution
- sickle cell disease