TY - CONF
T1 - Automatic X-ray scattering image annotation via double-view Fourier-Bessel convolutional networks
AU - Guan, Ziqiao
AU - Qin, Hong
AU - Yager, Kevin
AU - Choo, Youngwoo
AU - Yu, Dantong
N1 - Funding Information:
The authors wish to thank Boyu Wang for the residual network features. This research utilizes experimental samples, real image data, and computing resources of the Center for Functional Nanomaterials, the National Synchrotron Light Sources I and II, and the Scientific Data and Computing Center (SDCC) at Brookhaven National Laboratory under Contract No. DE-SC0012704. This work was partially supported by NSF IIS-1715985. The authors would like to thank Stony Brook Research Computing and Cyberinfrastructure, and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance SeaWulf computing system, which was made possible by a $1.4M National Science Foundation grant (#1531492).
Publisher Copyright:
© 2018. The copyright of this document resides with its authors.
PY - 2019
Y1 - 2019
N2 - X-ray scattering is a key technique for material analysis and discovery. Modern x-ray facilities produce x-ray scattering images at such an unprecedented rate that machine-aided intelligent analysis is required for scientific discovery. This paper articulates a novel physics-aware image feature transform, the Fourier-Bessel transform (FBT), in conjunction with deep representation learning, to tackle the problem of annotating x-ray scattering images with a diverse label set of physics characteristics. We devise a novel joint inference model, the Double-View Fourier-Bessel Convolutional Neural Network (DVFB-CNN), to integrate feature learning in both the polar frequency and image domains. For polar frequency analysis, we develop an FBT estimation algorithm for partially observed x-ray images and train a dedicated CNN to extract structural information from the FBT. We demonstrate that our deep Fourier-Bessel features complement standard convolutional features well, and that the joint network (i.e., DVFB-CNN) improves mean average precision by 13% in multilabel annotation. We also conduct transfer learning on real experimental datasets to further confirm that our joint model generalizes well.
AB - X-ray scattering is a key technique for material analysis and discovery. Modern x-ray facilities produce x-ray scattering images at such an unprecedented rate that machine-aided intelligent analysis is required for scientific discovery. This paper articulates a novel physics-aware image feature transform, the Fourier-Bessel transform (FBT), in conjunction with deep representation learning, to tackle the problem of annotating x-ray scattering images with a diverse label set of physics characteristics. We devise a novel joint inference model, the Double-View Fourier-Bessel Convolutional Neural Network (DVFB-CNN), to integrate feature learning in both the polar frequency and image domains. For polar frequency analysis, we develop an FBT estimation algorithm for partially observed x-ray images and train a dedicated CNN to extract structural information from the FBT. We demonstrate that our deep Fourier-Bessel features complement standard convolutional features well, and that the joint network (i.e., DVFB-CNN) improves mean average precision by 13% in multilabel annotation. We also conduct transfer learning on real experimental datasets to further confirm that our joint model generalizes well.
UR - http://www.scopus.com/inward/record.url?scp=85084011491&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85084011491&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85084011491
T2 - 29th British Machine Vision Conference, BMVC 2018
Y2 - 3 September 2018 through 6 September 2018
ER -