Abstract
There is a contradiction between the evolution rate of materials and the time resolution of characterization in the in situ synchrotron radiation computed tomography (SR-CT) characterization of ultrafast evolution processes. An ultra-sparse angle sampling strategy is an effective method for improving the time resolution, but accurate reconstruction under sparse sampling conditions has always been a bottleneck problem. In recent years, convolutional neural networks have shown outstanding advantages in sparse-angle CT reconstruction given the development of deep learning. However, existing approaches do not consider the expression of high-frequency details in neural networks, limiting their application in accurate SR-CT characterization. A novel high-frequency information constrained deep learning network (HFIC-Net) is proposed in response to this problem. Additional high-frequency information constraints are added to improve the accuracy of the reconstruction results. Further, a series of numerical reconstruction experiments is conducted to verify this new method, and the results indicate that the HFIC-Net method effectively improves reconstruction quality. This new method uses only eight angle projections to achieve the reconstruction effect of the filtered back projection (FBP) method with 360 projections. The results of the HFIC-Net method demonstrate clear boundaries and accurate detailed structures, correcting the misinformation produced by other methods. For quantitative evaluation, the SSIM used to evaluate image structural similarity is increased from 0.1951, 0.9212, and 0.9308 for FBP, FBP-Conv, and DDC-Net, respectively, to 0.9620 for HFIC-Net. Finally, the results on actual SR-CT experimental data indicate that the new method can suppress artifacts and achieve accurate reconstruction, and it is suitable for the accurate in situ SR-CT characterization of ultrafast evolution processes.
Full Text
High-frequency emphasized neural network reconstruction method for in situ synchrotron radiation ultrafast computed tomography characterization

Jing-Wei Li, Yu Xiao, Yong-Cun Li, Xiao-Fang Hu, Guo-Hao Du, and Feng Xu

Chinese Academy of Sciences Key Laboratory of Mechanical Behavior and Design of Materials, University of Science and Technology of China, Hefei 230027, China
College of Mechanics, Shanxi Key Lab of Material Strength & Structural Impact, Taiyuan University of Technology, Taiyuan 030024, China
Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201204, China
Keywords
Accurate SR-CT characterization, CT reconstruction, Sparse angle CT reconstruction problem, High-frequency information constrained, Deep learning
INTRODUCTION
Three-dimensional microstructural visualization of ultrafast evolution processes is very important for studying material mechanisms. Synchrotron radiation computed tomography (SR-CT) technology [ ] can be used to perform in situ high-resolution characterization of internal microstructures [ ]. Figure (a) presents a schematic of the in situ observation of the ultrafast evolution process aided by SR-CT.
In this process, there is a contradiction between the evolution rate of materials and the time resolution of SR-CT characterization. According to the Tuy–Smith data completeness conditions, it is necessary to continuously collect projection data within the complete 180° angular range during SR-CT acquisition [ ]. This process typically requires a long time; however, the microstructural evolution of materials develops rapidly. For example, in laser additive manufacturing, the
This work was supported by the National Natural Science Foundation of China (No. 12027901, No. 12041202) and the Synchrotron Radiation Joint Fund of the University of Science and Technology of China (KY2090000059, KY2090000054).
molten pool evolves in a matter of milliseconds [ ]. In this case, the internal microstructure changes rapidly during SR-CT acquisition, generating an incorrect reconstructed tomogram. Therefore, improving the time resolution of the in situ SR-CT characterization of ultrafast evolution processes while ensuring the accuracy of the reconstruction results is important.
Therefore, a contradiction exists between the evolution time of materials and the sampling time of the CT system in the CT characterization of rapid evolution, and reducing the sampling time of CT systems is an effective means to alleviate this contradiction [ ]. As shown in Fig. (a), the CT sampling process involves rotating the sample to obtain a series of projection data within a given angular range. In this case, reducing the number of projection images by sampling at sparse angular intervals is an effective approach for shortening the CT sampling time [ ]. However, under ultra-sparse sampling conditions, the quality of the reconstruction results obtained using traditional methods (such as the classical filtered back projection (FBP) method [ ]) is not satisfactory. As shown in Fig. (b), compared with full angle sampling, there is erroneous information in the internal microstructure. Therefore, studying exact reconstruction methods under ultra-sparse angle sampling conditions is necessary to improve the time resolution of the in situ SR-CT characterization of ultrafast evolution processes and to ensure the accuracy of the reconstructed results.
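As a toy illustration of the sparse-sampling idea above, the sketch below picks a small number of evenly spaced projection angles from the half-rotation range. The eight-angle count follows the paper; the helper name and the evenly spaced scheme are illustrative assumptions, not the paper's exact acquisition protocol.

```python
# Sketch: choosing an ultra-sparse set of projection angles.
# Assumes evenly spaced sampling over the 180-degree half-rotation;
# the helper name is ours, the 8-view count follows the paper.

def sparse_angles(n_views, full_range_deg=180.0):
    """Return n_views evenly spaced projection angles in degrees."""
    step = full_range_deg / n_views
    return [i * step for i in range(n_views)]

full = sparse_angles(360)   # dense reference scan: 0.5-degree steps
ultra = sparse_angles(8)    # ultra-sparse scan: 22.5-degree steps

print(ultra)  # [0.0, 22.5, 45.0, 67.5, 90.0, 112.5, 135.0, 157.5]
```

Going from 360 views to 8 cuts the number of exposures by a factor of 45, which is the source of the time-resolution gain discussed above.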
(Color online) (a) Schematic of the in situ observation of an ultrafast evolution process with SR-CT. Contradictions exist between the rapid evolution process and the SR-CT acquisition time resolution. (b) Filtered back projection reconstruction results under eight-angle sparse sampling conditions. (c) ART-TV reconstruction results under eight-angle sparse sampling conditions.
Taking the widely used ART-TV [ ] algorithm as an example, the gradient descent method is applied to improve the results obtained by ART to resolve this problem [ ]. As indicated in Fig. (c), the reconstruction results obtained using the ART-TV algorithm show that the internal detail structure is submerged under ultra-sparse angle sampling conditions, and it is difficult to obtain accurate reconstructed images. In other words, the image quality degradation caused by the lack of sampling data is difficult to overcome in the ultra-sparse angle SR-CT reconstruction problem.
In recent years, deep learning has shown outstanding advantages in the field of image processing given the development of big data and the improvement in computer performance [ ]. Convolutional neural networks (CNN) have been widely used in the field of image super-resolution reconstruction [ ] and to solve the problem of sparse-angle CT reconstruction [ ]. Wang [ ] reviewed the current state of deep learning and CT imaging technology and indicated that their effective combination can further promote the development of CT imaging technology. In such deep learning methods, researchers focus on optimizing the sinogram or tomogram domain using CNNs. Jin et al. proposed the FBP-Conv [ ] method, which uses the tomogram reconstructed by FBP as the input for the neural network; through continuous training, the output is made as close as possible to the real label. Dong et al. [ ] used a deep neural network to optimize the sinogram of an incomplete angle set in the sinogram domain and reconstructed it directly with FBP, achieving good results.
Subsequently, some researchers started focusing on optimization ideas based on the CT reconstruction process. For example, Wang et al. [ ] utilized deep neural networks to directly map sparse angle sinograms to tomograms; this method, referred to as the dual domain constrained network (DDC-Net), utilized a deep learning network to optimize both the sinogram and tomogram domains, achieving positive effects. Li et al. proposed QuadNet [ ], which utilizes FFC transformation to provide a global receptive field for sinogram restoration and image refinement. GloReDi [ ] used intermediate-view reconstructed images to provide additional information for the images while expanding the receptive field. Considering image details more carefully could further enhance the potential application of deep neural networks in accurate SR-CT characterization.
A new reconstruction method, referred to as the high-frequency information constrained neural network (HFIC-Net), is proposed in this research to solve the problem of accurate ultra-sparse angle SR-CT characterization. Analysis of the SR-CT imaging system reveals a typical problem: the detailed information of the tomogram is often submerged in the projected sinogram. If this high-frequency information cannot be identified in the sinogram domain, the lost detail information cannot be recovered in the subsequent tomogram domain optimization. The detailed information of the tomogram contains important structures, and therefore, accurate SR-CT characterization has strict requirements for detailed information. Thus, we added a “high-frequency information” constraint based on the DDC-Net idea to improve the expression of detailed information. A series of numerical reconstruction experiments is conducted to verify the effectiveness of this new method; the results of the HFIC-Net method are improved compared with FBP, FBP-Conv, and DDC-Net. The proposed method uses only eight angle projections to achieve the reconstruction effect of the FBP method with 360 projections. For a quantitative evaluation, the SSIM used to evaluate image structural similarity is increased from 0.1951, 0.9212, and 0.9308 for FBP, FBP-Conv, and DDC-Net, respectively, to 0.9620 for HFIC-Net. Finally, SR-CT experimental images are used to verify this reconstruction method. The novel HFIC-Net method can restore image details and suppress artifacts, and therefore, it is considered suitable for the in situ SR-CT characterization of ultrafast evolution processes.
The rest of this paper is organized as follows. In the next section, the launching point and network structure of this new method are introduced. Then, the effectiveness of this method is verified using simulated and real SR-CT data. Finally, the discussion and conclusions are summarized in the last section.

NEW RECONSTRUCTION METHOD

Analyzing the principle of the SR-CT imaging system is essential to develop a new method for ultra-sparse angle SR-CT reconstruction. A typical problem in the SR-CT imaging system is discovered through this analysis: the detailed information of the tomogram is often submerged in a projected sinogram. If this high-frequency information is not observed in the sinogram domain, the lost details cannot be recovered in the subsequent tomogram domain optimization,
which can distort the reconstruction results. In addition, the idea of adding high-frequency information constraints in the CNN is proposed. The model and framework of HFIC-Net are introduced; HFIC-Net arranges CNNs in the sinogram and tomogram domains and trains them via the back propagation of the gradient descent method.
Launching point of developing the new method: Limitations of the current idea
Improving the accuracy of SR-CT reconstruction results is the premise for further research, given that the detailed information in the reconstruction results typically contains important structures. A detailed analysis of the CT imaging principles is necessary to develop a new ultra-sparse angle SR-CT optical measurement method. A schematic of projection acquisition conducted using SR-CT is shown in Fig. 2(a). The mathematical model for generating a projected sinogram is the line-integral (Radon) transform,

p(\theta, t) = \int_{L} f(x, y)\, \mathrm{d}l,

where p and f represent the projected integral intensity along the X-ray path L and the target to be detected, respectively. The sinogram is obtained by integrating the tomogram, and therefore, some detailed signals in the tomogram are buried in the sinogram. The curve shown in Fig. (b) can be obtained by integrating the tomogram along the X-ray direction. Regions of interest (ROI) are marked by red arrows in Fig. (a).
The difference between the red and blue curves in Fig. (b) is whether there are small particles in the ROIs of the tomogram. The relative difference between the red and blue curves is only 1.507%, which is indeed minimal. Tiny structural information in the tomogram can thus easily be buried in the projected sinogram.
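This burying effect can be illustrated with a toy line integral: a single slightly brighter pixel changes a ray sum only marginally. The numbers below are illustrative, not the paper's data.

```python
# Sketch: why a small particle is "buried" in the projected integral.
# A projection value is the sum of attenuation along a ray; adding a tiny
# particle changes that sum only slightly. Toy numbers, not the paper's data.

def ray_sum(row):
    """Line integral along one ray, discretized as a pixel sum."""
    return sum(row)

background = [1.0] * 20          # a ray crossing 20 background pixels
with_particle = background[:]
with_particle[10] = 1.5          # one pixel brightened by a small particle

p0 = ray_sum(background)         # 20.0
p1 = ray_sum(with_particle)      # 20.5
rel_diff = (p1 - p0) / p0        # 0.025 -> only a 2.5% change in the projection

print(round(rel_diff * 100, 1))  # 2.5
```

A 50% local contrast change thus shows up as only a 2.5% change in the measured projection value, mirroring the sub-2% difference reported for the curves above.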
However, accurate representation of detailed information has not been considered by current deep learning approaches. The sinogram is the integral of the tomogram along the X-ray direction, and therefore, the high-frequency information of internal details can easily be lost. Considering the DDC-Net idea as an example, the lost details cannot be recovered in subsequent optimization in the tomogram domain if the high-frequency information cannot be observed in the sinogram domain. This leads to a distorted reconstruction result, limiting the application potential in accurate SR-CT characterization. Therefore, adding high-frequency information constraints to deep neural networks is necessary to improve the representation of detailed information.
The gradient transformation of the tomogram can highlight the expression of detailed information in its sinogram. The integral curve in Fig. (d) reflects this characteristic. The contribution of detailed information to the integral value is indicated by the difference between the peak values of the labeled points, which refer to the integral values with and without the small particle in the ROI region. Compared with 1.507% in Fig. (b), the difference ratio of the green marker points in Fig. (d) is as high as 14.23%. Therefore, we add “high-frequency information” constraints to the neural network to learn the detailed information of the tomogram.
In response to this problem, “high-frequency information” constraints are added to drive the learning direction of the neural network, as shown in Fig. .

(a) Tomogram; the red arrow marks small particles of interest. (b) Integral curve of (a) along the X-ray direction. (c) Gradient of the tomogram. (d) Integral curve of (c) along the X-ray direction. (e) Schematic of the process for extracting high-frequency information.
The “high-frequency information” constraint is implemented in two steps: first, a gradient transformation is performed on the tomogram to obtain the gradient image; second, a Radon transformation is performed on the gradient image. In recent years, the continuous development of deep learning has provided new ideas to solve severely ill-posed problems such as ultra-sparse angle SR-CT reconstruction. The dual domain learning idea is used considering the physical process of CT reconstruction. Given this context, emphasizing the expression of high-frequency information in a neural network can help improve the accuracy of the reconstruction results. Therefore, designing a neural network that considers high-frequency detail information is a primary task for alleviating the problem of ultra-sparse angle SR-CT reconstruction.
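A minimal sketch of this two-step extraction follows, with forward differences standing in for the gradient transform and a single 0° row integral standing in for the Radon transform; both stand-ins, and all function names, are our assumptions for illustration.

```python
# Sketch of the two-step "high-frequency information" constraint:
# (1) gradient transform of the tomogram, (2) Radon-like line integral
# of the gradient image. Forward differences and one 0-degree projection
# stand in for the full transforms; names and sizes are illustrative.

def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a 2D list-of-lists image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def project_rows(img):
    """0-degree projection: integrate each row (one detector bin per row)."""
    return [sum(row) for row in img]

img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0                     # one small bright particle

hf_sinogram_column = project_rows(gradient_magnitude(img))
print(hf_sinogram_column)
```

Because the flat background differentiates to zero, the particle dominates the projected gradient signal, which is exactly the amplification effect the constraint exploits.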
Finally, a novel method suitable for accurate SR-CT characterization under ultra-sparse angle conditions is proposed.
This method is referred to as HFIC-Net.

Deployment of the high-frequency information constrained neural network

The HFIC-Net framework is illustrated in Fig. .
HFIC-Net comprises two deep neural networks, which drive learning in the sinogram and tomogram domains, respectively. In the sinogram domain, the first mapping, G1, restores the sparse angle sinogram to a high-quality sinogram. Given an ultra-sparse angle sinogram as the input of HFIC-Net, it is converted to a fully sampled sinogram by the mapping G1. Subsequently, this sinogram is converted to the tomogram domain through the FBP algorithm to obtain the reconstructed result. Next, high-frequency information is extracted from this result. Then, the second mapping, G2, performs super-resolution reconstruction on the tomogram. Finally, the loss of the HFIC-Net, comprising the sinogram content loss, the high-frequency information loss, and the tomogram content loss, is back-propagated through gradient descent.

(Color online) (a) Architecture of the proposed HFIC-Net. (b) Network structures of the sub-networks of the HFIC-Net, which are five-layer deep neural networks with the same input and output sizes.

Sinogram content loss: In the sinogram domain, the mean square error (MSE) loss between the high-quality sinogram and the real label is used as the sinogram content loss. The richness of the sampled projection information directly determines the quality of the tomogram. The quality of the reconstructed tomogram can be effectively improved if the degradation degree of the projected sinogram can be reduced.
Mathematically, the sinogram content loss of HFIC-Net can be expressed as
L_1(\theta) = \frac{1}{N} \sum_{i=1}^{N} \| G_1(\theta, x_i) - y_i \|^2,    (1)
where x_i, y_i, G_1, N, and \theta represent the input sparse angle projection, the real full angle projected sinogram, the mapping in the sinogram domain, the number of training data pairs, and the training parameters of the entire network, respectively.
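The averaged squared-error form of Eq. (1) can be sketched as below, with flat Python lists standing in for sinogram tensors; the function name is ours.

```python
# Minimal sketch of the mean-square-error sinogram content loss of Eq. (1):
# average over the N training pairs of the squared L2 distance between the
# restored sinogram and its full-angle label. Lists stand in for tensors.

def mse_loss(predictions, labels):
    """Mean over training pairs of the squared L2 distance per pair."""
    total = 0.0
    for pred, lab in zip(predictions, labels):
        total += sum((p - l) ** 2 for p, l in zip(pred, lab))
    return total / len(predictions)

preds = [[1.0, 2.0], [3.0, 5.0]]   # restored sinograms for N = 2 pairs
labels = [[1.0, 2.0], [3.0, 4.0]]  # full-angle labels

print(mse_loss(preds, labels))  # 0.5
```

The same loss shape is reused for the high-frequency and tomogram terms in Eqs. (2) and (3), only with different inputs and labels.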
High-frequency information loss: The high-frequency information loss is added to the network considering the importance of internal detail information. The MSE loss between the high-frequency information feature map and the real label is used as the high-frequency information loss. Mathematically, the high-frequency information loss of HFIC-Net can be expressed as
L_2(\theta) = \frac{1}{N} \sum_{i=1}^{N} \| H(G_1(\theta, x_i)) - h_i \|^2,    (2)

where x_i, h_i, G_1, H, N, and \theta represent the input sparse angle projection, the real high-frequency label, the mapping in the sinogram domain, the high-frequency information extraction operation, the number of training data pairs, and the training parameters of the entire network, respectively.
Tomogram content loss: In the tomogram domain, the mean square error between the high-quality tomogram generated by the tomogram-domain mapping and the real label is used as the tomogram content loss. Some small errors in the sinogram domain are considerably magnified after FBP reconstruction. Therefore, further improvement in the tomogram domain is necessary. Mathematically, the tomogram loss of HFIC-Net can be expressed as
L_3(\theta) = \frac{1}{N} \sum_{i=1}^{N} \| G_2(\theta, F(G_1(\theta, x_i))) - f_i \|^2,    (3)

where x_i, F, f_i, G_2, N, and \theta represent the input sparse angle projection, the FBP reconstruction operation, the real tomogram, the mapping in the tomogram domain, the number of training data pairs, and the training parameters of the entire network, respectively.
The final objective of the proposed HFIC-Net is defined by combining these three losses as
L_{loss}(\theta) = L_1(\theta) + L_2(\theta) + L_3(\theta).    (4)
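The overall objective can be sketched structurally as follows. Here G1, FBP, H, and G2 are stand-in stubs (identity or simple operators) rather than the paper's trained networks and classical operators, and the three squared-error terms are summed with unit weights, matching the weight settings stated in the training configuration.

```python
# Structural sketch of the HFIC-Net objective of Eqs. (1)-(4).
# G1, fbp, H, and G2 are stubs standing in for the real networks/operators.

def G1(sino):           # sinogram-domain restoration network (stub: identity)
    return sino

def fbp(sino):          # filtered back projection bridge (stub: identity)
    return sino

def H(tomo):            # high-frequency extraction (stub: deviation from mean)
    m = sum(tomo) / len(tomo)
    return [v - m for v in tomo]

def G2(tomo):           # tomogram-domain refinement network (stub: identity)
    return tomo

def sq_err(a, b):
    """Squared L2 distance between two flat signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def total_loss(sparse_sino, sino_label, hf_label, tomo_label):
    full_sino = G1(sparse_sino)            # sinogram-domain restoration
    tomo = fbp(full_sino)                  # classical reconstruction bridge
    l1 = sq_err(full_sino, sino_label)     # sinogram content loss, Eq. (1)
    l2 = sq_err(H(tomo), hf_label)         # high-frequency loss, Eq. (2)
    l3 = sq_err(G2(tomo), tomo_label)      # tomogram content loss, Eq. (3)
    return l1 + l2 + l3                    # unit weights, Eq. (4)

x = [1.0, 2.0, 3.0]
print(total_loss(x, x, H(x), x))  # 0.0 when every target matches
```

The point of the sketch is the wiring: the high-frequency term shares the same forward path as the tomogram term, so gradients from all three losses flow back into the sinogram-domain mapping.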
Network structure of the HFIC-Net: As shown in Fig. , both sub-networks are composed of an encoding–decoding neural network. The encoding module is used to extract feature information from the input image. The encoder comprises five convolutional layers with a stride of 2. The activation functions of these five convolutional layers are all ReLU, and batch normalization is performed after the activation of each layer.
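As a side note, the downsampling effect of five stride-2 layers on the feature-map size can be sketched numerically; the 256-pixel input size and "same"-style padding are assumptions for illustration, not values stated in this excerpt.

```python
# Sketch: how five stride-2 convolutions shrink the encoder feature map.
# Assumes "same" padding, so each layer halves the spatial size (rounded up).
# The 256x256 input size is illustrative, not taken from the paper.

import math

def encoder_sizes(input_size, n_layers=5, stride=2):
    """Spatial size after each of n_layers stride-`stride` convolutions."""
    sizes = [input_size]
    for _ in range(n_layers):
        sizes.append(math.ceil(sizes[-1] / stride))
    return sizes

print(encoder_sizes(256))  # [256, 128, 64, 32, 16, 8]
```

The five matching stride-2 deconvolutions in the decoder then walk this sequence back up, which is why the output size equals the input size.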
The decoder comprises five deconvolution layers; the decoding module recombines the acquired feature information into an image. The first four deconvolution layers have a stride of 2, with the channel numbers ending at 64; the fifth layer also has a stride of 2, with 3 output channels. The activation functions of these five layers are all ReLU. The output size of HFIC-Net is the same as the input size.

RESULTS AND DISCUSSION

The effectiveness of the proposed method is verified by a series of simulated and real SR-CT experimental data. HFIC-Net can use only eight angle projections to achieve the reconstruction effect of the FBP method in 360 projections.
Meanwhile, the HFIC-Net shows outstanding advantages in the accurate reconstruction of internal image details and corrects erroneous reconstruction information. The results on real SR-CT experimental data show that the ultra-sparse angle CT reconstruction method proposed in this paper can alleviate the contradiction between the evolution rate and the SR-CT time resolution in ultrafast evolution processes. This new method suppresses artifacts and ensures the accuracy of the reconstruction results, making it suitable for accurate in situ SR-CT characterization of ultrafast evolution processes.
A. Training configuration and performance evaluation
The HFIC-Net was verified using simulation and real ex- perimental data.
All training work was conducted on an Intel(R) Core(TM) i7-8700 @ 3.20 GHz CPU and an NVIDIA RTX 2070 GPU. All experiments were run in a Python 3.7 environment, with CUDA 10.0 and cuDNN v7.4 for acceleration, and the TensorFlow deep learning framework was used to implement the proposed method. We applied the Adam optimizer for the HFIC-Net. The learning rate was fixed at 0.0002, and the exponential decay rate for the first moment estimate in the Adam optimizer was 0.5. The weight balance parameters of the different losses were set to 1.
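For reference, a single Adam update with the stated settings can be sketched as follows. The second-moment decay rate is garbled in the source, so the common TensorFlow default of 0.999 is assumed here; the function name and toy values are ours.

```python
# Sketch of one Adam update step with the stated settings: learning rate
# 0.0002 and first-moment decay 0.5. The second-moment decay (0.999) is an
# assumed default, since the value is garbled in the source text.

def adam_step(param, grad, m, v, t, lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad           # first moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad    # second moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)   # parameter update
    return param, m, v

p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
print(round(p, 6))  # 0.9998
```

A low beta1 such as 0.5 shortens the gradient-averaging memory, a choice commonly made when training image-to-image networks.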
If the HFIC-Net method were used for in situ CT analysis in a training environment based on an Intel(R) Core(TM) i7-8700 @ 3.20 GHz CPU and an NVIDIA RTX 2070 GPU, it would incur the following computational cost. (1) Preprocessing time for forming the dataset: the time required to generate the low-quality projected sinograms, high-frequency information constraints, and sinogram labels, approximately 354 s. (2) HFIC-Net training time: approximately 33 h. The actual computational cost of deploying HFIC-Net would therefore be approximately 33 h more than when not using this method.
Quantitative parameters were adopted to evaluate the HFIC-Net. The parameters were used for evaluating the difference between the reconstructed and original images, including (1) the structural similarity index (SSIM), (2) the normalized mean square criterion D, and (3) the normalized average absolute distance criterion R [ ]. The SSIM is calculated as

SSIM = \frac{(2 \mu_f \mu_{\hat f} + C_1)(2 \sigma_{f \hat f} + C_2)}{(\mu_f^2 + \mu_{\hat f}^2 + C_1)(\sigma_f^2 + \sigma_{\hat f}^2 + C_2)},

where f represents the pixel values of the original image; \hat f represents the pixel values of the reconstructed image; \mu_f and \mu_{\hat f} represent the averages of all pixel values in each image; \sigma_f and \sigma_{\hat f} represent the standard deviations; and \sigma_{f \hat f} represents the covariance. The constants C_1 and C_2 are set as in [ ]. The SSIM is a criterion for the structural similarity between the reconstructed and original images, and its value range is [0, 1]. The larger the value, the higher is the reconstruction accuracy.
Parameters D and R are used to evaluate the relative errors of reconstruction. Parameter D emphasizes large deviations at a few points of the reconstructed image, while R emphasizes small deviations at most points of the reconstructed image. The smaller these parameter values, the higher is the reconstruction quality.
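Assuming D and R follow the standard normalized mean square distance and normalized mean absolute distance definitions (their exact formulas are not given in this excerpt), a minimal sketch is:

```python
# Sketch of the two relative-error criteria described above, assuming the
# standard normalized mean square distance (D) and normalized mean absolute
# distance (R) definitions; exact formulas are not in the excerpt.

def criterion_d(original, reconstructed):
    """Normalized root-mean-square distance: sensitive to a few large errors."""
    mean = sum(original) / len(original)
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum((o - mean) ** 2 for o in original)
    return (num / den) ** 0.5

def criterion_r(original, reconstructed):
    """Normalized mean absolute distance: sensitive to many small errors."""
    num = sum(abs(o - r) for o, r in zip(original, reconstructed))
    den = sum(abs(o) for o in original)
    return num / den

orig = [0.0, 1.0, 2.0, 3.0]
recon = [0.0, 1.0, 2.0, 2.0]
print(round(criterion_r(orig, recon), 4))  # 0.1667
```

Because D squares the residuals before normalizing, a single large outlier inflates it disproportionately, whereas R responds more evenly to many small deviations, which matches the qualitative descriptions above.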
Reconstruction results of simulation data
1. Comparison with other methods based on simulation data
The effectiveness of the HFIC-Net was verified through nu- merical experiments with simulation data. The simulation data were a series of randomly generated particle images: 6400 randomly generated images were used as the training set, and another 100 images were generated as the testing set.
A complete sinogram sampled over the full angular range was used as the label for the sinogram domain. Then, an ultra-sparse angle sinogram of 8 projections was used as the input of HFIC-Net, and high-quality model images were used as the label for the tomogram domain. Finally, the reconstructed tomogram was output.
(Color online) (a) Original image. (b) FBP reconstruction results at eight angles. (c) FBP reconstruction results at 360 angles. (d) HFIC-Net reconstruction results at eight angles.

The effectiveness of the new method was verified by comparing the simulation data test results with those of the most commonly used FBP method. The SSIM achieved by the HFIC-Net increased nearly fivefold compared with the result of FBP with eight angles. In addition, the HFIC-Net can use only eight angle projections to achieve the reconstruction effect of the FBP method in 360 projections; Fig. (c) and (d) appear almost identical. A comprehensive quantitative evaluation revealed that the SSIM of image structural similarity increased from 0.1951 for the eight-angle FBP to 0.9620. Further, in terms of the relative image error, the HFIC-Net was superior or equivalent to the full-angle FBP.
The reconstruction quality of the HFIC-Net was compared with that of the FBP-Conv, DDC-Net, and SART-FDTV-ASD [ ] methods. The results for one of the 100 testing models with eight angles obtained by several methods are illustrated in Fig. . Under the eight-angle sampling conditions, although the results of the FBP-Conv, SART-FDTV-ASD, and DDC-Net methods were considerably better than those of FBP, the details of the reconstruction results were biased. The reconstruction quality of the HFIC-Net was improved compared with those of the other methods. Image details were preserved, and internal artifacts were significantly suppressed. Fig. (g)–(k) shows the absolute difference between the results of each method and the original image to demonstrate the effect of the new method more clearly. The difference between the results of the HFIC-Net and the original image was minimal. Figures (l) and (m) show the profiles along the blue and red solid lines in Fig. (a), respectively.
Through visual inspection, the gray value distribution of the HFIC-Net method is the closest to that of the original image. The results of the other algorithms, indicated by the red arrows, contain erroneous information that is far from the real label. The comparison results indicate that the proposed new method has advantages in detail characterization.
(b)–(f) FBP, FBP-Conv, DDC-Net, SART-FDTV-ASD, and HFIC-Net reconstruction results at eight angles, respectively. (g)–(k) Absolute differences of (b)–(f) with respect to the original image, respectively. (l) Profiles along the blue solid line in Fig. (a). (m) Profiles along the red solid line in Fig. (a).
For a quantitative analysis, the average calculation results of the 100 testing images are listed in Table . The reconstruction results of the HFIC-Net are better than those of FBP-Conv, DDC-Net, and SART-FDTV-ASD. The SSIM increases from 0.1951, 0.7937, 0.9212, and 0.9309 for FBP, SART-FDTV-ASD, FBP-Conv, and DDC-Net, respectively, to 0.9620 for HFIC-Net. The parameter D decreases from 1.2968, 0.2786, 0.1247, and 0.1448 for FBP, SART-FDTV-ASD, FBP-Conv, and DDC-Net, respectively, to 0.0978. The HFIC-Net also achieves a good performance for parameter R, which emphasizes small errors at most points.
(c)–(g) Results of FBP, FBP-Conv, DDC-Net, SART-FDTV-ASD, and HFIC-Net for the ROI, respectively. Red arrows mark major areas of visual difference.
This new method shows outstanding advantages in local detail representation. The ROI marked by the red rectangle in Fig. (a) is enlarged in Fig. (b) to further demonstrate the performance of this new method. Figure (c)–(g) corresponds to the results of the different algorithms for the ROI, respectively. The major areas of visual difference are marked by red arrows. In Fig. (c), the reconstruction results of FBP contain almost no effective information; the local detail structure is submerged. In Fig. (d)–(f), the reconstruction results are wrong, i.e., the original small particles disappear. In Fig. (g), the HFIC-Net reconstruction result demonstrates clear edges and accurate structures. The comparison results indicate that the proposed new method has advantages in detail characterization.
The new method has significant advantages in terms of SSIM, which increases from 0.3679, 0.4000, 0.4144, and 0.4247 for FBP, DDC-Net, FBP-Conv, and SART-FDTV-ASD, respectively, to 0.7318 for HFIC-Net. The parameter D decreases from 1.6070, 1.4716, 1.2590, and 1.2445 for FBP, DDC-Net, SART-FDTV-ASD, and FBP-Conv, respectively, to 0.7691. The parameter R decreases from 0.2214, 0.1693, 0.1437, and 0.1343 for FBP, DDC-Net, SART-FDTV-ASD, and FBP-Conv, respectively, to 0.0737.
2. Ablation Study
FBP-Conv was used as the baseline for adding compo- nents to further evaluate the effectiveness of each module in HFIC-Net. The configuration involving comparison includes the following four groups: (1) Baseline FBP-Conv, (2) dual domain DDC-Net without high-frequency information con- straints, (3) FBP-Conv constrained by only high-frequency information in a sinogram domain, and (4) HFIC-Net.
Quantitative results in Table 4 confirmed that adding high-frequency information constraints in the HFIC-Net method was beneficial for optimizing the quality of sparse angle CT reconstruction.
SSIM significantly improved compared to the baseline FBP-Conv method.
Compared to not adding high-frequency information constraints, SSIM increased from 0.9309 to 0.9620, confirming the effectiveness of adding high-frequency information constraints.
Thus, this proposed new method showed superior performance in detail restoration and artifact reduction and can be considered an accurate reconstruction method.
Validation of real experimental data

The HFIC-Net method was applied to the reconstruction of actual SR-CT experimental projection data to evaluate the effectiveness of the new method in practical applications.
The experiment was conducted at the BL13W1 beamline of the Shanghai Synchrotron Radiation Facility (SSRF). The real experimental data comprised a series of tomograms of particle samples. The training set consisted of 4300 tomograms, and another 50 were selected as the testing set to verify the training results. The number of sparse sampling angles was 8.

(b)–(f) FBP, FBP-Conv, DDC-Net, SART-FDTV-ASD, and HFIC-Net reconstruction results at eight angles, respectively. (g)–(k) Absolute differences of (b)–(f) with respect to the original image, respectively. (l) Profiles along the blue solid line in Fig. (a). (m) Profiles along the red solid line in Fig. (a).
[Figure caption: (b) Local enlarged region of (a). (c)–(g) Results of FBP, FBP-Conv, DDC-Net, SART-FDTV-ASD, and HFIC-Net at eight angles for the ROI, respectively. Red arrows mark major areas of visual difference.]
Compared to the other methods, HFIC-Net also shows certain advantages. The results obtained for one of the 50 testing models with eight angles using the several methods are shown in the figure.
Compared with the other algorithms, the new method improves the quality of reconstruction, with clear boundaries and complete structures. The FBP reconstruction results exhibit serious truncation artifacts under the eight-angle sampling condition, and the detailed structural information is distorted. Compared with the FBP results, the artifacts of the FBP-Conv, SART-FDTV-ASD, and DDC-Net methods are significantly suppressed and the visual effect is improved. Unfortunately, considerable erroneous information remains in the detailed structures. The reconstruction quality of HFIC-Net is improved compared with that of the other methods: image details are preserved, and the internal artifacts are significantly suppressed. Panels (g)–(k) show the absolute difference between the results of each method and the original image to demonstrate the effect of the new method more clearly. Panels (l) and (m) show the profiles along the blue and red solid lines in panel (a), respectively. Through visual inspection, the gray-value distribution of the HFIC-Net result is the closest to that of the original image. The results of the other algorithms contain erroneous information at the locations indicated by the red arrows, far from the real label. These comparisons demonstrate that the proposed method has advantages in detail characterization.
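The two diagnostics used above, per-pixel absolute-difference maps and gray-value line profiles, are straightforward to compute. The sketch below uses toy arrays in place of the paper's tomograms.

```python
import numpy as np

def abs_difference(recon, reference):
    """Per-pixel absolute error map, as in the (g)-(k) difference panels."""
    return np.abs(np.asarray(recon, float) - np.asarray(reference, float))

def line_profile(image, row):
    """Gray-value profile along a horizontal line, as in panels (l) and (m)."""
    return np.asarray(image, float)[row, :]

# Toy example: a reconstruction that introduces a spurious bright patch.
ref = np.zeros((8, 8))
rec = ref.copy()
rec[2:4, 2:4] = 0.3  # artifact introduced by the reconstruction

diff = abs_difference(rec, ref)
print(diff.max())            # 0.3 - the error map highlights the artifact
print(line_profile(rec, 2))  # nonzero only where the artifact lies
```

A profile passing through an artifact deviates from the reference curve exactly where the difference map lights up, which is how the red-arrow regions in the figure were identified.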
The new method also demonstrated advantages in local detail representation. The ROI indicated by the red rectangle in panel (a) is enlarged in panel (b) to further demonstrate the performance of the new method. Panels (c)–(g) correspond to the results of the different algorithms for the ROI. The main differences between the results of these methods and the original image are marked by red arrows. In panel (c), the detailed structural information of the FBP reconstruction is almost completely lost. In panels (d)–(f), the reconstruction results are wrong: there is no particle gap. In panel (g), the HFIC-Net reconstruction demonstrates clear edges and accurate structures. These comparisons demonstrate that the proposed method achieves accurate reconstruction.
With the rapid development of deep learning, some researchers have begun to construct deep neural networks based on the CT reconstruction process to overcome the bottleneck problem of in situ SR-CT characterization of ultrafast evolution processes.
Deep neural networks have been arranged in the sinogram and tomogram domains, and some progress has been achieved. However, a good visual effect does not indicate the accurate reconstruction of a tomogram. Therefore, we added a "high-frequency information constraint," which reflects the expression of real detail information, on top of DDC-Net to improve the fidelity of the reconstructed results.
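One way to realize such a constraint is to add a loss term that compares the high-pass-filtered prediction and target alongside the usual pixel-wise loss. The paper does not specify the high-pass operator or weighting used in HFIC-Net, so the Laplacian kernel and the weight below are illustrative assumptions only.

```python
import numpy as np

# Assumed high-pass operator: a 3x3 Laplacian kernel (one common choice;
# HFIC-Net's actual operator is not specified in this section).
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def highpass(img):
    """High-frequency component of an image (valid convolution region)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * LAPLACIAN).sum()
    return out

def hf_constrained_loss(pred, target, weight=0.1):
    """Pixel-wise MSE plus a weighted high-frequency consistency term."""
    mse = np.mean((pred - target) ** 2)
    hf = np.mean((highpass(pred) - highpass(target)) ** 2)
    return mse + weight * hf

# A uniform offset carries no high-frequency error, so the extra term
# vanishes and only the plain MSE remains; a sharp-edged mismatch of
# equal MSE would be penalized more.
target = np.zeros((8, 8))
smooth = np.full((8, 8), 0.1)
print(hf_constrained_loss(smooth, target))  # ~0.01 (plain MSE only)
```

The design intent this sketches is that smooth, low-frequency errors and edge-destroying errors are no longer treated equally, steering the network toward preserving fine structure.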
The proposed HFIC-Net achieved the best score on quantitative evaluations such as SSIM. HFIC-Net recovers accurate structures and weak detail information under ultra-sparse angle conditions. Fig. 8 [FIGURE:8] shows that the HFIC-Net method accurately reconstructed the gaps between particles.
This weak but important information is crucial for analyzing the mechanism of material evolution; however, other methods easily lose these details. In conclusion, the HFIC-Net method has advantages in image detail restoration, which is expected to improve the accuracy of SR-CT characterization under ultra-sparse angle conditions.
CONCLUSION
A novel high-frequency information constraint network called HFIC-Net was proposed to solve the ultra-sparse angle reconstruction problem of in situ SR-CT during rapid evolution. In this method, a high-frequency information loss accounting for detailed structural constraints was added on top of the "sinogram–tomogram domain" joint optimization. The effectiveness of the new method was verified with numerical simulations and real SR-CT experimental data.
(1) Three commonly used image quality evaluation parameters, SSIM, D, and R, were used in the numerical experiments on simulation data to evaluate the reconstruction of tomograms from eight sparsely sampled angles. The new method uses only eight angle projections to achieve the reconstruction effect of the FBP method with 360 projections. The quantitative results indicated that adding the high-frequency information constraint improved the similarity of the image structure.
(2) Adding the "high-frequency information" loss had a positive effect on the accurate reconstruction of HFIC-Net. In the ROI of the tomogram, the reconstruction result obtained using the DDC-Net method contained erroneous information, whereas the new method produced complete and clear details. The new method had advantages in detail restoration and artifact reduction.
(3) A test on actual experimental data was conducted to evaluate the effect of HFIC-Net in practical applications. The new method improved the quality of the tomogram and had advantages in restoring particle details and accurate characterization.
In future work, HFIC-Net can be combined with current advanced deep learning approaches to provide richer information constraints for deep neural networks. In addition, its applicability in in situ experimental environments can be further evaluated. Adaptive learning can also be used to ensure applicability under different imaging conditions and material sample systems.
In conclusion, the HFIC-Net proposed in this paper is suitable for the in situ SR-CT characterization of ultrafast evolution processes.
ACKNOWLEDGMENTS
This research was supported by the National Natural Science Foundation of China (Nos. 12027901 and 12041202) and the Synchrotron Radiation Joint Fund of the University of Science and Technology of China (Nos. KY2090000059 and KY2090000054).
CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Jingwei Li: Software, Data curation, Writing - Original draft. Yu Xiao: Validation, Writing - Review & editing. Yongcun Li: Writing - Review & editing. Guohao Du: Writing - Review & editing. Xiaofang Hu: Supervision, Funding acquisition. Feng Xu: Conceptualization, Funding acquisition.
DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.