Liver CT image segmentation based on generative adversarial network

Authors: 鄧鴻  鄧雅心  丁廷波  嚴(yán)中紅  王富平  陳忠敏
Affiliation: School of Pharmacy and Bioengineering, Chongqing University of Technology (Chongqing 400054). Corresponding author: 陳忠敏, Professor. E-mail: [email protected]
Keywords: generative adversarial network; liver CT image segmentation; fully convolutional neural network; deep learning; liver 3D reconstruction
CLC number: R318.04
Year·Volume·Issue (Pages): 2021·40·4 (367-376)
Abstract:

Objective Segmenting the liver region from abdominal computed tomography (CT) images is essential for the early diagnosis of liver disease, liver size estimation, and 3D reconstruction, so segmenting the liver boundary accurately and quickly is a key research problem. Methods Using a publicly available liver tumor dataset, this study combines a generative adversarial network (GAN) with the UNET network to segment the liver automatically from CT images. Abdominal CT images are first fed into the UNET network for segmentation prediction, and adversarial training with the GAN then pushes the predictions closer to the ground truth; the effect of different distance constraint functions on the segmentation result is also explored during adversarial training. The predicted segmentations are evaluated on the Combined (CT-MR) Healthy Abdominal Organ Segmentation Challenge Data (CHAOS) dataset using the Dice similarity coefficient (Dice), intersection over union (IoU), pixel accuracy (PA), relative volume difference (RVD), and relative surface area error (RSSD). Results The GAN-UNET network with an L2 distance constraint segments the liver well, reaching a Dice of 94.9%, an IoU of 91.3%, and a PA of 99.4%, a clear improvement over UNET's 92.3%, 86.7%, and 95.8%. For the three-dimensional metrics, the proposed method achieves an RVD of 0.026 and an RSSD of 0.079, markedly lower than UNET's 0.042 and 0.191. Conclusions Applying generative adversarial training to the UNET network and introducing a distance constraint function during training improve liver segmentation performance, and the segmentation results can be used in computer-aided diagnosis systems.
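
The Methods describe a UNET generator trained adversarially against a discriminator, with a distance term (here L2) constraining the predicted mask toward the ground truth. The following is a minimal PyTorch sketch of one such training step, assuming a pix2pix-style setup; the module names (SimpleUNet, PatchDiscriminator), the tiny network depths, and the weight lambda_l2 are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SimpleUNet(nn.Module):          # stand-in for the full UNET generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):             # x: (N, 1, H, W) CT slice -> (N, 1, H, W) mask probability
        return self.net(x)

class PatchDiscriminator(nn.Module):  # judges concatenated (CT slice, mask) pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))

    def forward(self, ct, mask):
        return self.net(torch.cat([ct, mask], dim=1))

generator, discriminator = SimpleUNet(), PatchDiscriminator()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce, l2 = nn.BCEWithLogitsLoss(), nn.MSELoss()
lambda_l2 = 100.0                                       # weight of the L2 distance constraint (assumed)

ct = torch.randn(4, 1, 128, 128)                        # dummy batch of abdominal CT slices
gt_mask = (torch.rand(4, 1, 128, 128) > 0.5).float()    # dummy ground-truth liver masks

# Discriminator step: real (CT, ground-truth mask) pairs vs. generated pairs.
fake_mask = generator(ct).detach()
d_real = discriminator(ct, gt_mask)
d_fake = discriminator(ct, fake_mask)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close (in L2) to the ground truth.
pred_mask = generator(ct)
d_pred = discriminator(ct, pred_mask)
loss_g = bce(d_pred, torch.ones_like(d_pred)) + lambda_l2 * l2(pred_mask, gt_mask)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()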
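
For context, the overlap and accuracy metrics reported in the Results have the following standard definitions; the abstract does not state the exact RVD variant, so the unsigned form below is an assumption:

\[
\mathrm{Dice}(A,B)=\frac{2\,|A\cap B|}{|A|+|B|},\qquad
\mathrm{IoU}(A,B)=\frac{|A\cap B|}{|A\cup B|},
\]
\[
\mathrm{PA}=\frac{TP+TN}{TP+TN+FP+FN},\qquad
\mathrm{RVD}=\frac{|V_{\mathrm{pred}}-V_{\mathrm{ref}}|}{V_{\mathrm{ref}}},
\]

where A and B are the predicted and ground-truth liver masks, TP/TN/FP/FN are per-pixel classification counts, and V_pred and V_ref are the predicted and reference liver volumes.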

