
Automatic segmentation method based on a fully convolutional neural network for rectal cancer tumors in magnetic resonance images

Authors: 冉昭, 簡俊明, 王蒙蒙, 趙星羽, 高欣
Affiliations: University of Science and Technology of China (Hefei 230026); Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences (Suzhou 215163)
Keywords: rectal tumor segmentation; neural network; multiple side outputs; magnetic resonance imaging
CLC number: R318.04
Published: 2019, Vol. 38, No. 5, pp. 465-471
Abstract:

Objective Accurate segmentation of rectal tumors from images is a basic and crucial task for the diagnosis and treatment of rectal cancer. Currently, rectal tumors are usually segmented by radiologists on a slice-by-slice basis, which is highly subjective and labor-intensive. Therefore, this paper proposes a fully automatic segmentation network for magnetic resonance images of rectal tumors, which can not only effectively reduce the burden on radiologists but also improve the repeatability of tumor segmentation results. Methods A pre-trained ResNet50 model was introduced for feature extraction, and three side-output modules were added to the hidden layers of ResNet50 to guide multi-scale feature learning. The final tumor boundaries were determined by fusing the predictions of the side-output modules. The proposed model was compared with a U-net based model, and the impacts of different region of interest (ROI) sizes and loss functions were also evaluated. Results We trained and evaluated the models on data from 512 patients of the Sixth Affiliated Hospital of Sun Yat-sen University (Guangzhou, China): the T2-weighted magnetic resonance images (T2W-MRIs) of 461 randomly selected patients were used for model training, and the T2W-MRIs of the remaining 51 patients were used for performance evaluation. The proposed model was superior to the U-net based model, achieving an average Dice similarity coefficient (DSC) of 83.61%, an average sensitivity of 89.10%, an average specificity of 96.36%, and an average Hausdorff distance (HD) of 8.49. In addition, when the ROI contained rectal tumor tissue, the smaller the ROI, the higher the segmentation accuracy. For a given ROI size, there were no significant differences in segmentation results among the loss functions evaluated. Conclusion The proposed network can accurately delineate tumor boundaries and could help improve physicians' efficiency.
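The abstract outlines the architecture only at a high level: a pre-trained ResNet50 backbone, three side-output modules attached to hidden layers for multi-scale supervision, and a fusion step that produces the final mask. The sketch below shows one minimal way such a network could be assembled, assuming Keras/TensorFlow; the tapped backbone layers, the upsampling scheme, the Dice loss, and all names are illustrative assumptions rather than the authors' exact configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import ResNet50

    def dice_loss(y_true, y_pred, smooth=1.0):
        # Soft Dice loss, one of several overlap-based losses commonly compared
        # for tumor segmentation (illustrative; not necessarily the paper's choice).
        inter = tf.reduce_sum(y_true * y_pred)
        union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
        return 1.0 - (2.0 * inter + smooth) / (union + smooth)

    def side_output(feature_map, input_size, name):
        # 1x1 convolution to a single-channel probability map, then bilinear
        # upsampling back to the input resolution (one deep-supervision branch).
        x = layers.Conv2D(1, 1, activation="sigmoid", name=name + "_conv")(feature_map)
        scale = input_size // int(feature_map.shape[1])
        return layers.UpSampling2D(size=scale, interpolation="bilinear",
                                   name=name + "_up")(x)

    def build_model(input_size=256):
        # Grayscale MR slices would be replicated to three channels to match
        # the ImageNet-pretrained backbone (an assumption).
        backbone = ResNet50(include_top=False, weights="imagenet",
                            input_shape=(input_size, input_size, 3))
        # Three hidden-layer feature maps at different scales; layer names follow
        # TensorFlow's ResNet50 naming, and which stages the paper taps is assumed.
        taps = ("conv2_block3_out", "conv3_block4_out", "conv4_block6_out")
        sides = [side_output(backbone.get_layer(n).output, input_size, "side%d" % i)
                 for i, n in enumerate(taps, start=1)]
        # Fuse the three side outputs with a 1x1 convolution for the final mask.
        fused = layers.Conv2D(1, 1, activation="sigmoid",
                              name="fused")(layers.Concatenate()(sides))
        model = Model(inputs=backbone.input, outputs=sides + [fused])
        # The same loss supervises each side output and the fused output.
        model.compile(optimizer="adam", loss=dice_loss)
        return model

    model = build_model()

Under these assumptions, an ROI-cropped T2-weighted slice is fed to the backbone, the three branches yield coarse-to-fine predictions, and the fused probability map is thresholded to obtain the tumor mask.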

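The four reported metrics can be computed from a binary prediction and a reference mask as sketched below, assuming NumPy/SciPy arrays with distances in pixel units; the Hausdorff distance is taken here over all foreground points, whereas boundary-only variants are also common.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def evaluate(pred, truth):
        # pred and truth are binary masks of the same shape (2D slice or 3D volume).
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        tn = np.sum(~pred & ~truth)
        dsc = 2.0 * tp / (2.0 * tp + fp + fn)   # Dice similarity coefficient
        sensitivity = tp / (tp + fn)             # true-positive rate
        specificity = tn / (tn + fp)             # true-negative rate
        # Symmetric Hausdorff distance between the two foreground point sets.
        p_pts, t_pts = np.argwhere(pred), np.argwhere(truth)
        hd = max(directed_hausdorff(p_pts, t_pts)[0],
                 directed_hausdorff(t_pts, p_pts)[0])
        return dsc, sensitivity, specificity, hd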
