Cardiac CT image segmentation based on convolutional neural network and image saliency

Authors: Zhao Fei, Liu Jie
Affiliation: Department of Biomedical Engineering, Beijing Jiaotong University (Beijing 100044)
Keywords: cardiac CT image; image segmentation; convolutional neural network; image saliency
Classification number: R318.04
Year·Volume·Issue (Pages): 2020·39·1 (48-55)
Abstract:

Objective In cardiac medical imaging, extraction and segmentation of the region of interest is the key to locating cardiac lesions. Because of cardiac dilation and contraction and the flow of blood, cardiac CT images are prone to weak boundaries and artifacts, and traditional segmentation algorithms tend to over-segment. To address this, a cardiac CT image segmentation method based on a convolutional neural network and image saliency is proposed. Methods A convolutional neural network is used to locate the target area and filter out regions with inconspicuous contrast enhancement, such as ribs and muscle, so that the region of interest can be cropped out. The contrast within the region of interest is then used to compute and enhance the saliency values of the cardiac tissue, the heart is extracted from the resulting saliency map, and the result is compared with the segmentation produced by a region-growing algorithm. Image data of 11 patients from Taizhou People's Hospital are used to train and test the model: 9 cases are randomly selected for training and the remaining 2 are used for testing. Results The proposed model achieves segmentation accuracies of 92.79%, 92.79%, and 94.11% on the basal, middle, and apical segments of the heart, respectively, all better than the region-growing-based method. Conclusions The segmentation method based on convolutional neural network and image saliency accurately recovers the outer contour of the heart with smoother contour edges, fully meets the needs of fully automatic heart segmentation in CT image sequences, and yields segmented images that make it easier for doctors to assess a patient's cardiac health and lesion locations.
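The pipeline described above (CNN localization of the region of interest, followed by contrast-based saliency computation and thresholding of the saliency map) can be sketched roughly as follows. This is a minimal illustration under assumed details: the bounding-box format, the histogram bin count, and the 0.5 threshold are placeholders rather than parameters from the paper, and the saliency measure is a simple histogram-based global-contrast variant, not necessarily the authors' exact formulation.

import numpy as np

def global_contrast_saliency(roi, n_bins=64):
    # Quantize ROI intensities into n_bins levels and count pixels per level.
    lo, hi = float(roi.min()), float(roi.max())
    bins = ((roi - lo) / max(hi - lo, 1e-6) * (n_bins - 1)).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    centers = np.arange(n_bins, dtype=float)
    # Saliency of a level = its intensity contrast to every other level,
    # weighted by how many pixels carry that other level.
    contrast = np.abs(centers[:, None] - centers[None, :])
    level_saliency = (contrast * counts[None, :]).sum(axis=1)
    level_saliency /= level_saliency.max() + 1e-12        # normalize to [0, 1]
    return level_saliency[bins]                           # per-pixel saliency map

def segment_heart(ct_slice, bbox, thresh=0.5):
    # Crop the region of interest predicted by the CNN locator, compute its
    # saliency map, and keep pixels whose saliency exceeds the (assumed) threshold.
    r0, r1, c0, c1 = bbox
    roi = ct_slice[r0:r1, c0:c1].astype(float)
    saliency = global_contrast_saliency(roi)
    mask = np.zeros(ct_slice.shape, dtype=bool)
    mask[r0:r1, c0:c1] = saliency > thresh
    return mask

For a 512×512 slice and a hypothetical predicted box, segment_heart(ct_slice, (120, 380, 100, 400)) would return a boolean heart mask; in the described method this step follows the CNN localization and precedes the comparison against region growing.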
