Tongue image classification based on the Triplet Loss metric

Authors: Sun Meng, Zhang Xinfeng
Affiliation: Faculty of Information Technology, Beijing University of Technology (Beijing 100124)
Keywords: tumor; tongue image; classification; deep learning; Triplet Loss; FaceNet
Classification number: R318.04
Year·Volume·Issue (pages): 2020·39·2 (131-137)
Abstract:

Objective Constitution classification of tongue images is important for the subsequent objective syndrome differentiation in tumor patients. However, some types of tongue images used in traditional Chinese medicine are difficult to collect, so the available samples fall short of the quantities that popular deep learning methods require. Moreover, deep learning based on conventional classification focuses only on finding similar features, which leads to poor classification performance on tongue images, where the feature differences between classes are small. This paper therefore proposes a metric classification method based on Triplet Loss, which maximizes the feature distance between samples of different classes while reducing the distance between samples of the same class. Methods First, the convolutional neural network Inception-ResNet-V1 is built to extract high-dimensional abstract features. The L2 norm is then used to further constrain the distribution of these features, and the high-dimensional features are compressed by dimensionality reduction. Finally, Triplet Loss is applied to obtain an effective mapping space, so that similarity can be computed from the distance between tongue-image feature vectors to perform classification. Results In the feature space obtained by the proposed method, the distance between tongue images of different types is large and the distance between images of the same type is small, so tongue images with small inter-class differences can be classified more accurately and more quickly. Compared with existing methods, the proposed method improves classification accuracy by 18.34% and requires the least time.
Conclusions The proposed method achieves effective constitution classification of tongue images and has practical application value.
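The pipeline summarized above (embedding network → L2 normalization → Triplet Loss → distance-based classification) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the Inception-ResNet-V1 backbone is replaced by precomputed embeddings, and the margin value is an assumed hyperparameter.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Project each embedding onto the unit hypersphere, as in FaceNet.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared Euclidean distances between embeddings.
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    # Penalize triplets where the positive is not closer to the anchor
    # than the negative by at least `margin`.
    return np.maximum(d_ap - d_an + margin, 0.0)

def classify_by_distance(query, gallery, labels):
    # After training, classify a query tongue image by the label of its
    # nearest gallery embedding (1-NN in the learned metric space).
    dists = np.sum((gallery - query) ** 2, axis=-1)
    return labels[int(np.argmin(dists))]
```

For example, a well-separated triplet (positive identical to the anchor, negative orthogonal to it) incurs zero loss, while a swapped triplet incurs a loss of roughly the inter-class distance plus the margin.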

參考文獻(xiàn):

[1]楊曉蕾,楊超,張欽婷,孫權(quán),葛杰.惡性腫瘤患者中醫(yī)體質(zhì)類型相關(guān)研究[J].遼寧中醫(yī)藥大學(xué)學(xué)報(bào),2015,17(8):164-166. Yang XL, Yang C, Zhang QT, et al. Malignant tumor patients with traditional chinese medicine constitution type of related research[J].Journal of Liaoning University of Traditional Chinese Medicine,2015,17(8): 164-166.

[2]中國中西醫(yī)結(jié)合研究會(huì)腫瘤專業(yè)委員會(huì)中醫(yī)診斷協(xié)作組.4417例癌癥患者舌象臨床觀察[J].浙江中醫(yī)雜志, 1992, 37 (8): 368-369.

[3] 錢峻, 劉沈林.消化系惡性腫瘤舌象辨治探微[J].吉林中醫(yī)藥, 2005, 25 (12) :1-2.

[4]Litjens G, Kooi T, Bejnordi BE , et al. A survey on deep learning in medical image analysis[J]. Medical Image Analysis, 2017, 42: 60-88.

[5]Rawat W, Wang Z. Deep convolutional neural networks for image classification: a comprehensive review[J]. Neural Computation, 2017, 29(9): 2352-2449.

[6]劉飛, 張俊然, 楊豪. 基于深度學(xué)習(xí)的醫(yī)學(xué)圖像識(shí)別研究進(jìn)展[J].中國生物醫(yī)學(xué)工程學(xué)報(bào),2018,37(1):86-94. Liu F,ZhangJR,YangH. Research progress of medical image recognition based on deep learning[J]. Chinese Journal of Biomedical Engineering,2018,37(1):86-94.

[7]LeCunY, BengioY, Hinton G. Deep learning[J]. Nature,2015,521(7553):436-444.

[8] Brown GW. On small-sample estimation[J].The Annals of Mathematical Statistics,1947, 18(4): 582-585.

[9] Koch G, Zemel R, Salakhutdinov R. Siamese neural networks for one-shot image recognition[C]// International Conference on Machine Learning. Lille France:JMLR, W CP, 2015, 37.

[10]Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning[C]//Advances in Neural Information Processing Systems. Long Beach,America: NIPS, 2017: 4077-4087.

[11] Ravi S,LarochelleH. Optimization as a model for few-shot learning[C]//International Conference on Learning Representations (ICLR).Toulon, France: ICLR, 2017.

[12]張新峰, 沈蘭蓀. 加權(quán)SVM在中醫(yī)舌象分類與識(shí)別中的應(yīng)用研究[J]. 中國生物醫(yī)學(xué)工程學(xué)報(bào), 2006, 25(2):230-233. Zhang XF, Shen LS. Application of weighted SVM on the classification and recognition of tongue images[J]. Chinese Journal of Biomedical Engineering,2006, 25(2):230-233.

[13] 胡繼禮, 闞紅星. 基于卷積神經(jīng)網(wǎng)絡(luò)的舌象分類[J]. 安慶師范大學(xué)學(xué)報(bào)(自然科學(xué)版),2018,24(4):44-49. Hu JL, Kan HX.Tongue classification based on convolutional neural network[J]. Journal of Anqing Normal University(Natural Science Edition),2018, 24(4): 44-49.

[14] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA:IEEE Press,2016: 770-778.

[15]Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, Massachusetts,USA: IEEE Press, 2015: 1-9.

[16]Lin, Min, Chen, Qiang, Yan, Shuicheng. Network In Network[J]. Computer Science, 2013.

[17]Amato G, Falchi F. kNN based image classification relying on local feature similarity[C]// Third International Workshop on Similarity Search and Applications. Istanbul, Turkey:SISAP, 2010: 101-108.

[18]Bottou L. Stochastic gradient learning in neural networks[J]. Proceedings of Neuro-N?mes, 1991,91(8): 12.

[19]Wold S, Esbensen KH, Geladi P. Principal component analysis[J]. Chemometrics and Intelligent Laboratory Systems, 1987, 2(1-3):37-52.

[20] 王琦.中醫(yī)體質(zhì)學(xué)[M].北京:中國醫(yī)藥科技出版社, 1995.

[21]Yosinski J, Clune J, Bengio Y, et al. How transferable are features in deep neural networks?[C]//Advances in Neural Information Processing Systems(NIPS). Montreal, Canada:NIPS,2014: 3320-3328.

[22]Schroff F, Kalenichenko D, Philbin J. Facenet: a unified embedding for face recognition and clustering[C]// IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE Press, 2015: 815-823.

[23]LeCun Y, Boser BE, Denker J, et al. Handwritten digit recognition with a back-propagation network[C]//Advances in Neural Information Processing Systems. 1990: 396-404.

[24]楊晶東, 張朋. 基于遷移學(xué)習(xí)的全連接神經(jīng)網(wǎng)絡(luò)舌象分類方法[J]. 第二軍醫(yī)大學(xué)學(xué)報(bào), 2018, 39(8): 897-902. Yang JD, Zhang P. Tongue image classification method based on transfer learning and fully connected neural network[J]. Academic Journal of Second Military Medical University, 2018, 39(8): 897-902.

[25] Parkhi OM, Vedaldi A, Zisserman A. Deep face recognition[C]//British Machine Vision Conference. Swansea, UK: BMVC, 2015.

[26] Taigman Y, Yang M, Ranzato MA, et al. DeepFace: closing the gap to human-level performance in face veri?cation[C]// Conference on Computer Vision and Pattern Recognition. Columbus, USA:IEEE Press, 2014: 1701–1708.

服務(wù)與反饋:
文章下載】【加入收藏
提示:您還未登錄,請(qǐng)登錄!點(diǎn)此登錄
 
友情鏈接  
Address: Editorial Office of Beijing Biomedical Engineering, Anzhen Hospital, Andingmenwai, Beijing
Tel: 010-64456508  Fax: 010-64456661
Email: [email protected]