Keypoint-Based Clothing Retrieval
Published: 2018-07-12 08:46
Topics: keypoints + deep convolutional neural networks. Source: Journal of Computer Applications (《計(jì)算機(jī)應(yīng)用》), 2017, No. 11
【Abstract】: Retrieval of identical or similar clothing styles currently falls into two categories: text-based and content-based. Text-based methods require large numbers of annotated samples and suffer from missing or inconsistent annotations caused by human subjectivity. Content-based methods generally extract color, shape, and texture features from clothing images for similarity measurement, but they cope poorly with background color interference and with garment deformation caused by viewpoint and pose. To address these problems, a keypoint-based clothing retrieval method is proposed. A cascaded deep convolutional neural network locates clothing keypoints, and low-level visual features from the keypoint regions are fused with high-level semantic features of the whole image. Compared with traditional retrieval methods, the proposed algorithm handles garment deformation caused by viewpoint and pose as well as complex-background interference, requires no large-scale sample annotation, and is robust to background and deformation. Comparative experiments with common algorithms on the Fashion Landmark and BDAT-Clothes datasets show that the proposed algorithm effectively improves retrieval precision and recall.
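The retrieval pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the cascaded deep CNN feature extractors are stubbed out, and the fusion is modeled as simple concatenation of keypoint-region feature vectors with a whole-image feature vector, followed by cosine-similarity ranking over a gallery.

```python
# Hypothetical sketch of keypoint-region / whole-image feature fusion and
# similarity-based retrieval. Feature extraction (the paper's cascaded deep
# CNN) is assumed to have already produced the vectors below.
import math

def fuse(local_feats, global_feat):
    """Concatenate keypoint-region features with the whole-image feature."""
    fused = []
    for f in local_feats:      # low-level visual features per keypoint region
        fused.extend(f)
    fused.extend(global_feat)  # high-level semantic feature of the image
    return fused

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_feat, gallery):
    """Rank gallery entries (id, fused_feature) by similarity to the query."""
    scored = [(gid, cosine_similarity(query_feat, feat)) for gid, feat in gallery]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

Because features are taken from keypoint regions rather than the full frame, background pixels contribute less to the comparison, which is the intuition behind the robustness claims in the abstract.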
【Affiliation】: Jiangsu Key Laboratory of Big Data Analysis Technology (Nanjing University of Information Science and Technology)
【Funding】: National Natural Science Foundation of China (61622305, 61502238, 61532009); Natural Science Foundation of Jiangsu Province (BK20160040)
【CLC Number】: TP183; TP391.41