

Research on Key Technologies for Machine-Vision-Based Human State Monitoring

Published: 2018-07-13 10:06
[Abstract]: The incidence of myopia among adolescents in China is rising year by year, and poor sitting posture is one of its causes. Meanwhile, many road traffic accidents are caused by driver distraction and fatigue. Addressing these two problems, this thesis studies machine-vision-based monitoring of sitting posture and fatigue. Building on face detection, four effective monitoring methods are proposed for two different scenarios: sitting-posture discrimination based on facial skin-color statistics, head-state discrimination based on matching key feature points within a region, yawn discrimination based on fused edge statistics over the mouth's activity region, and closed-eye discrimination based on eye and pupil detection. Two systems were designed: a "poor sitting posture monitoring system" and a "simulated head-state and fatigue monitoring system for driver assistance."

First, in the sitting-posture monitoring system, face detection on RGB color video is improved: skin-color-based face detection effectively reduces the false-detection rate; a maximum-single-target strategy detects the face of the main subject; and a detection method with a scale-adaptive window uses the size of the single subject detected in the previous frame to adaptively narrow the window's scale range, greatly increasing detection speed. Second, a sitting-posture monitoring method based on facial skin-color statistics is proposed: left, middle, and right skin-color discrimination regions are laid out around the detected face frame; comparing the skin color in these three regions against the correct posture determines leaning left or right, while comparing the skin-color area inside the face frame against that of the correct posture determines leaning forward or backward. Experiments show that, provided the background is not skin-colored, the method achieves 100% accuracy for left/right and 97.3% for forward/backward.

For monitoring poor head states and driver fatigue, three discrimination methods based on active infrared video are proposed: (1) head-state discrimination based on matching key feature points within a region, which analyzes the positions of the three best-matched pairs of SURF feature points between the correct-posture template and the live monitoring region to judge whether the current head state is correct; (2) yawn discrimination based on fused edge statistics over the mouth's activity region: since statistics show the mouth almost always moves within the lower-middle part of the face detection frame, a mouth activity region is laid out on the face frame, the vertical projection ratio of the fused Prewitt and Canny edge maps in that region measures the degree of mouth opening, and yawning is judged from that degree; (3) closed-eye discrimination based on eye and pupil detection, which delimits the approximate eye region from the face frame for more accurate eye localization, greatly reducing the false detections of a global search while improving efficiency and accuracy; the detected eyes are then suitably enlarged and Hough circle detection is applied, so that the presence or absence of a Hough circle indicates whether the eye is open or closed.

Experimental results show that the head-state, yawn, and eye-fatigue discrimination modules of the "simulated head-state and fatigue monitoring system for driver assistance" achieve accuracies of 98.9%, 100%, and 97.8%, respectively.
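The scale-adaptive detection window described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the slack factor and the absolute size bounds are assumed values.

```python
# Sketch of a scale-adaptive face-detection window (illustrative only:
# the slack factor and the min/max size bounds are assumed values).

def window_range(prev_face_size, slack=0.3, lo=24, hi=400):
    """Given the face size found in the previous frame, narrow the
    detector's min/max window sizes so far fewer scales are scanned."""
    if prev_face_size is None:          # no face yet: search all scales
        return lo, hi
    return (max(lo, int(prev_face_size * (1 - slack))),
            min(hi, int(prev_face_size * (1 + slack))))
```

The speedup comes from pruning the scale pyramid: once a subject is tracked, only windows near the previous face size are evaluated, and the full range is searched again whenever the face is lost.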
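The skin-color-statistics posture check can be sketched like this. The Cr/Cb box thresholds are a common textbook choice, and the strip layout, tolerances, and "larger skin area means leaning forward" reading of the face frame are assumptions for illustration, not the thesis's exact parameters.

```python
import numpy as np

def skin_mask_ycrcb(img_ycrcb):
    """Common Cr/Cb box threshold for skin pixels (assumed values)."""
    cr = img_ycrcb[..., 1]
    cb = img_ycrcb[..., 2]
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)

def posture_left_right(mask, face_box, ref_ratios, tol=0.15):
    """Compare skin ratios in strips left of, on, and right of the face
    box against the ratios recorded for the correct posture."""
    x, y, w, h = face_box
    strips = [mask[y:y + h, max(x - w, 0):x],   # strip left of the face
              mask[y:y + h, x:x + w],           # the face region itself
              mask[y:y + h, x + w:x + 2 * w]]   # strip right of the face
    ratios = [s.mean() if s.size else 0.0 for s in strips]
    if ratios[0] - ref_ratios[0] > tol:
        return "lean-left"                      # extra skin on the left
    if ratios[2] - ref_ratios[2] > tol:
        return "lean-right"
    return "upright"

def posture_front_back(mask, face_box, ref_area, tol=0.2):
    """With a fixed camera, a larger skin area inside the face frame
    suggests the face moved closer (leaning forward), a smaller one
    that it moved away (leaning back)."""
    x, y, w, h = face_box
    area = int(mask[y:y + h, x:x + w].sum())
    if area > ref_area * (1 + tol):
        return "lean-forward"
    if area < ref_area * (1 - tol):
        return "lean-back"
    return "upright"
```

The reference ratios and reference area would be recorded once while the subject sits correctly, which is also why the method needs a non-skin-colored background.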
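The head-state judgment from matched feature points can be sketched as follows. The thesis uses the three best SURF matches; here the feature extraction and matching are omitted and the function receives already-matched coordinate pairs, with an assumed pixel threshold.

```python
import numpy as np

def head_state(template_pts, live_pts, max_shift=15.0):
    """Judge head pose from the mean displacement of matched keypoints
    between the correct-posture template and the live frame. The thesis
    uses the three best SURF matches; extraction is omitted here and
    the pixel threshold is an assumed value."""
    t = np.asarray(template_pts, float)
    l = np.asarray(live_pts, float)
    shift = np.linalg.norm(l - t, axis=1).mean()
    return "correct" if shift <= max_shift else "abnormal"
```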
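The yawn judgment from the mouth region's edge projection can be sketched like this. The binary edge map is assumed to be the OR-fusion of Prewitt and Canny detections, computed elsewhere; the openness threshold and sustained-frame count are illustrative values, not the thesis's.

```python
import numpy as np

def mouth_openness(edge_mask):
    """Vertical extent of edge rows over region height, as a proxy for
    how far the mouth is open. `edge_mask` is assumed to be the fused
    (OR-ed) Prewitt and Canny edge map of the mouth activity region."""
    rows = np.flatnonzero(edge_mask.any(axis=1))
    if rows.size == 0:
        return 0.0
    return (rows[-1] - rows[0] + 1) / edge_mask.shape[0]

def is_yawning(openness_per_frame, open_thresh=0.6, min_frames=8):
    """A yawn = the mouth held wide open over a sustained run of
    frames, distinguishing it from speech (assumed thresholds)."""
    run = 0
    for o in openness_per_frame:
        run = run + 1 if o > open_thresh else 0
        if run >= min_frames:
            return True
    return False
```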
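The Hough-circle eye-closure judgment can be sketched with a minimal brute-force circular Hough transform: an open eye exposes a roughly circular pupil, so a strong circle peak in the eye region's edge points means "open". This is not the thesis's implementation; the radius range, angular sampling, and vote threshold are illustrative assumptions.

```python
import numpy as np

def hough_circle_peak(edge_points, shape, radii, n_angles=60):
    """Brute-force circular Hough transform: each edge point votes for
    candidate centers at each radius; returns the strongest hypothesis
    as (votes, cy, cx, r)."""
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    best = (0, 0, 0, 0)
    for r in radii:
        acc = np.zeros(shape, int)
        for y, x in edge_points:
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes
        if acc.max() > best[0]:
            cy, cx = np.unravel_index(acc.argmax(), shape)
            best = (int(acc.max()), int(cy), int(cx), r)
    return best

def eye_open(edge_points, shape, radii, min_votes=12):
    """Eye judged open when some circle gathers enough votes; a closed
    eyelid yields no strong circular structure."""
    return hough_circle_peak(edge_points, shape, radii)[0] >= min_votes
```

In practice one would run this only inside the enlarged eye region delimited from the face frame, which is what keeps the search cheap and suppresses false circles elsewhere in the image.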
【Degree-granting institution】: Southwest University of Science and Technology
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41



Article ID: 2118993


Link to this article: http://sikaile.net/kejilunwen/ruanjiangongchenglunwen/2118993.html



版權(quán)申明:資料由用戶57f1a***提供,本站僅收錄摘要或目錄,作者需要?jiǎng)h除請(qǐng)E-mail郵箱bigeng88@qq.com