
Research on Key Technologies of Machine-Vision-Based Human State Monitoring

Published: 2018-07-13 10:06
【Abstract】: The incidence of myopia among adolescents in China rises year by year, and poor sitting posture is one of its causes; meanwhile, many road traffic accidents are caused by driver distraction and fatigued driving. Addressing these two problems, this thesis studies machine-vision-based monitoring of sitting posture and of fatigue. Building on face detection, it proposes monitoring methods for two different scenarios, comprising four effective techniques: sitting-posture discrimination based on face skin-color statistics, head-state discrimination based on matching key feature points within a region, yawn discrimination based on fused edge statistics over the mouth's activity region, and closed-eye discrimination based on eye and pupil detection. Two systems were designed: a "poor sitting-posture monitoring system" and a "simulated head-state and fatigue monitoring system for driver assistance".

First, the face detection used in the poor sitting-posture monitoring system is improved for RGB color video: skin-color-based detection effectively reduces the face false-detection rate; a maximum-single-target strategy keeps only the principal subject's face; and a scale-adaptive detection window is proposed, whose size range is adjusted from the size of the single subject detected in the previous frame, greatly increasing detection speed.

Second, a sitting-posture monitoring method based on face skin-color statistics is proposed. Left, middle, and right skin-color discriminant regions are planned from the detected face box; comparing the skin color in these three regions against the correct posture decides whether the subject leans left or right, and comparing the skin area within the face box against the correct posture decides leaning forward or backward. Experiments show that, provided the background is not skin-colored, the method achieves 100% accuracy for left/right and 97.3% for forward/backward.

For monitoring a driver's improper head state and fatigue, three discrimination methods based on active infrared video are proposed: (1) head-state discrimination based on matching regional key feature points, which analyzes the positions of the three best-matched pairs of SURF feature points between a correct-posture template and the live monitoring region to judge whether the current head state is correct; (2) yawn discrimination based on fused edge statistics over the mouth's activity region — statistics show the mouth moves almost entirely within the lower-middle part of the face detection box, so a mouth activity region is planned on the face box and the vertical projection ratio of the fused Prewitt and Canny edges within it measures the degree of mouth opening, from which yawning is judged; (3) closed-eye discrimination based on eye and pupil detection, which plans an approximate eye region from the face box to localize the eyes more reliably, greatly reducing the false detections of a global search while improving efficiency and accuracy; the detected eye region is then suitably enlarged and Hough circle detection is applied, the presence or absence of a circle indicating whether the eye is open or closed. Experiments on the completed "simulated head-state and fatigue monitoring system for driver assistance" show accuracies of 98.9%, 100%, and 97.8% for the head-state, yawning, and eye-fatigue modules, respectively.
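The scale-adaptive detection window described in the abstract could be sketched as follows. This is a minimal illustration, not the thesis's implementation: the function name, the ±30% slack, and the absolute bounds are all assumptions; the thesis only states that the window's size range is adjusted from the subject's size in the previous frame.

```python
def window_range(prev_size, slack=0.3, lo=24, hi=512):
    # Bound the detector's sliding-window sizes around the face size
    # found in the previous frame, so scales that cannot match the
    # tracked subject are skipped entirely (hence the speed-up).
    return (max(lo, round(prev_size * (1 - slack))),
            min(hi, round(prev_size * (1 + slack))))
```

A detector would then only scan window sizes inside the returned `(min, max)` interval instead of the full pyramid.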
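The skin-color-statistics posture check can likewise be sketched. The exact strip layout, the calibration format, and both thresholds (`side_gain`, `area_tol`) are assumptions for illustration; the thesis only states that three discriminant regions are planned from the face box and compared against the correct-posture statistics.

```python
def plan_regions(face_box, margin_ratio=0.5):
    # Derive left/middle/right skin-test regions from the detected face box
    # (x, y, w, h); the side strips flank the face frame.
    x, y, w, h = face_box
    m = int(w * margin_ratio)
    return {"left": (x - m, y, m, h),
            "mid": (x, y, w, h),
            "right": (x + w, y, m, h)}

def skin_count(mask, region):
    # Count skin pixels (1s in a binary skin mask) inside a region,
    # clipped to the image bounds.
    x, y, w, h = region
    rows, cols = len(mask), len(mask[0])
    return sum(mask[r][c]
               for r in range(max(0, y), min(rows, y + h))
               for c in range(max(0, x), min(cols, x + w)))

def classify_posture(mask, face_box, calib, side_gain=1.5, area_tol=0.2):
    # calib holds the region counts recorded under the correct posture.
    reg = plan_regions(face_box)
    left = skin_count(mask, reg["left"])
    right = skin_count(mask, reg["right"])
    area = skin_count(mask, reg["mid"])
    if left > calib["left"] * side_gain + 1:
        lateral = "left"
    elif right > calib["right"] * side_gain + 1:
        lateral = "right"
    else:
        lateral = "center"
    if area > calib["area"] * (1 + area_tol):
        depth = "forward"      # face skin area grows when leaning in
    elif area < calib["area"] * (1 - area_tol):
        depth = "backward"     # and shrinks when leaning back
    else:
        depth = "normal"
    return lateral, depth
```

The 100% / 97.3% accuracies reported above presuppose a non-skin-colored background, which this pixel-counting scheme also depends on.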
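The head-state judgment from three matched SURF pairs might reduce to geometry like the sketch below. The SURF detection and matching steps are omitted (the three best matches are assumed to be given as coordinate pairs), and both tolerance values are assumptions, not the thesis's parameters.

```python
import math

def head_state(template_pts, live_pts, shift_tol=15.0, rot_tol=10.0):
    # template_pts / live_pts: the three best-matched keypoints, as (x, y)
    # in the correct-posture template and in the live monitoring region.
    n = len(template_pts)
    # Mean displacement of the matched points (translation of the head).
    dx = sum(l[0] - t[0] for t, l in zip(template_pts, live_pts)) / n
    dy = sum(l[1] - t[1] for t, l in zip(template_pts, live_pts)) / n

    def angle(a, b):
        return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

    # Relative rotation of the first-to-last point vector (head tilt).
    rot = angle(live_pts[0], live_pts[-1]) - angle(template_pts[0], template_pts[-1])
    if math.hypot(dx, dy) > shift_tol:
        return "displaced"   # head moved away from the reference pose
    if abs(rot) > rot_tol:
        return "tilted"      # head rotated relative to the template
    return "correct"
```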
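The yawn criterion — vertical projection of fused edges in the mouth activity region, sustained over time — can be sketched as below. The Prewitt/Canny fusion itself is not shown (a binary edge map is assumed as input), and both thresholds are illustrative assumptions.

```python
def mouth_openness(edge_map):
    # Fraction of rows in the mouth region that contain fused edge pixels
    # (e.g. the OR of Prewitt and Canny edge maps): a wide-open mouth
    # spreads edges over a taller span of rows.
    return sum(1 for row in edge_map if any(row)) / len(edge_map)

def is_yawning(openness_seq, open_thresh=0.6, min_frames=10):
    # Flag a yawn when the mouth stays open beyond open_thresh for at
    # least min_frames consecutive frames.
    run = best = 0
    for o in openness_seq:
        run = run + 1 if o > open_thresh else 0
        best = max(best, run)
    return best >= min_frames
```

Requiring a sustained run distinguishes a yawn from speech, where the opening ratio oscillates frame to frame.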
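Finally, the open/closed-eye decision via Hough circles can be illustrated with a toy fixed-radius transform over the enlarged eye region's edge pixels. This is a didactic sketch: a real system would use an optimized implementation over a range of radii (OpenCV's `HoughCircles`, for instance), and the vote threshold here is an assumption.

```python
import math

def hough_circle_present(edge_pts, radius, acc_thresh=8):
    # Fixed-radius Hough transform: every edge pixel votes for candidate
    # circle centres lying at `radius` from it. A strong accumulator peak
    # means a roughly circular pupil is visible, i.e. the eye is open;
    # no peak means the eye is closed.
    acc = {}
    for x, y in edge_pts:
        for deg in range(0, 360, 10):
            t = math.radians(deg)
            centre = (round(x - radius * math.cos(t)),
                      round(y - radius * math.sin(t)))
            acc[centre] = acc.get(centre, 0) + 1
    return max(acc.values(), default=0) >= acc_thresh
```

Edge pixels on a true circle all vote for its centre, so the peak count is near the number of edge points; edge pixels from eyelid lines scatter their votes and stay below the threshold.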
【Degree-granting institution】: 西南科技大學 (Southwest University of Science and Technology)
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41



