Research on Key Technologies of Machine-Vision-Based Human State Monitoring
[Abstract]: The incidence of myopia in China is rising year by year, and poor sitting posture is one cause of juvenile myopia. Likewise, many road traffic accidents are caused by driver distraction and fatigue. Addressing these two problems, this thesis studies machine-vision-based monitoring of sitting posture and fatigue. Building on face detection, four monitoring methods are proposed for the two scenarios: sitting-posture discrimination based on face skin-color statistics, head-state identification based on matching key feature points of a region, yawning discrimination based on fused edge statistics of the active mouth area, and closed-eye discrimination based on eye and pupil detection. A monitoring system for poor sitting behavior and a simulation system for head-state and fatigue monitoring in assisted driving are designed. First, the face-detection method based on RGB color video is improved for the sitting-behavior monitoring system: skin-color-based face detection effectively reduces the false-detection rate, and a single-target face-detection method with window-scale-adaptive detection is proposed, which adaptively adjusts the range of the detection window using the size of the single target detected in the previous frame, greatly improving detection speed. Second, a sitting-posture monitoring method based on face skin-color statistics is proposed, in which left, middle, and right skin-color discrimination regions are laid out according to the detected face box.
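The skin-color statistic at the heart of the sitting-posture method can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the YCrCb thresholds (Cr in [133, 173], Cb in [77, 127]) are common literature values, since the abstract does not give the exact ranges, and `skin_ratio` is an assumed helper name.

```python
import numpy as np

# Common YCrCb skin-color thresholds from the literature; the thesis's
# exact ranges are not stated in the abstract.
CR_RANGE = (133, 173)
CB_RANGE = (77, 127)

def skin_mask(rgb):
    """Boolean mask of skin-colored pixels for an RGB uint8 image."""
    f = rgb.astype(np.float32)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    # ITU-R BT.601 RGB -> Cr/Cb conversion.
    cr = 128.0 + 0.5 * r - 0.4187 * g - 0.0813 * b
    cb = 128.0 - 0.1687 * r - 0.3313 * g + 0.5 * b
    return ((cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]) &
            (cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]))

def skin_ratio(rgb, box):
    """Fraction of skin pixels inside an (x, y, w, h) region."""
    x, y, w, h = box
    return float(skin_mask(rgb[y:y + h, x:x + w]).mean())
```

Comparing `skin_ratio` over the left/middle/right regions against their values under a calibrated correct posture gives the left/right lean decision; comparing the total skin area inside the face box gives the forward/backward decision.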
Leaning to the left or right is then judged by comparing the skin color of the three regions against the correct posture, and leaning forward or backward is judged by statistically comparing the skin-color area inside the face box under the current and the correct posture. Experimental results show that, with no background skin color, the accuracy of this method is 100% for left/right and 97.3% for forward/backward. For monitoring the driver's poor head state and fatigue, three discrimination methods based on active infrared video are proposed: (1) A head-state discrimination method based on matching key feature points of a region: the positions of the three best-matched pairs of SURF feature points between the real-time monitored region and a template under correct posture are analyzed to decide whether the current head state is correct. (2) A yawning discrimination method based on fused edge statistics of the active mouth region: statistics show that the mouth moves almost entirely within the lower part of the face-detection box, so a mouth activity area is laid out on the face box; the longitudinal projection ratios of Prewitt and Canny edges are then counted to measure the degree of mouth opening and closing, from which yawning is judged. (3) A closed-eye discrimination method based on eye and pupil detection: the eye region is mapped out from the face box, which localizes the eyes better, greatly reduces the false detections of global detection, and improves detection efficiency and accuracy; the detected eye region is then suitably enlarged and Hough circle detection is applied, so that the open/closed state of the eyes is judged by whether a Hough circle (the pupil) is found.
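The mouth-openness cue in (2) can be sketched minimally as follows, assuming a cropped grayscale mouth region. Only the Prewitt half of the thesis's Prewitt/Canny fusion is shown, and the edge threshold of 50 is an assumed value.

```python
import numpy as np

def prewitt_edges(gray, thresh=50.0):
    """Thresholded horizontal-edge response of the 3x3 Prewitt kernel
    [[-1,-1,-1],[0,0,0],[1,1,1]]: row below minus row above, summed
    over a 3-pixel-wide window."""
    g = gray.astype(np.float32)
    rowdiff = g[2:, :] - g[:-2, :]
    resp = rowdiff[:, :-2] + rowdiff[:, 1:-1] + rowdiff[:, 2:]
    edges = np.zeros(g.shape, dtype=bool)
    edges[1:-1, 1:-1] = np.abs(resp) > thresh
    return edges

def openness(gray):
    """Longitudinal-projection ratio: the fraction of rows in the mouth
    region that contain at least one edge pixel. A wide-open mouth
    spreads lip/teeth edges over more rows than a closed one."""
    edges = prewitt_edges(gray)
    return float(edges.any(axis=1).mean())
```

Yawning would then be flagged when `openness` stays above a calibrated threshold for several consecutive frames, though the exact decision rule is not given in the abstract.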
Experimental results show that the accuracy of the head-state discrimination, yawning discrimination, and eye-fatigue identification modules in the "Simulation System for Head-State and Fatigue Monitoring in Assisted Driving" is 98.9%, 100%, and 97.8%, respectively.
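The pupil test in (3) reduces to asking whether a circle accumulates enough Hough votes inside the enlarged eye region. Below is a crude accumulator-based sketch on a binary edge mask; the vote threshold of half the circumference is an assumed value, and a real system would more likely use a library routine such as OpenCV's `cv2.HoughCircles`.

```python
import numpy as np

def hough_circle_present(edge_mask, r_min, r_max, vote_frac=0.5):
    """Return True if any circle with radius in [r_min, r_max] gathers
    at least vote_frac * circumference votes from the edge pixels.
    Eyes open <-> a pupil circle is found; eyes closed <-> none is."""
    h, w = edge_mask.shape
    ys, xs = np.nonzero(edge_mask)
    if len(ys) == 0:
        return False
    thetas = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
    sin_t, cos_t = np.sin(thetas), np.cos(thetas)
    for r in range(r_min, r_max + 1):
        acc = np.zeros((h, w), dtype=np.int32)
        for y, x in zip(ys, xs):
            # Each edge pixel votes for every center that would place it
            # on a circle of radius r.
            cy = np.round(y - r * sin_t).astype(int)
            cx = np.round(x - r * cos_t).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (cy[ok], cx[ok]), 1)
        if acc.max() >= vote_frac * 2.0 * np.pi * r:
            return True
    return False
```

A blink/fatigue monitor would run this per frame on the edge map of the eye region and treat a long run of frames with no circle as closed eyes.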
【Degree-granting institution】: Southwest University of Science and Technology
【Degree level】: Master
【Year conferred】: 2017
【CLC number】: TP391.41