Portime II - Robust and Reliable Vision (POse determination in Real-Time In Manufacturing Environments)
Disciplines
Electrical Engineering, Electronics, Information Engineering (70%); Computer Sciences (30%)
Keywords
- INTELLIGENT SENSOR
- POSE DETERMINATION
- ROBUST IMAGE PROCESSING
- RELIABILITY
- SENSOR COUPLING FOR ROBOTS
The objective of PORTIME II is to develop robust and reliable methods to determine the pose (= position and orientation) of objects from one or a series of visual sensor images. This technique makes the flexible control of mechanisms and robots possible and opens up new applications. PORTIME II is the continuation of the fundamental research work of the PORTIME project. Within PORTIME (Pose Determination in Real-Time in Manufacturing Environments) a modular concept was developed that integrates the key functions of object detection, feature tracking, and pose determination. The result is an intelligent sensor system (ISS) that consists of a fully controlled (zoom, focus, iris) colour camera and the visual processing functions needed to determine the pose. The output of the ISS is the control signals used to steer a mechanism or robot at frame rate (video rate, 40 ms). The status of the work and the demonstrations confirm that the modular, integrating concept is capable of attaining the goal of correcting robot motions with on-line visual feedback. At present, however, widespread exploitation of this technique is limited to restricted environments. The restrictions are caused by the low robustness and reliability of the image processing methods. These are precisely the issues that will be solved in the PORTIME II project. First steps towards a solution of this problem have already been made: the robustness of line detection could be considerably improved by merging edge information with intensity and colour values and by a probabilistic edge-finding approach. Research into the excellent capabilities of the human vision system concludes that the integration of cues plays a fundamental role. The integration exploits the natural redundancy of cues to improve robustness as well as reliability.
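The cue-merging idea described above, combining edge information with intensity and colour values, can be sketched as follows. This is an illustrative sketch only, not the project's actual algorithm; the function name, the use of gradient magnitudes, and the equal default weights are assumptions.

```python
import numpy as np

def edge_probability(intensity, colour, w_int=0.5, w_col=0.5):
    """Merge an intensity cue and a colour cue into a per-pixel
    edge probability in [0, 1] (weights are illustrative)."""
    # Gradient magnitude of the intensity image (finite differences).
    gy, gx = np.gradient(intensity.astype(float))
    g_int = np.hypot(gx, gy)
    # Strongest gradient magnitude over the colour channels.
    g_col = np.zeros_like(g_int)
    for c in range(colour.shape[2]):
        cy, cx = np.gradient(colour[..., c].astype(float))
        g_col = np.maximum(g_col, np.hypot(cx, cy))
    # Normalise each cue to [0, 1] and take a weighted sum,
    # so that agreement between cues raises the edge probability.
    def norm(a):
        return a / a.max() if a.max() > 0 else a
    return w_int * norm(g_int) + w_col * norm(g_col)
```

Because both cues must respond for the combined score to be high, isolated noise in a single cue is suppressed, which is the redundancy argument made in the text.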
As a consequence, the objective of PORTIME II is to devise a fundamental theory for integrating a large number of different cues from images. The cues considered include, but are not limited to, edge, intensity, colour, region, motion, depth, disparity and stereo. The integration also encompasses information about the object itself, since this knowledge is available in the task description and can be exploited to further enhance robustness. The modular system already developed is the basis on which the integrating image processing system will be built. Besides improving robustness through the use of redundant cues, the confidence measures of the cues will be utilised to obtain a measure of the reliability of object detection and pose determination. The assembly of a light switch will be the final demonstration. The switch consists of parts of homogeneous material (this renders the problem more difficult, but it is common in today's products), and hence a successful assembly can demonstrate the capabilities of the method.
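The use of cue confidence measures to derive a reliability estimate can be illustrated with a minimal sketch. The function name, the scalar per-cue estimates, and the particular reliability formula (mean confidence penalised by weighted disagreement) are assumptions for illustration, not the project's theory.

```python
import numpy as np

def fuse_cues(estimates, confidences):
    """Confidence-weighted fusion of redundant cues for one quantity
    (e.g. one pose coordinate). Returns the fused value and a simple
    reliability measure in (0, 1]."""
    e = np.asarray(estimates, dtype=float)
    c = np.asarray(confidences, dtype=float)
    if c.sum() == 0:
        raise ValueError("no confident cue available")
    # Fused estimate: confidence-weighted mean of the cue estimates.
    fused = float((c * e).sum() / c.sum())
    # Weighted spread of the cues around the fused value; large
    # disagreement between cues lowers the reported reliability.
    spread = float(np.sqrt(((e - fused) ** 2 * c).sum() / c.sum()))
    reliability = float(c.mean() / (1.0 + spread))
    return fused, reliability
```

When all cues agree and are confident, reliability is high; disagreeing cues yield the same fused value but a lower reliability, which is the kind of signal the text proposes to attach to detection and pose results.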
The objective of this project is to develop robust and reliable methods to determine the pose of objects from one or a series of visual sensor images. This technique makes the flexible control of mechanisms and robots possible and opens up new applications. Vision-based control of motion is the technique of controlling the degrees of freedom (DOF) of a mechanism using signals generated from camera images. Most often the objective is to control a motion in three-dimensional space, that is, to control pose in 6 DOF (= position and orientation). The solution method comprises image acquisition, object detection, and pose determination. Detection is the most time-consuming step and is often replaced by a tracking method after an initial detection step. The time-consuming detection is executed only once and ensures that the right object is found. Tracking operates relatively and tries to re-find the initialised object in each control cycle. The goal of this project is to devise robust and reliable vision and control methods for these two critical tasks, detection and tracking. For the detection of the target, a multi-spectral classification method is used to evaluate pixel-based image cues such as colour, texture and range. Colour is a powerful first detection indicator. However, the colour perceived by a camera depends strongly on the actual lighting and environmental situation. Human vision exhibits colour constancy for an object under a wide range of illumination conditions. A similar ability is required if computer vision systems are to use colour cues to find objects in uncontrolled environments. This problem was solved by the following approach: the key idea is that the robot searching for the object carries a colour template. Before searching for the target, the robot calibrates its camera to the current illumination conditions by looking at this reference template.
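The template-based calibration step can be sketched with a diagonal (von Kries-style) colour correction: compare the template as observed under the current illumination with its known reference appearance, and derive per-channel gains. This is a minimal sketch under assumed names and a simple diagonal model, not necessarily the project's exact calibration method.

```python
import numpy as np

def calibrate(observed_template, reference_template):
    """Estimate per-channel gains from the colour template the robot
    carries: ratio of the template's known reference colour to its
    colour as observed under the current illumination."""
    obs = observed_template.reshape(-1, 3).mean(axis=0)
    ref = reference_template.reshape(-1, 3).mean(axis=0)
    return ref / obs  # one gain per RGB channel

def correct(image, gains):
    """Apply the gains so object colours approximate their appearance
    under the reference illumination (values assumed in [0, 1])."""
    return np.clip(image * gains, 0.0, 1.0)
```

After correction, a fixed colour classifier for the target can be reused across illumination changes, which is the colour-constancy behaviour the text describes.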
The results show that even under severe lighting conditions, such as blue or very focused light, neon or direct sunlight, this approach achieved correct target detection. After detection, the target is tracked using the software tool developed in the project, "Vision for Robotics" (V4R). V4R determines the object pose at a rate of up to 50 Hz. The pose signal is then used to control the active camera head or a robot. The main problem of the control structure is the time delay introduced by the vision system. To overcome this problem, tools have been investigated to predict the future state of the target. This approach uses different prediction strategies based on motion equations (Kalman filter and αβ-filter) and on a model of the mechanism (Model Predictive Control). The prediction filters predict the state of the target, while the controller calculates signals that move the mechanism towards the target using the predicted position of the mechanism. Good prediction quality of the filter is essential for a good reaction. The difference between the Kalman filter and the αβ-filter is that the Kalman filter can be adjusted to the state of the motion, whereas the αβ-filter operates independently of the actual state of the motion. The Kalman filter therefore shows a superior reaction to predictable motions (e.g. sinusoidal) and to noisy input. On the other hand, the Kalman filter has problems with sudden changes (e.g. the start of a ramp), where the αβ-filter provides better response characteristics. To combine these advantages, a prediction strategy was designed in which a supervisor (Prediction Monitor) detects sudden changes in the motion and switches between different types of models for the predictor. Using the information from the Prediction Monitor, it is also possible to influence the controller in such a way that a smooth approach after sudden changes is achieved.
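The αβ-filter mentioned above can be sketched in a few lines: a fixed-gain position/velocity estimator that predicts the target state one control cycle ahead to compensate the vision delay. The gain values and the 40 ms cycle time below are illustrative assumptions, not the project's tuned parameters.

```python
class AlphaBetaFilter:
    """Fixed-gain alpha-beta predictor for one pose coordinate.
    Operates independently of the motion state, which is why (as the
    text notes) it responds well to sudden changes such as ramps."""

    def __init__(self, alpha=0.85, beta=0.5, dt=0.04):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x = 0.0  # position estimate
        self.v = 0.0  # velocity estimate

    def update(self, z):
        """Fold in measurement z; return the position predicted one
        cycle ahead, to be handed to the motion controller."""
        x_pred = self.x + self.v * self.dt
        r = z - x_pred                      # innovation (residual)
        self.x = x_pred + self.alpha * r    # position correction
        self.v = self.v + (self.beta / self.dt) * r  # velocity correction
        return self.x + self.v * self.dt    # one-cycle-ahead prediction
```

For a constant-velocity target the filter converges with no steady-state lag, so the controller acts on where the target will be at the next cycle rather than where it was when the image was taken.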
- Technische Universität Wien - 100%
Research Output
- 212 Citations
- 4 Publications
2019
Title: The effects of forest cover and disturbance on torrential hazards: large-scale evidence from the Eastern Alps
DOI: 10.1088/1748-9326/ab4937
Type: Journal Article
Author: Sebald J
Journal: Environmental Research Letters
Pages: 114032
2001
Title: Robvision
DOI: 10.1109/mfi.2001.1013517
Type: Conference Proceeding
Author: Ponweiser W
Pages: 109-114
2020
Title: The influence of climate change and canopy disturbances on landslide susceptibility in headwater catchments
DOI: 10.1016/j.scitotenv.2020.140588
Type: Journal Article
Author: Scheidl C
Journal: Science of The Total Environment
Pages: 140588
2019
Title: What drives the future supply of regulating ecosystem services in a mountain forest landscape?
DOI: 10.1016/j.foreco.2019.03.047
Type: Journal Article
Author: Seidl R
Journal: Forest Ecology and Management
Pages: 37-47