Stock photo: The AIT is Austria's largest non-university research institution

AIT presentations at the International Symposium on Visual Computing (ISVC)

17.02.2015
AIT expert presented AIT image processing technologies at the ISVC conference in Las Vegas.

At the 10th International Symposium on Visual Computing (ISVC 2014) in Las Vegas, AIT image processing expert Reinhold Huber-Mörk gave three presentations from the Intelligent Vision Systems research field.

Das "International Symposium of Visual Computing (ISVC)" ist ein Forum für ForscherInnen, WissenschaftlerInnen und IngenieurInnen auf der ganzen Welt, um ihre neuesten Ideen, Forschungsergebnisse, Entwicklungen und Anwendungen im Bereich des Visual Computing zu präsentieren und auszutauschen. Es gehört zu den wichtigsten Bildverarbeitungsmessen weltweit.

The AIT publications based on these presentations increase the visibility of the AIT in the community and make an essential contribution to technology marketing.

Titel & Abstracts:

  • Convolutional Neural Networks for Steel Surface Defect Detection from Photometric Stereo Images:
    Convolutional neural networks (CNNs) have recently achieved impressive recognition rates in image classification tasks. In order to exploit those capabilities, we trained CNNs on a database of photometric stereo images of metal surface defects, i.e. rail defects. These defects are cavities in the rail surface and indicate further surface degradation, up to rail breakage. For safety reasons, defects have to be recognized early so that countermeasures can be taken in time. By means of differently colored light sources illuminating the rail surfaces from different, constant directions, these cavities are made visible in a photometric dark-field setup. So far, a model-based approach has been used for image classification, which expressed the expected reflection properties of surface defects in contrast to non-defects. In this work, we experimented with classical CNNs trained in a purely supervised manner and also explored the impact of regularization methods such as unsupervised layer-wise pre-training and training-set augmentation. The classical CNN already distinctly outperforms the model-based approach, and the regularization methods yield further improvements. (An illustrative classifier sketch follows this list.)
  • Depth Estimation within a Multi-Line-Scan Light-Field Framework:
    We present algorithms for depth estimation from light-field data acquired by a multi-line-scan image acquisition system. During image acquisition, a 3-D light field is generated over time, consisting of multiple views of the object observed from different viewing angles. This allows for the construction of so-called epipolar plane images (EPIs) and subsequent EPI-based depth estimation. We compare several approaches based on testing various slope hypotheses in the EPI domain, where slope can be directly related to depth. The methods considered for hypothesis assessment, which belong to a broader class of block-matching algorithms, are the modified sum of absolute differences (MSAD), normalized cross-correlation (NCC), the census transform (CT) and the modified census transform (MCT). The methods are compared with respect to their qualitative depth estimation results, presented for artificial and real-world data. (A slope-hypothesis sketch follows this list.)
  • Shape from Refocus:
    We present a method that exploits the computational refocusing capabilities of a light-field camera in order to obtain 3D shape information. We consider a light field constructed from the relative motion between a camera and the observed objects, i.e. points on the object surface are imaged under different angles along the direction of the motion trajectory. Computationally refocused images are handled by a shape-from-focus algorithm. A linear sharpness measure is shown to be computationally advantageous, as computational refocusing to a specific depth and the sharpness assessment of each refocused image can be reordered. We also present a view matching method which further stabilizes the suggested procedure when fused with the sharpness assessment. Results for real-world objects from an inspection task are presented. Comparison to ground-truth data showed average depth errors on the order of 1 mm for a depth range of 1 cm. (A shape-from-focus sketch follows this list.)

Link: