The ever-growing throughput and quality demands of modern manufacturing make it impossible to rely on the human eye for a rising number of quality assessment procedures. This development led to the introduction of computer vision algorithms, which are now widely used in fields such as the food industry or the production of printed circuit boards. Most of these approaches rely on handcrafted algorithms to recognize domain-specific faults. Developing such solutions requires domain knowledge, and their specialized nature makes them susceptible to changes in the production process or in the product specifications. The current evolution of the manufacturing domain towards the so-called Industry 4.0 demands more flexible solutions that can be introduced without extensive prior study of domain characteristics.

Deep Neural Networks (DNNs) provide this flexibility by automatically learning high-level features, and they reach state-of-the-art performance on various machine learning tasks such as object or speech recognition. Widespread application of this emerging technology in industry is mainly hampered by two factors: high hardware demands and the lacking explainability of classification decisions. The lacking explainability of DNN decisions is a consequence of autonomous feature learning: neural networks tend to rely heavily on features that are unintuitive to human perception, which makes it difficult to justify their decisions without profound knowledge of the technology. As a consequence, DNNs are currently unsuited for human-machine interaction, which is a major design principle of Industry 4.0.

The contribution of this project can be summarized as follows: 1) optimization of DNNs for computation on the edge, 2) improvement of the explainability of DNN decisions, 3) leveraging of transfer learning and a company-specific knowledge history to support whole product families, and 4) integration of the aforementioned concepts into the open source framework GreyCat.