This publication was issued for the 25th International Conference on MultiMedia Modeling, held in Thessaloniki, Greece, on 8-11 January 2019, and was written by Alexander Schindler et al. from AIT, Austria.
It has been published in the refereed proceedings of the conference under DOI 10.1007/978-3-030-05716-9.

The forensic investigation of a terrorist attack poses a huge challenge to the investigative authorities, as several thousand hours of video footage need to be sifted. The platform presented in this paper has been developed to assist law enforcement agencies (LEAs) in identifying suspects and securing evidence. This platform integrates analytical modules on a scalable architecture. Videos are analyzed according to their acoustic and visual content. Specifically, audio event detection is applied to index the content according to attack-specific acoustic concepts. Audio similarity search is applied to identify similar video sequences recorded from different perspectives. Visual object detection is applied to index the content according to relevant concepts. This index of visual and acoustic concepts makes it possible to quickly start an investigation, follow traits and investigate hints from eyewitnesses.

This publication was issued for the 25th International Conference on MultiMedia Modeling, held in Thessaloniki, Greece, on 8-11 January 2019, and was written by P. Guyot et al. from IRIT, Toulouse.
It has been published in the refereed proceedings of the conference under DOI 10.1007/978-3-030-05710-7_33.


The audio and video parts of an audiovisual document interact to produce an audiovisual, or multi-modal, perception. Yet, automatic analyses of these documents are usually based on separate audio and video annotations. With respect to the audiovisual content, these annotations may be incomplete or irrelevant. Besides, the expanding possibilities of creating audiovisual documents lead us to consider different kinds of content, including videos filmed in uncontrolled conditions (i.e., field recordings) and scenes filmed from different points of view (multi-view).
In this paper we propose an original procedure to produce manual annotations in different contexts, including multi-modal and multi-view documents. This procedure, based on using both audio and video annotations, ensures consistency when considering audio or video alone, and additionally provides audiovisual information at a richer level.
Finally, different applications are made possible when considering such annotated data. In particular, we present an example application in a network of recordings in which our annotations allow multi-source retrieval using mono-modal or multi-modal queries.

Publication provided for the European Intelligence and Security Informatics Conference (EISIC) 2018, held on 23-25 October 2018 in Sweden, and written by D. Schreiber, M. Boyer, E. Broneder, A. Opitz and S. Veigl from AIT.
The EISIC conference proceedings will be published under DOI 10.1109/EISIC.2018.00024.

Video recordings have become a major resource for legal investigations after crimes and terrorist acts. However, no mature video investigation tools are currently available and trusted by LEAs. The project VICTORIA addresses this need and aims to deliver a video analysis platform that will accelerate video analysis tasks by a factor of 15 to 100 (depending on the use case). In this paper we describe the concept and the work in progress by AIT GmbH within the project, namely the development of a state-of-the-art tool for generic object detection and tracking in videos. We develop a detection, classification and tracking tool, based on deep convolutional and recurrent neural networks, trained on a large number of object classes, and optimized for the project context. Tracking is extended to the multi-class, multi-target case. These generic object and motion analytics are integrated with a novel framework developed by AIT, denoted Connected Vision, which provides a modular and service-oriented (scalable) approach that allows computer vision tasks to be processed in a distributed manner. We report encouraging intermediate results in terms of accuracy and performance.

Publication issued for the "BDVA 2018" Big Data Visual and Immersive Analytics Symposium, 17-19/10/2018, Germany, written by Niklas Weiler et al. from the University of Konstanz, Germany.
Submitted for inclusion to IEEE Xplore.

During criminal investigations, every second saved can be valuable to catch a suspect or to prevent further damage. However, sometimes the amount of evidence that needs to be investigated is so large that it cannot be processed fast enough. Especially after incidents in public spaces, law enforcement agencies receive a lot of video and image material from individuals and surveillance cameras. Currently, all these videos are viewed and annotated manually by criminal investigators. The goal of our tool is to make this process faster by allowing investigators to watch a combination of several videos at the same time and giving them a common spatial and temporal reference.

Having Yes, Using No? About the new legal regime for biometric data, Computer Law and Security Review, Volume 34, Issue 3, June 2018, Pages 523-538, DOI: 10.1016/j.clsr.2017.11.004 - E.J. Kindt, KU Leuven – CiTiP, Leuven, Belgium.

The rise of biometric data use in personal consumer objects and governmental (surveillance) applications is irreversible. This article analyses the latest attempt by the General Data Protection Regulation (EU) 2016/679 and the Directive (EU) 2016/680 to regulate biometric data use in the European Union....

"How Machine Learning Generates Unfair Inequalities and How Data Protection Instruments May Help in Mitigating Them" was written by Laurens Naudts, KU Leuven, CiTiP. It is a chapter of the Data Protection and Privacy book issued after the eleventh annual International Conference on Computers, Privacy, and Data Protection, CPDP 2018, held in Brussels in January 2018.
This volume offers conceptual analyses, highlights issues, proposes solutions, and discusses practices regarding privacy and data protection.
The book is available at the following link.

Publication issued for the "Data for Policy" event, 06-07/09/2017, London, 18 August 2017, by Laurens Naudts, KU Leuven, imec-CiTiP (Centre for IT & IP Law).

Differentiation is often intrinsic to the functioning of algorithms. Within large data sets, 'differentiating grounds', such as correlations or patterns, are found, which in turn can be applied by decision-makers to distinguish between individuals or groups of individuals. As the use of algorithms becomes more widespread, the chance that algorithmic forms of differentiation result in unfair outcomes increases. Intuitively, certain (random) algorithmic classification acts, and the decisions that are based on them, seem to run counter to the fundamental notion of equality....