Workshop "Artificial Intelligence and Big Data"

Université de Cergy-Pontoise, St-Martin 2 site, amphithéâtre des colloques.

Part One: Artificial Intelligence

11h00-12h00 : Perception and decision-making for self-driving cars.
Guillaume Bresson – Institut VEDECOM

In this talk, we will discuss the essential functions a vehicle needs to operate autonomously in its environment. We will try to highlight the complexity of building a global architecture and of choosing the right sensors. We will also present how some state-of-the-art perception algorithms work and what challenges lie ahead. A quick panorama of decision-making for autonomous driving will also be given.

13h30-14h30 : Video analytics & deep learning - an overview.
Jean-Emmanuel Haugeard – Thales Services

With the growth of image and video databases, it is becoming difficult for operators to process all of the data. In this context of operator assistance (security operators, medical operators, ...), the need for intelligent systems is becoming critical. Can new technologies based on neural networks meet this need? This presentation is an introduction to neural networks for video analysis. Why the renewed interest in neural networks? The presentation will briefly describe deep learning and cover the problems and results associated with neural networks in video analysis through business applications ("smart camera": scene description; video surveillance: person recognition; medical imaging: segmentation).

14h30-15h30 : Artificial Intelligence, between state of the art and actual use.
Ken Prepin – Rakuten

AI has recently become one of the most trending topics in business. AI raises lots of expectations, often idealized and comparable to some magic solution to any problem. The result is that everyone thinks about it, attempts to have it, and wants to communicate about having it.
What do companies really expect AI to solve for them? How is AI actually used, and what does it actually solve? How do innovation labs work on UX and AI to ensure user adoption?
We will try to give some insights on these questions, based on the current startup offering and on the internal transformation strategies of established companies.

Part Two: Big Data

15h45-16h30 : Data versioning with HBase & Lucene.
Jean-François Boeuf – Orchestra Networks

Data versioning is, and always has been, a challenging problem in data management. Long-lived & ACID transactions, high-availability systems, OLTP workloads, and temporal databases, all rely heavily on data versioning.
Multi-version and temporal databases received a lot of attention in the early 90s, and research and technology have advanced significantly since then. Nowadays, in the era of Big Data, we have at our disposal many systems capable of handling many GBs of data in a few seconds. How can 20+ years of research, combined with state-of-the-art data management systems, be used to build a powerful multi-versioned database?
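To illustrate the core idea behind data versioning (not the speaker's actual HBase/Lucene design), here is a minimal in-memory sketch of a multi-versioned key-value store: every write is retained under a monotonically increasing version number, similar in spirit to HBase's per-cell timestamps, so any past state remains queryable. All names (`VersionedStore`, `put`, `get`) are hypothetical.

```python
from bisect import bisect_right

class VersionedStore:
    """Illustrative multi-version key-value store: writes never overwrite,
    they append (version, value) pairs, so past states stay queryable."""

    def __init__(self):
        self._data = {}     # key -> list of (version, value), sorted by version
        self._version = 0   # global monotonically increasing version counter

    def put(self, key, value):
        """Record a new value for `key`; returns the version of this write."""
        self._version += 1
        self._data.setdefault(key, []).append((self._version, value))
        return self._version

    def get(self, key, version=None):
        """Return the value of `key` as of `version` (latest if None)."""
        history = self._data.get(key, [])
        if not history:
            return None
        if version is None:
            return history[-1][1]
        # binary-search for the last write with version <= requested version
        idx = bisect_right([v for v, _ in history], version)
        return history[idx - 1][1] if idx else None

store = VersionedStore()
v1 = store.put("price", 10)
v2 = store.put("price", 12)
print(store.get("price"))      # latest value: 12
print(store.get("price", v1))  # value as of version v1: 10
```

A production system would additionally need compaction of old versions and secondary indexing (the role Lucene plays in the talk's title), which this sketch omits.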

16h30-17h15 : Data management vs. data journalism and fact checking: goals, tools and architectures.
Ioana Manolescu – INRIA SACLAY

The tremendous value of Big Data has of late also been noticed by the media, and the term "data journalism" has been coined to refer to journalistic work inspired by digital data sources. A particularly popular and active area of data journalism is fact-checking. The term was born in the journalist community and referred to the process of verifying and ensuring the accuracy of published media content; since 2012, however, it has increasingly focused on the analysis of politics, economy, science, and news content shared in any form, but first and foremost on the Web (social and otherwise).
These trends have been noticed by computer scientists in industry and academia. Thus, a very lively area of digital content management research has taken up these problems and works to propose foundations (models) and algorithms, and to implement them in concrete tools.
In this talk, I will introduce journalistic fact-checking, show which areas of digital content management research, in particular those relying on the Web, can be leveraged to help fact-checking, and give pointers to surveys of efforts in this area. I will also discuss ongoing research in this direction, performed within the ContentCheck ANR project.

17h15-18h00 : Data Engineers: What do they do? What do they eat? How do they reproduce?
Andre Fonseca - Deezer

Exploring data and making sense of our users' behavior is a top priority at Deezer, and a key asset in positioning the company among the leading music streaming services on the market. In this context, data scientists need to explore and visualize data in a simple and efficient fashion, but also to deploy their code to production in a resilient way. This is a hard challenge for many companies, where data scientists' expertise is focused on understanding the data and on building mathematical models that allow machines to process it automatically. This is where data engineers often enable data scientists to go further, by providing methods to exploit, scale, and secure production-grade data pipelines.
In this talk, we will discuss how this is done at Deezer, with examples of how these two distinct roles work together to achieve better results when dealing with huge amounts of information.