We are seeking a highly motivated and experienced Data Engineer to join our team and help build a new large-scale infrastructure for visual data processing and machine learning. This is a unique challenge: you will be in charge of researching, designing, and developing complex ETL processes, where multi-source, highly variable data is the name of the game.


Responsibilities:
  • Design and implement the data pipeline from scratch
  • Build monitoring tools to support data integrity and visualization
  • Work closely with our deep learning researchers and engineers
  • Process huge image and video data sets of varying quality from various data sources
  • Own data integrity and quality


Requirements:
    • Bachelor’s degree in computer science or engineering
    • At least 4 years of experience in software development (Python, Go, Scala, or Node.js)
    • At least 2 years of experience designing and implementing data pipelines
    • Experience developing data solutions such as an enterprise data lake or data warehouse
    • Experience developing complex ETL processes with tools such as Spark, Hive, Luigi, or Hadoop
    • Experience with distributed systems (microservices, containers, parallelism) – an advantage

Get in touch