What the job entails:
- Design and operate scalable solutions for processing, analyzing, and storing large volumes of data.
- Deliver insights and predictions to internal stakeholders.
- Collaborate with Machine Learning Engineers to bring machine learning applications into production with effective data pipelines and data models.
- Treat data as a product and make monitoring a foundation of architectural decisions.
What to bring in order to be successful:
- A cloud-first mentality, embracing the paradigm of cloud services and microservices.
- Extensive experience in building, scaling, and maintaining data pipelines.
- Experience with one or more big data frameworks such as Apache Beam (Dataflow), Spark, Flink, Kafka, Apache Pulsar, Presto, or Hive.
- Experience with SQL and NoSQL databases.
- Experience with Docker and Kubernetes.
- Experience with Dask, Prefect, or Kubeflow is a plus.
- Experience in implementing data warehouse solutions is a plus.
- Experience with GCP is a plus.
Benefits & Perks:
- The right hardware
- Office with a gym, sauna, and lunch
- Flexible holiday time arrangement
- Membership in one of the leading AI teams in the Netherlands