Experienced Data Engineer with Strong PySpark, Celery, Redis, Docker, and Kubernetes Skills

Posted: 7/28/2024

Role: Data Engineer

Location: Latvia

Remote: Discuss with client

Project Description

Seeking an experienced Data Engineer with a strong background in PySpark, Celery, Redis, Docker, and Kubernetes to explain and share source code from previous projects involving robust data pipelines and infrastructure. The focus is on optimizing data ingestion, transformation, and storage processes, which requires a deep understanding of distributed computing frameworks such as PySpark, as well as containerization technologies such as Docker and Kubernetes. Familiarity with Celery and Redis is a must. The project offers a budget of $20 and falls under the Data Engineering category.

Skills

Amazon Web Services, DevOps, Docker, Kubernetes, Python