Data Engineer with Apache Kafka, PySpark, and Tableau

Posted

7/14/2024

Role

Data Engineer

Location

Switzerland

Remote

Discuss with client

Project Description

A client is seeking an experienced data engineer with expertise in AWS EC2 configuration, Apache Kafka, Spark/PySpark, Tableau, and API integration to build an AWS-based data processing pipeline for analyzing historical gold prices and US inflation data. The work involves configuring an AWS EC2 instance, collecting and processing historical and real-time data, performing inflation-adjustment calculations, creating dynamic visualizations, and ensuring continuous API updates. Deliverables include a functional data pipeline, a Kafka streaming setup, Spark/PySpark scripts, a Tableau dashboard with dynamic updates, API integrations, and detailed documentation. The project timeline is 3-7 days, with a budget of $200. The client welcomes proposals and inquiries about the project scope, particularly regarding the inflation-adjustment methodology and visualization requirements.
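The posting does not specify the inflation-adjustment methodology, so proposals may want to state one explicitly. A common convention is to express nominal gold prices in base-year dollars by rescaling with the Consumer Price Index (CPI). A minimal sketch in Python, assuming CPI values are already available per observation date (the function name and CPI figures below are hypothetical, not from the posting):

```python
def adjust_for_inflation(nominal_price, cpi_at_date, cpi_base):
    """Convert a nominal price to real (inflation-adjusted) terms.

    real_price = nominal_price * (CPI in the base year / CPI at the
    observation date), so all prices are comparable in base-year dollars.
    """
    return nominal_price * cpi_base / cpi_at_date

# Hypothetical example: $35.00/oz observed when CPI was 29.6,
# re-expressed in dollars of a base year where CPI is 296.0.
real_price = adjust_for_inflation(35.00, 29.6, 296.0)  # → 350.0
```

In a PySpark job this same ratio would typically be applied as a column expression after joining the gold-price stream with a CPI reference table on date.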

Skills

Apache Hadoop, Apache Kafka, Apache NiFi, Apache Spark, ETL Pipeline