In this role you'll be responsible for building Spark- and Hadoop-based systems that power the data pipeline behind multiple key features of our large-scale data platform.
You'll develop algorithms to match and conflate data and to identify anomalies, as well as improve the simplicity, scale, and efficiency of the client's systems.
Responsibilities
Identify requirements for new features and propose designs and solutions
Implement features in a suitable programming language
Take ownership of delivering features and improvements on time
Must-have Qualifications
Strong programming and design skills
Deep knowledge of and extensive experience with Scala
Professional experience working with Spark
Excellent analytical and problem-solving skills
Excellent oral and written communication skills (English)
Extra Merit Qualifications
Experience with Kafka
Ability to design and implement APIs and REST services
Experience with one or more application frameworks like Akka, Play (Scala), Spring
Experience in big data ETL and data streaming
Experience with NoSQL databases
Familiarity with OpenJUMP or similar tools
Start: As soon as the right candidate is found
Duration: 6 months +
Work location: Malmö, Sweden
Requirements: Min. 5 years of professional IT experience.
Job type: Freelance