Jimi Wikman
Posted April 14, 2020

Description:
In this role you'll be responsible for building Spark- and Hadoop-based systems that power the data pipeline for multiple key features of our large data. You'll develop algorithms to match, conflate, and identify anomalies, as well as improve the simplicity, scale, and efficiency of the client's systems.

Responsibilities:
- Identify requirements of new features and propose design and solutions
- Implement features in a suitable programming language
- Take ownership of delivering features and improvements on time

Must-have Qualifications:
- Strong programming and design skills
- Deep knowledge of and extensive experience with Scala and/or Java
- Professional experience working with Spark
- Excellent analytical and problem-solving skills
- Excellent oral and written communication skills (English)

Extra Merit Qualifications:
- Kafka
- Ability to design and implement APIs and REST services
- Experience with one or more application frameworks such as Akka, Play (Scala), Spring
- Experience in big data ETL and data streaming
- NoSQL databases
- OpenJump etc.

Start: ASAP
Duration: 6 months+
Work location: Malmö, Sweden
Requirements: Min. 5 years of professional IT experience.
Job type: Freelance