Junior Data Engineer
Electrolux AB / Data jobs / Stockholm
Note that the last day to apply has passed.
ELECTROLUX Group IT Data Science presents
Bring your innovative ideas around Big Data
For us, going to work every day has an even greater purpose than putting the latest product or technology on the market. It's about improving the everyday lives of millions. By staying humble and open to new ideas, we can push the boundaries of cooking, cleaning and wellbeing at home. But to keep doing so, we need more people who want to innovate and re-imagine what life at home can be.
Junior Data Engineer
As an enterprise producing millions of appliances each year and employing more than 50,000 people, Electrolux already generates massive amounts of data in hundreds of systems across the globe, from sales to sensors in factory robots and smart home appliances. And with an explosion of IoT devices just around the corner, these data volumes are expected to grow by an order of magnitude or more in the near future.
By using state-of-the-art data science platforms and methodology, our team supports business functions across the entire company and helps them turn raw data into valuable insights and actions.
The Junior Data Engineer works as part of the Electrolux Data Science team, a key enabler of the company's digital agenda and the group's center of excellence for big data, advanced analytics and machine learning.
A REGULAR DAY AT WORK
You will
• Drive the platform engineering role for the Electrolux GDS data platform in the cloud and on-premises.
• Deliver data engineering and data science solutions to the business units.
• Collaborate with the data scientists to engineer full-stack pipelines for Data Science and Analytics algorithms.
• Collaborate with the service managers on existing and new business use cases and the associated external communications and stakeholder management.
• Collaborate in engineering roles across the verticals, including platform, data assets, and machine learning.
• Support the team's effort in working with other teams in order to identify, define, and implement secure and reliable integration patterns to connect to the GDS data platforms.
• Identify and contribute to the best practices and design-patterns in the data related engineering, asset management, and ML stack.
• Take ownership of implementing automated common workflows for data/ETL (onboarding, ingestion).
• Collaborate with the DevOps engineers to operationalize automated deployment workflows (CI/CD pipelines for Data Science/Analytics/Big Data).
• Identify, design, and maintain data assets.
• Identify, design, and maintain the tools and technologies (vendor-specific, in-house, open-source) used by the team.
• Work within established project management frameworks, including Scrum (Agile/Kanban) and the Atlassian stack (Jira, Confluence, Bitbucket, SourceTree, Trello, etc.)
YOU
• Proactive. You are self-driven and results-oriented with a positive outlook. You don't just solve the task at hand; you think ahead, identify operational issues, and drive them to resolution.
• Results-oriented. You have a strong customer focus and the ability to develop and sustain a network across cross-functional teams.
• A smart risk taker. You know when to, and when not to challenge conventions.
• Pragmatic. Your solutions are always realistic and doable.
• Flexible. You react quickly and positively to a changing environment.
EDUCATION AND EXPERIENCE
• 0-2 years of experience with big data software/tools such as Hadoop, Spark, and Kafka.
• Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
• Experience with data and model pipeline and workflow management tools: Azkaban, Luigi, Airflow, Dataiku, etc.
• Experience with stream-processing systems: Storm, Spark-Streaming, etc.
• Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
• Advanced SQL knowledge and experience working with relational databases and query authoring, as well as working familiarity with a variety of other databases/data sources.
• Experience building and optimizing 'big data' data pipelines, architectures and data sets.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and seek opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• BS, MS or PhD degree in Computer Science, Informatics, Information Systems or another related field.
• Excellent English knowledge, written and spoken.
Publication date: 2018-11-19. How to apply: the last day to apply is 2018-12-04.
Company: Electrolux AB
Office address: Sankt Göransgatan 143
Job number: 4465677