Data Engineer PySpark

Sopra Steria

Views: 102

Updated: 15-05-2024

Location: Kanchipuram, Tamil Nadu

Category: IT - Software

Industry: IT Services & IT Consulting

Level: Entry level

Job type: Full-time


Job Details

Company Description

About Sopra Steria

Sopra Steria, a major European tech player recognised for its consulting, digital services and software development, helps its clients drive their digital transformation and obtain tangible and sustainable benefits. It provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a fully collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. With 50,000 employees in nearly 30 countries, the Group generated revenue of €5.1 billion in 2022.

Job Description

The world is how we shape it.

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. As a Data Engineer, you will collaborate closely with our Data Scientists to develop and deploy machine learning models. Proficiency in the skills listed below will be crucial for building and maintaining pipelines for training and inference datasets.

Responsibilities

  • Work in tandem with Data Scientists to design, develop, and implement machine learning pipelines.
  • Utilize PySpark for data processing, transformation, and preparation for model training (a minimal PySpark sketch follows this list).
  • Leverage AWS EMR and S3 for scalable and efficient data storage and processing.
  • Implement and manage ETL workflows using StreamSets for data ingestion and transformation.
  • Design and construct pipelines to deliver high-quality training and inference datasets.
  • Collaborate with cross-functional teams to ensure smooth deployment and real-time/near real-time inferencing capabilities.
  • Optimize and fine-tune pipelines for performance, scalability, and reliability.
  • Ensure IAM policies and permissions are appropriately configured for secure data access and management.
  • Implement Spark architecture and optimize Spark jobs for scalable data processing.
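For illustration, below is a minimal sketch of the kind of PySpark job this role involves: reading raw data from S3 and preparing an aggregated training dataset. The bucket paths, column names, and aggregation logic are placeholder assumptions, not part of the actual role.

# Minimal PySpark sketch: prepare a training dataset from raw S3 data.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("training-dataset-prep").getOrCreate()

# Read raw events from S3 (placeholder path).
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Clean the data and derive simple per-customer daily features.
features = (
    events
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("customer_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write the training dataset back to S3, partitioned for downstream jobs.
features.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/training_features/"
)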
Requirements

Mandatory

  • Proficiency in advanced SQL (window functions; see the example after this list), Spark architecture, PySpark or Scala with Spark, and Hadoop.
  • Proven expertise in designing and deploying data pipelines.
  • Strong problem-solving skills and ability to work effectively in a collaborative team environment.
  • Excellent communication skills and ability to translate technical concepts to non-technical stakeholders.
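As a brief example of the window-function proficiency asked for above, the following PySpark snippet keeps only the latest record per customer, a common deduplication pattern in data pipelines; the sample data and column names are invented for illustration.

# Window-function example in PySpark: keep the latest record per customer.
# The sample data and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("window-function-example").getOrCreate()

orders = spark.createDataFrame(
    [
        (1, "2024-05-01", 100.0),
        (1, "2024-05-03", 250.0),
        (2, "2024-05-02", 80.0),
    ],
    ["customer_id", "order_date", "amount"],
)

# Rank rows within each customer by most recent order date.
w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())

latest_orders = (
    orders
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

latest_orders.show()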
Desirable

  • Hands-on experience with Airflow, S3, and StreamSets or similar ETL tools (training can be provided locally); a minimal Airflow sketch follows this list.
  • Understanding of real-time or near real-time inferencing architectures.
  • Basic knowledge of Kafka, AWS IAM, AWS EMR, and Snowflake.
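As a rough sketch of how Airflow might orchestrate such a pipeline, here is a minimal DAG skeleton; the DAG id, schedule, and task callables are assumptions for illustration and would be replaced by the project's actual ingestion and training jobs.

# Minimal Airflow DAG sketch orchestrating hypothetical ingest and training steps.
# The DAG id, schedule, and task functions are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw_data():
    # Placeholder: e.g. trigger a StreamSets pipeline or land files in S3.
    print("ingesting raw data")


def build_training_dataset():
    # Placeholder: e.g. submit the PySpark feature-preparation job to EMR.
    print("building training dataset")


with DAG(
    dag_id="training_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw_data", python_callable=ingest_raw_data)
    build = PythonOperator(task_id="build_training_dataset", python_callable=build_training_dataset)

    ingest >> build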
Total Experience Expected: 6-8 years

Qualifications

BE

Additional Information

At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences.

All of our positions are open to people with disabilities.

Application deadline: 14-06-2024
