Data And Analytics_Big Data_44_3

EY

Views: 111

Last updated: 30-05-2024

Location: Trivandrum, Kerala

Category: IT - Software

Industry: Information Technology Services, Computer Software, Financial Services

Position: Entry level

Employment type: Full-time

Job description

At EY, you will have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we are counting on your unique voice and perspective to help EY become even better too. Join us and build an exceptional experience for yourself, and a better working world for all.

  • Strong understanding of and familiarity with all Hadoop ecosystem components and Hadoop administration fundamentals
  • Strong understanding of underlying Hadoop Architectural concepts and distributed computing paradigms
  • Experience in developing Hadoop APIs and MapReduce jobs for large-scale data processing.
  • Experience in architecting big data solutions, with a proven track record of driving business success
  • Hands-on programming experience in Apache Spark using Spark SQL and Spark Streaming or Apache Storm
  • Hands-on experience with major components such as Hive, Pig, Spark, and MapReduce
  • Experience working with at least one NoSQL data store: HBase, Cassandra, or MongoDB
  • Experience with Hadoop clustering and auto-scaling.
  • Good knowledge of Apache Kafka and Apache Flume
  • Knowledge of Spark-Kafka integration, including multiple Spark jobs consuming messages from multiple Kafka partitions (see the streaming sketch after this list)
  • Knowledge of Apache Oozie-based workflows
  • Hands-on expertise in cloud services such as AWS or Microsoft Azure
  • Experience with Databricks, AWS Glue, Python, AWS Step Functions, or Azure Data Factory (ADF)
  • Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Hadoop and Cassandra.
  • Experience with BI and data analytics databases
  • Experience in converting business problems/challenges into technical solutions while considering security, performance, scalability, etc.
  • Experience in enterprise-grade solution implementations.
  • Knowledge of big data architecture patterns (Lambda, Kappa)
  • Experience in performance benchmarking of enterprise applications
  • Experience in data security (in transit, at rest) and knowledge of regulatory standards such as APRA, Basel, etc.
  • Develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis.
  • Define and develop client-specific best practices around data management within a Hadoop environment on the Azure cloud
  • Recommend design alternatives for data ingestion, processing and provisioning layers
  • Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, and Sqoop (a minimal batch-load sketch also follows this list)
  • Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies
  • Strong knowledge of UNIX operating system concepts and shell scripting
  • Knowledge of microservices and API development
  • Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
  • Excellent communicator (written and verbal, formal and informal).
  • Ability to multi-task under pressure and work independently with minimal supervision.
  • Strong verbal and written communication skills.
  • Must be a team player and enjoy working in a cooperative and collaborative team environment.
  • Adaptable to new technologies and standards.
  • Participate in all aspects of Big Data solution delivery life cycle including analysis, design, development, testing, production deployment, and support.
  • Minimum 8 years of hands-on experience in one or more of the above areas.
  • Minimum 12 years of industry experience
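
As a rough illustration of the Spark-Kafka integration item above, here is a minimal PySpark Structured Streaming sketch. It is not from the posting: the broker address (localhost:9092), topic name (events), and console sink are placeholder assumptions, and running it requires the spark-sql-kafka connector package on the classpath.

    # Minimal PySpark Structured Streaming consumer for a Kafka topic.
    # Broker address and topic name are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = (SparkSession.builder
             .appName("kafka-stream-sketch")
             .getOrCreate())

    # Spark assigns the topic's Kafka partitions across its tasks, so one
    # job reads all partitions in parallel; several independent jobs can
    # each subscribe to the same topic the same way.
    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "events")
              .option("startingOffsets", "latest")
              .load())

    # Kafka delivers key/value as binary; cast to strings for processing.
    messages = stream.select(
        col("key").cast("string").alias("key"),
        col("value").cast("string").alias("value"),
        col("partition"),
        col("offset"))

    # Console sink for demonstration only; a real job would write to a
    # durable sink (HDFS, a Hive table, etc.) with checkpointing enabled.
    query = (messages.writeStream
             .format("console")
             .outputMode("append")
             .start())
    query.awaitTermination()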

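Similarly, the batch-mode ingestion item might look like the following sketch of the Hive side of a load. Everything specific here is assumed for illustration: the HDFS landing path (for example, a directory populated by a Sqoop import), the staging database, and the orders table name; the session must be able to reach a Hive metastore.

    # Minimal batch-mode load from an HDFS landing zone into a Hive table.
    # Paths, database, and table names are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("batch-ingest-sketch")
             .enableHiveSupport()   # requires a reachable Hive metastore
             .getOrCreate())

    # Read a raw CSV extract landed on HDFS (e.g. by a Sqoop import).
    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("hdfs:///landing/orders/"))

    # Basic cleanup before provisioning: drop empty rows and duplicates.
    clean = raw.dropna(how="all").dropDuplicates()
    clean.createOrReplaceTempView("orders_clean")

    # Provision a managed Hive table for downstream consumers via Spark SQL.
    spark.sql("DROP TABLE IF EXISTS staging.orders")
    spark.sql(
        "CREATE TABLE staging.orders STORED AS PARQUET "
        "AS SELECT * FROM orders_clean")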

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets.

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate.

Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Application deadline: 14-07-2024
