Job description
Overview
Are you passionate about transforming raw data into powerful insights that drive innovation and impact? Join a forward-thinking consultancy that combines strategy, design, and engineering to deliver cutting-edge digital solutions at scale.
This is a unique opportunity to work in a collaborative, cross-functional team environment where curiosity, creativity, and technical expertise are celebrated. You’ll help clients tackle complex challenges and adapt to a fast-changing world, using cloud technologies and modern data practices to make a lasting difference.
Unfortunately, this role cannot offer sponsorship, as candidates must be SC eligible.
What You’ll Be Doing
- Design and deploy scalable data pipelines from ingestion to consumption using tools like Python, Scala, Spark, Java, and SQL.
- Integrate data engineering components into wider production systems in collaboration with software engineering teams.
- Work with large volumes of structured and unstructured data from diverse sources, applying robust data wrangling, cleaning, and transformation techniques.
- Develop solutions in AWS using services like EMR, Glue, Redshift, Kinesis, Lambda, and DynamoDB (or equivalent open-source tools).
- Apply your knowledge of batch and stream processing, and where applicable, contribute to data science and machine learning initiatives.
- Operate in Agile environments and actively participate in Scrum ceremonies.
- Use your understanding of best practices in cloud-native data architecture, including serverless and container-based approaches.
What You’ll Bring
- Proven experience designing and building data pipelines and data architectures in cloud environments, particularly AWS.
- Strong coding ability in languages such as Python, Java, or Scala.
- Hands-on experience with data ingestion, transformation, and storage technologies.
- Familiarity with data visualization, reporting, and analytical tools.
- Comfortable working in Agile teams and contributing to all stages of development.
- Willingness to travel to client sites when necessary.
Desirable Skills
- Experience with AWS-native tools for data processing (EMR, Glue, Redshift, Kinesis, etc.).
- Familiarity with open-source equivalents is also welcome.
- Knowledge of machine learning, data mining, or natural language processing is a plus.
- Understanding of platform-as-a-service (PaaS) and serverless architectures.