- Country
- United Kingdom
Data Consultant
A strong vacancy for experienced data engineers who want to work with a modern stack (Databricks, Spark, cloud-native). The company offers remote work in the United Kingdom and a solid engineering culture, though the absence of a stated salary is a minor drawback.
Role complexity
The role requires deep knowledge of modern data architecture (lakehouse, medallion) and hands-on experience with Databricks and Spark. The high bar for engineering culture (CI/CD, SOLID, TDD) makes the position challenging for candidates without a software-engineering background.
Salary analysis
A Data Consultant role in the United Kingdom typically commands competitive pay. Market estimates for middle/senior specialists in cloud consulting are in line with roles of this level, especially for candidates with Databricks expertise.
Cover letter
I am writing to express my strong interest in the Data Consultant position at Ensono. With a solid background in building scalable data pipelines and implementing medallion architectures, I am eager to contribute to your Data & AI competency. My expertise in Python, SQL, and Apache Spark, combined with hands-on experience in Databricks and Azure/AWS environments, aligns perfectly with the technical requirements of this role.
Throughout my career, I have focused on delivering production-ready data solutions that emphasize data quality and governance. I am particularly drawn to Ensono's commitment to disrupting the status quo and helping clients navigate complex cloud transformations. I am a proactive problem-solver who values clean, testable code and collaborative knowledge sharing, and I am confident that my technical skills and ownership mindset will make me a valuable asset to your cross-functional team.
Job description
At Ensono, our purpose is to be a relentless ally, disrupting the status quo and enabling our clients to Do Great Things. As a trusted technology adviser and managed services provider, we help clients navigate continuous change and embrace innovation.
We deliver world-class hybrid cloud, infrastructure, mainframe transformation, data, IDAM, and cloud-native solutions, simplifying complex business challenges and creating new pathways to success. Headquartered in the USA and backed by private equity, Ensono has a strong and growing presence in the UK and Europe.
What is the role about?
This is a hands-on technical role within our Data & AI competency. You will join a cross-functional team of highly skilled, like-minded professionals, helping build and deliver modern data solutions for our clients – enabling them to realise the business value of their cloud investments.
As a Data Engineering Consultant, you will design, build, and maintain data pipelines and lakehouse architectures that power analytics, AI/ML, and operational decision-making. You will work across ingestion, transformation, storage, and serving layers – delivering solutions that are scalable, reliable, and cost-efficient.
What You’ll Deliver
- Cleaned, validated datasets and production-ready data pipeline code
- Contributions to requirements documents, data inventories, and technical specifications
- Data quality checks and monitoring dashboards for pipeline health
- Documentation supporting data governance, lineage, and cataloguing
What You’ll Be Doing:
- Building and maintaining data ingestion pipelines (batch, streaming, and micro-batch) using modern cloud-native tools and frameworks
- Developing and optimising transformations within lakehouse and medallion architecture patterns (bronze, silver, gold layers)
- Working with cloud data platforms such as Databricks, Apache Spark, and cloud-native services on AWS and/or Azure/Microsoft
- Implementing data quality checks, validation rules, and automated testing to ensure pipeline reliability
- Supporting data governance and compliance activities, including cataloguing, lineage tracking, and access control
- Collaborating with analysts, data scientists, and business stakeholders to understand requirements and deliver fit-for-purpose data products
- Contributing to technical documentation, architecture decision records, and runbooks
- Participating in code reviews, pair programming, and knowledge-sharing sessions to raise team capability
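The data-quality checks mentioned above often follow a quarantine pattern: valid rows flow on, failing rows are set aside with their violations attached. A minimal pure-Python sketch (all field names hypothetical; a real pipeline would express these rules as Spark/Databricks expectations):

```python
# Minimal sketch of row-level data-quality validation (quarantine pattern).
# Field names (customer_id, amount) are hypothetical examples.

def validate_row(row: dict) -> list:
    """Return a list of human-readable violations for one record."""
    errors = []
    if not row.get("customer_id"):
        errors.append("customer_id is missing")
    amount = row.get("amount")
    if amount is None or amount < 0:
        errors.append("amount must be a non-negative number")
    return errors

def validate_batch(rows: list) -> tuple:
    """Split a batch into (valid, rejected); rejected rows keep their errors."""
    valid, rejected = [], []
    for row in rows:
        errors = validate_row(row)
        if errors:
            rejected.append({**row, "_errors": errors})
        else:
            valid.append(row)
    return valid, rejected

batch = [
    {"customer_id": "c-1", "amount": 10.0},
    {"customer_id": "", "amount": -5.0},
]
good, bad = validate_batch(batch)
print(len(good), len(bad))  # 1 1
```

Keeping rejected rows (rather than silently dropping them) is what makes the pipeline's health observable on a monitoring dashboard.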
What You’ll Bring:
- Strong development skills in Python and SQL; experience writing clean, testable, well-documented code
- Hands-on experience building data pipelines and ETL/ELT workflows using tools such as Apache Spark, Databricks, or equivalent
- Understanding of lakehouse and data warehouse architecture patterns, including star schemas, medallion architecture, and data modelling best practices
- Experience working with cloud platforms (AWS, Azure, or GCP), including cloud storage, compute, and managed data services
- Familiarity with common big data file formats (e.g. Parquet, Delta, Avro) and concepts such as partitioning, compression, and columnar storage
- Good understanding of software engineering best practices: version control (Git), CI/CD, code review, SOLID principles, and DRY
- Clear communication skills – able to explain technical concepts to non-technical audiences and collaborate effectively across disciplines
- A proactive, self-starting attitude with a genuine interest in understanding the business context behind the data
Desirable Skills & Experience
- Experience with Databricks (Delta Live Tables, Unity Catalog, Workflows) or similar lakehouse platforms
- Familiarity with Infrastructure as Code tools (e.g. Terraform, CloudFormation) and container technologies (Docker, Kubernetes)
- Exposure to data governance, cataloguing, or data quality frameworks
- Experience with TDD/BDD testing practices for data pipelines
- Knowledge of streaming technologies (e.g. Kafka, Kinesis, Spark Structured Streaming)
- Additional programming experience in Scala or Java
- Relevant cloud or data certifications (e.g. Databricks Data Engineer, AWS Data Analytics, Azure Data Engineer)
Personal & Leadership Qualities
At Ensono Digital, we place as much value on how you work as on what you deliver. The following qualities reflect our expectations for Consultants and are aligned with our Personal and Leadership Frameworks:
- Coachable and curious: You actively seek feedback, learn from experienced colleagues, and reflect on how to improve.
- Team-oriented: You contribute positively to team culture, support new joiners, and collaborate openly – building trust through reliability and approachability.
- Ownership mindset: You take responsibility for your work, communicate blockers early, and deliver to the quality and timelines expected.
- Adaptable learner: You embrace new tools, methods, and feedback. You take initiative in your own development and are energised by the pace of change in cloud and data.
- Business-aware: You seek to understand the “why” behind your work – connecting your technical delivery to client goals and business value.
- Clear communicator: You express ideas clearly in writing and conversation, listen actively, and ask good questions to build shared understanding.
Skills
- Git
- AWS
- Azure
- Python
- Terraform
- SQL
- Kubernetes
- CI/CD
- Docker
- Delta Lake
- Apache Spark
- Kafka
- Databricks
- ETL
- Data Modeling
- ELT
Possible interview questions
Tests understanding of the architectural patterns named in the job description.
Tell us about your experience implementing the medallion architecture (Bronze/Silver/Gold). What core problems does each layer solve?
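An answer could be illustrated with a toy, pure-Python sketch of the three layers (in practice each layer would be a Delta table processed by Spark; all names here are hypothetical):

```python
# Toy illustration of medallion layering (bronze -> silver -> gold).
# Real implementations use Delta tables per layer; this only shows the intent.

raw_events = [  # bronze: raw, as-ingested, including duplicates and bad rows
    {"order_id": "o1", "amount": "10.50", "country": "UK"},
    {"order_id": "o1", "amount": "10.50", "country": "UK"},  # duplicate
    {"order_id": "o2", "amount": "bad", "country": "UK"},    # unparseable
    {"order_id": "o3", "amount": "4.00", "country": "DE"},
]

def to_silver(bronze: list) -> list:
    """Silver: deduplicated, typed, validated records."""
    seen, silver = set(), []
    for row in bronze:
        if row["order_id"] in seen:
            continue
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine, not drop silently
        seen.add(row["order_id"])
        silver.append({"order_id": row["order_id"], "amount": amount,
                       "country": row["country"]})
    return silver

def to_gold(silver: list) -> dict:
    """Gold: business-level aggregate, ready for analytics."""
    revenue = {}
    for row in silver:
        revenue[row["country"]] = revenue.get(row["country"], 0.0) + row["amount"]
    return revenue

print(to_gold(to_silver(raw_events)))  # {'UK': 10.5, 'DE': 4.0}
```

The key point an interviewer usually looks for: bronze preserves everything for replayability, silver enforces the contract, gold serves the business question.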
Databricks is a key tool for this role.
How do you use Unity Catalog for data governance and security in Databricks?
The vacancy emphasises code quality and engineering practices.
How do you organise testing (TDD/BDD) for ETL pipelines to guarantee data reliability?
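A candidate might illustrate the test-first approach with a unit test that pins the contract of one small transformation (normalise_email is a hypothetical cleaning step; real suites would run under pytest and use DataFrame comparison helpers for Spark):

```python
# Sketch of test-first development for a pipeline transformation step.
# The test is written against the desired contract; the function satisfies it.

def normalise_email(raw):
    """Lower-case and trim an email; return None for clearly invalid input."""
    if raw is None:
        return None
    cleaned = raw.strip().lower()
    return cleaned if "@" in cleaned else None

def test_normalise_email():
    assert normalise_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalise_email("not-an-email") is None
    assert normalise_email(None) is None

test_normalise_email()
print("all assertions passed")
```

Writing the test before the transformation keeps edge cases (nulls, malformed values) explicit instead of discovered in production.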
Tests performance-optimisation skills for big-data workloads.
Which partitioning and optimisation strategies (for example, Z-Ordering) do you apply when working with the Delta Lake format?
Assesses the candidate's ability to connect technical work to business goals.
Give an example of a technical decision of yours that directly contributed to a client's business goals or to cost optimisation.