- Country
- United Kingdom

Senior Data Engineer – AdTech Data Platform
An excellent opening at a public company (NYSE: ZETA) with a strong technology stack and challenging engineering problems. Working with AI and Big Data at this level is a significant career boost for a senior engineer.
Role difficulty
The difficulty is high given the experience requirement (7+ years) and the need to operate at extreme scale (billions of events per day). The role demands deep command of the Kafka and Flink/Spark stack plus AdTech domain experience.
Salary analysis
The listing does not state a salary, but the market range for a Senior Data Engineer in London is £85,000 – £120,000 per year. At large technology companies of Zeta Global's caliber, total compensation (including bonuses and equity) can exceed the market average.
Cover letter
I am writing to express my strong interest in the Senior Data Engineer position at Zeta Global. With over 7 years of experience building production-grade data pipelines and deep expertise in streaming technologies like Kafka and Flink, I am confident in my ability to contribute significantly to your AdTech Data Platform. My background in designing canonical aggregates and optimizing high-volume event processing aligns closely with Zeta's mission to unify identity and intelligence at scale.
Throughout my career, I have focused on bridging the gap between raw data and actionable insights, specifically for ML features and BI reporting. I have extensive experience with AWS ecosystems, Snowflake, and orchestrating complex workflows using Airflow. I am particularly excited about the opportunity to work on "extreme scale" systems processing billions of events daily and to help Zeta continue its innovation in the AI-powered marketing cloud space.

Join Zeta Global and build high-load data processing systems for a global leader in AI-powered marketing!
Job description
WHO WE ARE
Zeta Global (NYSE: ZETA) is the AI-Powered Marketing Cloud that leverages advanced artificial intelligence (AI) and trillions of consumer signals to make it easier for marketers to acquire, grow, and retain customers more efficiently. Through the Zeta Marketing Platform (ZMP), our vision is to make sophisticated marketing simple by unifying identity, intelligence, and omnichannel activation into a single platform – powered by one of the industry’s largest proprietary databases and AI. Our enterprise customers across multiple verticals are empowered to personalize experiences with consumers at an individual level across every channel, delivering better results for marketing programs. Zeta was founded in 2007 by David A. Steinberg and John Sculley and is headquartered in New York City with offices around the world. To learn more, go to www.zetaglobal.com.
The Role
We’re looking for a Senior Data Engineer to design, build, and operate the data processing layer that powers Zeta’s AdTech platform. This is a hands-on role focused on streaming + batch pipelines, producing trusted, reusable aggregates that serve multiple downstream consumers including prediction/ML features, agentic workflows, BI reporting, and measurement.
What You’ll Do
- Build streaming pipelines: Ingest and process high-volume event data using Kafka/Kinesis, handling schema evolution, event-time processing, late data, and deduplication.
- Create canonical aggregates: Produce durable, well-defined rollups (campaign, audience, creative, inventory, pacing/spend, conversions, measurement) with consistent semantics and SLAs.
- Enable prediction & agents: Deliver feature-ready datasets and near-real-time signals to support model training/scoring, retrieval, and agent decision loops.
- Support BI & reporting: Publish governed datasets to analytics systems and warehouses for dashboards, ad-hoc queries, and operational reporting.
- Measurement-grade reliability: Implement reconciliation, backfills, audit trails, and quality checks to ensure correctness for reporting and measurement.
- Optimize performance & cost: Tune pipeline throughput/latency, storage formats, partitioning, and compute spend across streaming and batch workloads.
- Operational excellence & observability: Instrument pipelines with metrics/logs/traces, define SLIs/SLOs, and drive fast detection and root-cause analysis.
- Collaborate cross-functionally: Partner with Backend, ML/DS, Analytics, and Platform/SRE to define contracts, schemas, and robust data products.
Required Qualifications
- 7+ years building and operating production-grade data pipelines.
- Strong experience with streaming systems: Kafka (preferred) or AWS Kinesis, and event-driven architectures.
- Hands-on experience with processing frameworks such as Flink, Spark (Structured Streaming), Beam, or equivalent.
- Proficiency in Python and/or Java/Scala (Go is a plus).
- Strong SQL skills and experience with data modeling for analytics and aggregates.
- Strong experience with AWS and cloud-native data patterns (S3 + compute/orchestration services).
- Experience with data warehouses / OLAP (e.g., Snowflake/Redshift/BigQuery) and/or real-time analytics stores (e.g., ClickHouse/Druid).
- Familiarity with SQL + NoSQL ecosystems (e.g., Postgres/MySQL + DynamoDB/Cassandra/Redis) for serving/lookup patterns.
- Experience with orchestration and CI/CD for data pipelines (Airflow/Argo/Step Functions or equivalents).
- Clear communicator and collaborator; able to explain data systems and trade-offs to mixed audiences.
Preferred Qualifications
- Programmatic advertising domain knowledge: event pipelines for impressions/clicks/conversions, attribution/measurement, pacing/budget signals.
- Experience building feature stores or ensuring online/offline parity for ML features.
- Lakehouse experience (Delta/Iceberg/Hudi), incremental processing, and backfill strategies at scale.
- Strong data governance practices: lineage, access controls, PII handling, privacy-by-design.
- Experience operating at “extreme scale” (billions of events/day) and optimizing cost/performance.
BENEFITS & PERKS
- Excellent medical, dental, and vision coverage
PEOPLE & CULTURE AT ZETA
Zeta considers applicants for employment without regard to, and does not discriminate on the basis of an individual’s sex, race, color, religion, age, disability, status as a veteran, or national or ethnic origin; nor does Zeta discriminate on the basis of sexual orientation, gender identity or expression.
We’re committed to building a workplace culture of trust and belonging, so everyone feels invited to bring their whole selves to work. We provide a forum for employees to celebrate, support and advocate for one another. Learn more about our commitment to diversity, equity and inclusion here: https://zetaglobal.com/blog/a-look-into-zetas-ergs/
ZETA IN THE NEWS!
https://zetaglobal.com/press/?cat=press-releases
#LI-NP1

Skills
- AWS
- Python
- SQL
- CI/CD
- PostgreSQL
- Redis
- Snowflake
- Delta Lake
- Apache Spark
- Kafka
- Java
- Apache Iceberg
- Apache Flink
- MySQL
- DynamoDB
- Apache Airflow
- Google BigQuery
- Amazon S3
- Amazon Redshift
- Scala
- Cassandra
- ClickHouse
- Apache Druid
- Apache Beam
- Argo
- AWS Kinesis
Possible interview questions
The role involves processing billions of events, so it matters how the candidate ensures fault tolerance and data correctness.
How do you handle late data in streaming pipelines built on Flink or Spark Structured Streaming?
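To make the late-data question concrete, here is a minimal plain-Python sketch of the underlying event-time semantics (a simulation, not the Flink/Spark API): a watermark trails the maximum observed event time by an allowed lateness, and events that arrive behind the watermark are routed to a side output instead of the window aggregates. The window size, lateness, and field names are illustrative assumptions.

```python
from collections import defaultdict

WINDOW = 60            # tumbling window size, seconds (assumed)
ALLOWED_LATENESS = 30  # how far the watermark trails max event time (assumed)

def process(events):
    """Assign events to event-time windows; divert late ones.

    Each event is (event_time_seconds, key, value).
    Returns (window_aggregates, late_events)."""
    aggregates = defaultdict(int)   # (window_start, key) -> sum(value)
    late = []
    max_event_time = float("-inf")
    for ts, key, value in events:
        max_event_time = max(max_event_time, ts)
        watermark = max_event_time - ALLOWED_LATENESS
        if ts < watermark:
            # behind the watermark: side output, like Flink's late-data tag
            late.append((ts, key, value))
            continue
        window_start = ts - ts % WINDOW
        aggregates[(window_start, key)] += value
    return dict(aggregates), late

aggs, late = process([
    (10, "campaign_a", 1),
    (70, "campaign_a", 1),
    (100, "campaign_a", 1),
    (5, "campaign_a", 1),   # 95s behind max event time -> late
])
```

In real Flink this maps to `allowedLateness` plus a late-data side output; in Spark Structured Streaming, to `withWatermark` (which drops rather than diverts late rows).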
The role requires building canonical aggregates for multiple consumers.
Describe your approach to designing a data schema that must serve both ML features and BI reporting at the same time.
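A sketch of one way to answer: keep a single canonical rollup that BI reads as-is, and derive ML features from that same record so online and offline definitions of CTR/CPC cannot drift. All field names and the hourly grain here are hypothetical.

```python
# One canonical hourly rollup row (hypothetical schema);
# BI dashboards read it directly.
row = {"campaign_id": "c1", "hour": "2024-01-01T00",
       "impressions": 10000, "clicks": 120, "spend": 45.0}

def to_features(r):
    """Derive model features from the canonical aggregate, so every
    consumer shares one definition of CTR and CPC."""
    return {
        "ctr": r["clicks"] / max(r["impressions"], 1),
        "cpc": r["spend"] / max(r["clicks"], 1),
    }

feats = to_features(row)
```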
Cost optimization is listed as one of the key responsibilities.
What strategies do you use to optimize storage and compute costs in AWS when working with petabyte-scale data?
The posting mentions Lakehouse architectures.
What are the advantages of Iceberg or Delta Lake over plain S3/Parquet for your current workloads?
AdTech demands high accuracy in financial metrics (pacing/spend).
How do you implement reconciliation between streaming data and warehouse data to guarantee financial accuracy?
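A minimal sketch of such a reconciliation check, in plain Python (the key names and the 0.1% relative tolerance are assumptions): compare per-campaign spend from the streaming rollup against the warehouse batch recount, and flag any key whose drift exceeds the tolerance for backfill or investigation.

```python
def reconcile(stream_totals, warehouse_totals, rel_tol=0.001):
    """Compare per-campaign spend from the streaming rollup vs. the
    warehouse batch recount; return keys whose relative drift
    exceeds rel_tol."""
    mismatches = {}
    for key in stream_totals.keys() | warehouse_totals.keys():
        s = stream_totals.get(key, 0.0)
        w = warehouse_totals.get(key, 0.0)
        baseline = max(abs(s), abs(w), 1e-9)  # avoid division by zero
        if abs(s - w) / baseline > rel_tol:
            mismatches[key] = (s, w)
    return mismatches

bad = reconcile(
    {"camp_1": 1000.00, "camp_2": 250.00},
    {"camp_1": 1002.00, "camp_2": 250.01},
)
```

In practice the two inputs would come from an aggregation query over the real-time store and the warehouse, run on a schedule, with mismatches feeding an alert and a targeted backfill.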