
Senior Data Engineer – AdTech Data Platform
An excellent opening at a public company (NYSE: ZETA) with a strong technology stack and challenging problems. AdTech work in Berlin typically offers a competitive environment and professional growth.
Role Complexity
The high complexity comes from the experience requirement (7+ years) and the need to operate under extreme load (billions of events per day). Deep knowledge of the Kafka and Flink/Spark stack and AdTech experience are required.
Salary Analysis
A Senior-level position in Berlin sits in the market range of 85,000 – 105,000 EUR. At large international companies such as Zeta Global, compensation can be above average thanks to bonus programs and stock grants (RSUs).
Cover Letter
I am writing to express my strong interest in the Senior Data Engineer position at Zeta Global. With over 7 years of experience building production-grade data pipelines and deep expertise in streaming technologies such as Kafka and Flink, I am confident in my ability to contribute significantly to your AdTech Data Platform. My background in processing high-volume event data and creating canonical aggregates aligns closely with your mission to unify identity and intelligence through the Zeta Marketing Platform.
In my previous roles, I have successfully optimized large-scale pipelines, reducing latency and compute costs while ensuring data reliability for ML features and BI reporting. I am particularly drawn to Zeta's focus on 'extreme scale' and the challenge of handling billions of events daily. I am eager to bring my skills in Python, Scala, and AWS cloud-native patterns to your team to help drive operational excellence and deliver high-quality data products that empower your enterprise customers.

Join Zeta Global and build high-load data processing systems for a global leader in AI-powered marketing!
Job Description
WHO WE ARE
Zeta Global (NYSE: ZETA) is the AI-Powered Marketing Cloud that leverages advanced artificial intelligence (AI) and trillions of consumer signals to make it easier for marketers to acquire, grow, and retain customers more efficiently. Through the Zeta Marketing Platform (ZMP), our vision is to make sophisticated marketing simple by unifying identity, intelligence, and omnichannel activation into a single platform – powered by one of the industry’s largest proprietary databases and AI. Our enterprise customers across multiple verticals are empowered to personalize experiences with consumers at an individual level across every channel, delivering better results for marketing programs. Zeta was founded in 2007 by David A. Steinberg and John Sculley and is headquartered in New York City with offices around the world. To learn more, go to www.zetaglobal.com.
The Role
We’re looking for a Senior Data Engineer to design, build, and operate the data processing layer that powers Zeta’s AdTech platform. This is a hands-on role focused on streaming + batch pipelines, producing trusted, reusable aggregates that serve multiple downstream consumers including prediction/ML features, agentic workflows, BI reporting, and measurement.
What You’ll Do
- Build streaming pipelines: Ingest and process high-volume event data using Kafka/Kinesis, handling schema evolution, event-time processing, late data, and deduplication.
- Create canonical aggregates: Produce durable, well-defined rollups (campaign, audience, creative, inventory, pacing/spend, conversions, measurement) with consistent semantics and SLAs.
- Enable prediction & agents: Deliver feature-ready datasets and near-real-time signals to support model training/scoring, retrieval, and agent decision loops.
- Support BI & reporting: Publish governed datasets to analytics systems and warehouses for dashboards, ad-hoc queries, and operational reporting.
- Measurement-grade reliability: Implement reconciliation, backfills, audit trails, and quality checks to ensure correctness for reporting and measurement.
- Optimize performance & cost: Tune pipeline throughput/latency, storage formats, partitioning, and compute spend across streaming and batch workloads.
- Operational excellence & observability: Instrument pipelines with metrics/logs/traces, define SLIs/SLOs, and drive fast detection and root-cause analysis.
- Collaborate cross-functionally: Partner with Backend, ML/DS, Analytics, and Platform/SRE to define contracts, schemas, and robust data products.
Required Qualifications
- 7+ years building and operating production-grade data pipelines.
- Strong experience with streaming systems: Kafka (preferred) or AWS Kinesis, and event-driven architectures.
- Hands-on experience with processing frameworks such as Flink, Spark (Structured Streaming), Beam, or equivalent.
- Proficiency in Python and/or Java/Scala (Go is a plus).
- Strong SQL skills and experience with data modeling for analytics and aggregates.
- Strong experience with AWS and cloud-native data patterns (S3 + compute/orchestration services).
- Experience with data warehouses / OLAP (e.g., Snowflake/Redshift/BigQuery) and/or real-time analytics stores (e.g., ClickHouse/Druid).
- Familiarity with SQL + NoSQL ecosystems (e.g., Postgres/MySQL + DynamoDB/Cassandra/Redis) for serving/lookup patterns.
- Experience with orchestration and CI/CD for data pipelines (Airflow/Argo/Step Functions or equivalents).
- Clear communicator and collaborator; able to explain data systems and trade-offs to mixed audiences.
Preferred Qualifications
- Programmatic advertising domain knowledge: event pipelines for impressions/clicks/conversions, attribution/measurement, pacing/budget signals.
- Experience building feature stores or ensuring online/offline parity for ML features.
- Lakehouse experience (Delta/Iceberg/Hudi), incremental processing, and backfill strategies at scale.
- Strong data governance practices: lineage, access controls, PII handling, privacy-by-design.
- Experience operating at “extreme scale” (billions of events/day) and optimizing cost/performance.
BENEFITS & PERKS
- Excellent medical, dental, and vision coverage
PEOPLE & CULTURE AT ZETA
Zeta considers applicants for employment without regard to, and does not discriminate on the basis of an individual’s sex, race, color, religion, age, disability, status as a veteran, or national or ethnic origin; nor does Zeta discriminate on the basis of sexual orientation, gender identity or expression.
We’re committed to building a workplace culture of trust and belonging, so everyone feels invited to bring their whole selves to work. We provide a forum for employees to celebrate, support and advocate for one another. Learn more about our commitment to diversity, equity and inclusion here: https://zetaglobal.com/blog/a-look-into-zetas-ergs/
ZETA IN THE NEWS!
https://zetaglobal.com/press/?cat=press-releases
#LI-NP1

Skills
- AWS
- Python
- SQL
- CI/CD
- PostgreSQL
- Snowflake
- Airflow
- Apache Spark
- Kafka
- Java
- Apache Flink
- DynamoDB
- Amazon S3
- Scala
- ClickHouse
Possible Interview Questions
The role involves working with billions of events, so it is important to understand how the candidate ensures fault tolerance and data accuracy.
How do you approach handling late data and deduplication in streaming pipelines built on Kafka and Flink?
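To anchor an answer, here is a minimal sketch of event-time watermarking and deduplication, written with Spark Structured Streaming (also named in the requirements) rather than Flink; the broker address, topic name, schema, and the 15-minute lateness bound are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("late-data-dedup-sketch").getOrCreate()

# Assumed event schema; in practice this would come from a schema registry.
schema = StructType([
    StructField("event_id", StringType()),       # unique id used for deduplication
    StructField("campaign_id", StringType()),
    StructField("event_time", TimestampType()),  # event time, not ingestion time
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "ad_events")                  # assumed topic name
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    # Accept events up to 15 minutes late; anything older is dropped deterministically.
    .withWatermark("event_time", "15 minutes")
    # Including the event-time column bounds the dedup state by the watermark.
    .dropDuplicates(["event_id", "event_time"])
)

# Windowed counts on event time; late-but-in-bound events still update their window.
counts = events.groupBy(F.window("event_time", "5 minutes"), "campaign_id").count()

query = (
    counts.writeStream.outputMode("update")
    .format("console")                                      # placeholder sink for the sketch
    .option("checkpointLocation", "/tmp/chk/dedup-sketch")  # durable state for restarts
    .start()
)
query.awaitTermination()
```

In Flink the same ideas map to watermark strategies and keyed state with a TTL; the key point in either engine is that lateness tolerance and deduplication state are bounded explicitly.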
The role requires building aggregates for multiple consumers, so this probes the ability to design universal schemas.
Describe your approach to designing canonical aggregates that must serve both ML features and BI reporting at the same time.
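One common pattern is to publish a single rollup at a fixed grain and let both BI and ML read from it, so metric definitions cannot drift between consumers. A minimal sketch, with made-up table, column, and path names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("canonical-aggregate-sketch").getOrCreate()

events = spark.read.parquet("s3://bucket/ad_events/dt=2024-01-01/")  # assumed input

canonical = (
    events
    # Fixed grain: one row per campaign x creative x hour; consumers never re-aggregate raw events.
    .groupBy(
        F.window("event_time", "1 hour").alias("hour"),
        "campaign_id",
        "creative_id",
    )
    .agg(
        F.count(F.when(F.col("event_type") == "impression", 1)).alias("impressions"),
        F.count(F.when(F.col("event_type") == "click", 1)).alias("clicks"),
        F.sum("spend_micros").alias("spend_micros"),  # money kept in integer micros
    )
)

# The same dataset serves two consumers:
canonical.write.mode("overwrite").parquet("s3://bucket/agg/campaign_hourly/")  # BI / warehouse load

features = canonical.withColumn(                       # ML feature derived from the same rollup,
    "ctr", F.col("clicks") / F.col("impressions")      # so online/offline definitions stay in sync
)
```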
Cost optimization is listed as one of the key responsibilities.
What strategies have you used to optimize storage and compute costs on AWS when working with terabyte-scale data volumes?
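As one storage-side illustration (not a complete answer: compute choices such as spot capacity, autoscaling, and job right-sizing matter just as much), here is a sketch of partitioning, compression, and file-size levers in Spark; paths and column names are assumed:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("storage-cost-sketch").getOrCreate()

events = spark.read.parquet("s3://bucket/raw/ad_events/")  # assumed input

(
    events
    .repartition("dt")                      # fewer, larger files per partition
    .write.mode("overwrite")
    .partitionBy("dt", "event_type")        # downstream queries prune by date/type
    .option("compression", "zstd")          # smaller objects -> lower S3 and scan cost
    .parquet("s3://bucket/curated/ad_events/")
)

# Queries that filter on partition columns read only the matching S3 prefixes:
spark.read.parquet("s3://bucket/curated/ad_events/") \
    .where("dt = '2024-01-01' AND event_type = 'click'") \
    .count()
```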
AdTech demands high accuracy for financial metrics (pacing/spend).
How do you organize reconciliation and data audit processes to guarantee the accuracy of spend and conversion reports?
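A minimal sketch of one such check: recompute the totals from raw events in batch and compare them against the streaming aggregates within a tolerance. Paths, column names, and the 0.1% threshold are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("reconciliation-sketch").getOrCreate()

raw = spark.read.parquet("s3://bucket/raw/ad_events/dt=2024-01-01/")       # source of truth
agg = spark.read.parquet("s3://bucket/agg/campaign_hourly/dt=2024-01-01/") # streaming output

batch_totals = raw.groupBy("campaign_id").agg(F.sum("spend_micros").alias("batch_spend"))
stream_totals = agg.groupBy("campaign_id").agg(F.sum("spend_micros").alias("stream_spend"))

diffs = (
    batch_totals.join(stream_totals, "campaign_id", "full_outer")
    .fillna(0, subset=["batch_spend", "stream_spend"])
    .withColumn("abs_diff", F.abs(F.col("batch_spend") - F.col("stream_spend")))
    # Flag campaigns whose streaming and batch totals diverge by more than 0.1%.
    .where(F.col("abs_diff") > 0.001 * F.col("batch_spend"))
)

if diffs.count() > 0:
    diffs.show(truncate=False)  # in a real pipeline this would alert on-call / trigger a backfill
    raise RuntimeError("Reconciliation failed: streaming and batch spend diverge")
```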
This probes experience with modern storage formats.
What is your experience with the Iceberg or Delta Lake table formats? In which cases would you prefer them over plain Parquet on S3?
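As a small illustration of the difference, a table format allows an atomic upsert for late corrections, which a plain Parquet directory cannot do transactionally. A sketch assuming a Spark session configured for Delta Lake (Iceberg's MERGE INTO is analogous); the table, view, and path names are made up:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("table-format-sketch")
    # Standard Delta Lake session configuration.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Late-arriving conversion corrections, registered as a temp view for the MERGE.
updates = spark.read.parquet("s3://bucket/staging/conversion_corrections/dt=2024-01-01/")
updates.createOrReplaceTempView("updates_batch")

# Atomic upsert into an existing Delta table (assumed to exist as analytics.conversions).
spark.sql("""
    MERGE INTO analytics.conversions AS t
    USING updates_batch AS s
    ON t.conversion_id = s.conversion_id
    WHEN MATCHED THEN UPDATE SET t.value = s.value, t.updated_at = s.updated_at
    WHEN NOT MATCHED THEN INSERT *
""")
```

With plain Parquet the equivalent change means rewriting whole partitions and coordinating readers manually; Delta and Iceberg also add snapshot isolation, time travel, and schema evolution.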