DevSavant
Senior · Remote · Full-time

Senior Data Engineer (AI Enablement)

AI assessment

An attractive position for experienced engineers who want to work at the intersection of Data Engineering and AI. The remote format in an international environment and a modern technology stack (GCP, DBT, Airflow) make this vacancy very promising.


This vacancy is from Quick Offer Global, a list of international companies.

Vacancy difficulty

AI assessment

The role requires deep data engineering expertise, including ETL, Airflow, and DBT, as well as specific experience with geospatial data and AI tooling. A high level of responsibility for the architecture in a fast-moving startup environment adds to the difficulty.

Salary analysis

Median: $6,500
Market range: $5,000 – $8,500

AI assessment

Salaries for Senior Data Engineers in the LATAM (Latin America) region working for US companies are usually above the local market but below Silicon Valley levels. The stated range reflects market standards for experienced remote specialists.

Cover letter

I am writing to express my strong interest in the Senior Data Engineer (AI Enablement) position at DevSavant. With over 5 years of experience in building scalable data architectures and a deep proficiency in Python and SQL, I am excited about the opportunity to contribute to your AI-enabled data workflows. My background in orchestrating complex ETL pipelines using Airflow and DBT, combined with my experience in handling diverse data formats, aligns perfectly with your technical requirements.

I am particularly drawn to DevSavant's mission of supporting growth-stage companies and your focus on integrating AI tools like Copilot and MCP servers into the development lifecycle. Having worked extensively with PostgreSQL and GCP/BigQuery, I am confident in my ability to optimize your data systems for performance and reliability. I thrive in fast-paced, remote-first environments and look forward to bringing my 'builder mindset' to your cross-functional teams to help scale high-performing data solutions.


Join DevSavant to build the future of data and AI in a dynamic startup environment!

Job description

About DevSavant

DevSavant is an operating partner for startups and growth-stage companies, helping them turn ambition into execution.

We support founders and leadership teams with product engineering and global staffing, from early prototypes and MVPs to scaling high-performing teams. Our vetted talent across LATAM and Asia embeds directly into client teams, operating as true extensions rather than external vendors.

With over 8 years working in venture-backed ecosystems, DevSavant is trusted to accelerate delivery, scale teams efficiently, and support companies as they reach their next milestone.

About the Role

We are seeking a Data Engineer to join our growing team of data experts. This is an individual contributor role embedded within cross-functional teams, focused on building and maintaining the data infrastructure that powers our analytics and business intelligence platforms.

The role is heavily data-oriented, with a strong emphasis on designing and developing scalable, reliable data pipelines and systems using Python and SQL. You will be responsible for ensuring that business-critical data is accurate, accessible, and optimized for downstream use by software developers, analysts, data scientists, and other stakeholders.

The ideal candidate is a hands-on data engineer who thrives in fast-paced environments, takes ownership, and is comfortable working with evolving requirements. You enjoy building robust data systems from the ground up, integrating new datasets, and continuously optimizing data infrastructure for performance and scalability.

Key Responsibilities

AI & Automation

  • Contribute to AI-enabled data workflows, including integration with agents and MCP servers
  • Leverage AI tools (e.g., Copilot, Openspec) to automate aspects of the software development lifecycle
  • Instrument data systems and pipelines with automation, monitoring, and intelligent workflows

Data Engineering & Pipeline Development

  • Build and maintain scalable data pipelines using Python, SQL, and modern ETL frameworks
  • Design and implement robust data architectures that support business and analytical needs
  • Assemble large, complex datasets that meet functional and non-functional requirements
  • Optimize data systems for performance, reliability, and scalability
  • Write clean, maintainable, and well-tested code following best practices
  • Continuously improve data engineering standards and processes
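The pipeline responsibilities above can be sketched end to end. A minimal, standard-library-only illustration (sqlite3 stands in for PostgreSQL, and the table and field names are invented for the sketch):

```python
import sqlite3

def extract(rows):
    """Extract step: here the source is simply an in-memory list of dicts."""
    return rows

def transform(rows):
    """Transform step: normalize fields and drop incomplete records."""
    return [
        {"id": r["id"], "email": r["email"].strip().lower()}
        for r in rows
        if r.get("id") is not None and r.get("email")
    ]

def load(conn, rows):
    """Load step: idempotent upsert, so reruns do not duplicate data."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
    )
    conn.executemany(
        "INSERT INTO users (id, email) VALUES (:id, :email) "
        "ON CONFLICT(id) DO UPDATE SET email = excluded.email",
        rows,
    )
    conn.commit()

def run_pipeline(conn, source_rows):
    load(conn, transform(extract(source_rows)))

conn = sqlite3.connect(":memory:")
run_pipeline(conn, [{"id": 1, "email": " Ana@Example.com "}, {"id": 2, "email": ""}])
run_pipeline(conn, [{"id": 1, "email": "ana@example.com"}])  # rerun: no duplicates
print(conn.execute("SELECT id, email FROM users").fetchall())
# [(1, 'ana@example.com')]
```

The upsert makes the load step idempotent: a retried or rerun batch never duplicates rows, which is a common requirement for reliable pipelines.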

Data Infrastructure & Integration

  • Develop infrastructure for efficient extraction, transformation, and loading (ETL) of data from diverse sources
  • Integrate structured and unstructured data formats (e.g., CSV, Excel, Shapefiles) into centralized systems
  • Maintain and optimize databases containing customer usage, financial, and operational data
  • Integrate and optimize data access across platforms, including analytical tools such as QGIS
  • Maintain and improve search indices using both COTS and custom-built solutions

Analytics Enablement & Stakeholder Support

  • Collaborate with analysts, data scientists, and business stakeholders to support data needs
  • Build and maintain data tools that empower analysts to explore and optimize datasets
  • Assist stakeholders with data-related technical challenges and infrastructure needs
  • Support analytical workflows, including SQL query development and dataset preparation
  • Partner with data and analytics teams to enhance overall system capabilities

Monitoring, Reliability & Operations

  • Monitor data systems and pipelines to ensure high availability and reliability
  • Perform root cause analysis on data and system issues and implement corrective actions
  • Improve observability and alerting for data infrastructure
  • Maintain operational excellence across data platforms
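A data-quality gate of the kind these monitoring duties imply can be sketched in a few lines; the thresholds and field names here are illustrative assumptions:

```python
def check_batch(rows, required_fields, max_null_rate=0.1, min_rows=1):
    """Return a list of human-readable failures for a batch of records.

    Checks the row count and the missing-value rate per required field,
    mirroring the kind of assertions a pipeline would feed into alerting.
    """
    failures = []
    if len(rows) < min_rows:
        failures.append(f"row count {len(rows)} below minimum {min_rows}")
        return failures
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        rate = nulls / len(rows)
        if rate > max_null_rate:
            failures.append(f"{field}: null rate {rate:.0%} exceeds {max_null_rate:.0%}")
    return failures

batch = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}, {"id": 3, "amount": 7.5}]
print(check_batch(batch, ["id", "amount"]))
# ['amount: null rate 33% exceeds 10%']
```

In production the returned failures would be pushed to the alerting system rather than printed, but the check itself stays this simple.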

Collaboration & Execution

  • Work closely with cross-functional teams in a distributed, remote-first environment
  • Translate evolving business requirements into scalable data solutions
  • Take ownership of data systems from design through production
  • Operate effectively in a fast-paced environment with changing priorities

Core Technical Stack

Data & Backend

  • Python for data processing and pipeline development
  • SQL for querying and data transformation
  • PostgreSQL (preferred) and other relational databases
  • ETL orchestration tools such as Airflow or Cloud Composer
  • Data transformation tools such as DBT
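Airflow and Cloud Composer model a pipeline as a DAG of dependent tasks. As a toy illustration of that idea (not Airflow's actual API), the standard library's graphlib can order a hypothetical extract → transform → load graph:

```python
from graphlib import TopologicalSorter

# Tasks mapped to the tasks they depend on -- the same shape as
# `extract >> transform >> load` dependencies in an Airflow DAG.
dependencies = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"load"},
}

def run(dependencies, actions):
    """Execute tasks in dependency order, like a single-threaded scheduler."""
    order = list(TopologicalSorter(dependencies).static_order())
    for task in order:
        actions[task]()
    return order

log = []
actions = {name: (lambda n=name: log.append(n)) for name in dependencies}
print(run(dependencies, actions))
# ['extract', 'transform', 'load', 'notify']
```

A real orchestrator adds retries, scheduling, and parallelism on top, but the dependency-ordered execution is the core concept.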

Data Platforms & Tools

  • Geospatial tools and analytical platforms (e.g., QGIS)
  • Handling of structured and unstructured data formats (CSV, Excel, Shapefiles)
  • Search and indexing technologies (COTS and custom solutions)
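Geospatial processing of the kind QGIS and Shapefiles suggest often comes down to primitives such as point-in-polygon tests. A standard ray-casting sketch, with made-up coordinates:

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count crossings of a ray from (x, y) heading right.

    `polygon` is a list of (x, y) vertices; an odd crossing count means
    the point lies inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # X coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

Libraries such as GeoPandas or PostGIS provide this (and much more) in optimized form; the sketch only shows the geometry underneath.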

Cloud & Infrastructure

  • Cloud platforms (GCP preferred; BigQuery experience is a strong plus)
  • Familiarity with data warehouses, data lakes, and distributed data systems
  • Message queuing and stream processing systems

DevOps & Tooling

  • Linux-based environments and shell scripting
  • Version control, CI/CD practices, and automated workflows
  • AI-powered development tools (e.g., GitHub Copilot, Openspec)

Required Qualifications

  • 3–5 years of experience in data engineering, data pipelines, or related fields
  • Strong proficiency in SQL and experience working with relational databases (PostgreSQL preferred)
  • Advanced experience using Python for data processing (including spatial and non-spatial data)
  • Experience building and optimizing data pipelines and architectures
  • Hands-on experience with ETL orchestration tools (Airflow or Cloud Composer preferred)
  • Experience with data transformation tools (DBT preferred)
  • Experience working with unstructured and legacy data formats
  • Strong analytical skills and experience working with large, complex datasets
  • Experience performing root cause analysis and improving data processes
  • Familiarity with distributed systems, message queues, or stream processing
  • Experience working in Linux environments and using command-line tools
  • Strong communication skills and ability to collaborate across teams
  • Proactive mindset with a focus on ownership and continuous improvement

Nice to Have

  • Experience with GCP and BigQuery administration
  • Experience with geospatial data and tools
  • Familiarity with AI-enabled data systems and agent-based architectures
  • Experience automating SDLC processes using AI tools
  • Experience integrating data platforms with analytics and BI tools
  • Experience in high-growth or fast-paced environments

Qualities We're Looking For

  • Results-driven mindset (GTD): Ability to identify next actions, communicate clearly, and execute efficiently
  • Ownership mentality: Strong sense of accountability and decision-making ability
  • Builder mindset: Passion for creating scalable, impactful data solutions
  • Curiosity and continuous improvement: Always seeking better ways to solve problems
  • Team collaboration: Comfortable working across teams and supporting diverse stakeholders
  • Bonus: You enjoy coffee, love software and products, and bring a good sense of humor

Skills

  • Python
  • Linux
  • GCP
  • SQL
  • dbt
  • CI/CD
  • PostgreSQL
  • BigQuery
  • Airflow
  • GitHub Copilot
  • QGIS
  • Cloud Composer

Possible interview questions

Testing experience with the core stack and understanding of pipeline architecture.

Tell us about the most complex ETL pipeline you have designed with Python and Airflow: what scalability problems did you run into?

The vacancy emphasizes AI Enablement.

How do you plan to use AI tools (for example, GitHub Copilot or MCP servers) to automate the data development lifecycle?

The description mentions QGIS and Shapefiles.

What is your experience with geospatial data, and how do you optimize SQL queries for processing large volumes of spatial information?

Important for ensuring system reliability.

Describe your approach to data quality assurance and pipeline monitoring in cloud infrastructure (GCP/BigQuery).

Testing problem-solving skills in real-world conditions.

Give an example of a time you had to perform a root cause analysis of a critical data failure. How did you prevent it from recurring?
