Data Engineer (Contract)
An interesting project with a modern technology stack and the option of remote work. The role scores high on strategic significance, though the contract status (4-6 months) may be a drawback for those seeking stability.
Role Difficulty
The high difficulty stems from the requirement of 10+ years of experience and deep expertise in a specific stack (Databricks, Delta Lake, Spark). The role involves not only development but also strategic client consulting.
Salary Analysis
For a Senior/Lead Data Engineer position working remotely from the LatAm region, market rates are typically higher than local on-site salaries but below US rates. The proposed range reflects the international level for experienced contractors.
Cover Letter
I am writing to express my strong interest in the Data Engineer contract position at Able. With over a decade of experience in building enterprise-scale data systems and deep expertise in the Databricks ecosystem, I am confident in my ability to deliver the robust, scalable solutions your clients require. My background in designing modern lakehouse architectures using Delta Lake and PySpark aligns perfectly with the strategic technical leadership this role demands.
Throughout my career, I have excelled in client-facing roles, translating complex technical requirements into actionable business value. I have extensive experience managing ETL/ELT pipelines at scale, implementing Unity Catalog for governance, and optimizing cloud-native services across AWS and GCP. Furthermore, my familiarity with healthcare data standards like FHIR and HL7 would allow me to contribute effectively to specialized projects within your portfolio.
I am particularly drawn to Able’s mission of accelerating software development through applied AI. I am eager to bring my technical leadership to your Engineering team and help drive the next chapter of growth for your partners in the LatAm region. Thank you for your time and consideration.
Job Description
Data Engineer
Our Story
Over the past several years, Able has grown immeasurably. We've also evolved in the kind of company we are:
Chapter 1: We were founded in 2013 as a product and engineering hub for a portfolio of early-stage start-ups. We grew up as an in-house/external hybrid shared services model. That allowed us to hone our skills and establish our operational and cultural foundation.
Chapter 2: In 2019 we began to expand our vision. We began to grow outside of our existing partner base. We had good initial success meeting new partners, kicking off new relationships, and delivering high-value work.
Chapter 3: In 2023, we moved into the next phase of a new chapter, an expansion of the ambition of Chapter 2. Our strategy for growth centers around two audiences:
- Venture Capital: VC firms are looking for trusted product and technology solutions to distribute seamlessly across their portfolios at scale.
- Private Equity: PE firms are looking for trusted solutions that can catalyze growth for their portfolio companies at scale.
Chapter 3a: We are now in the next phase of Chapter 3, aligned to our mission and vision, and accelerated by the powers of applied AI. We believe that AI will be a powerful force in the end-to-end software development lifecycle. Specifically, we are creating practices that, coupled with our world-class talent, can deliver software significantly faster than legacy techniques. The result is increased value for our partners, who can dramatically increase the capacity of their product organizations.
About the Role
Supporting the Director of Engineering and the broader Engineering team, this Data Engineer role will work cross-functionally with teams across Able while aligning closely with the Engineering discipline. This role will partner directly with a specific client and collaborate across Product, Design, and Engineering to deliver robust and scalable data solutions that meet critical business needs.
Day-to-Day Responsibilities
Strategic Architecture Leadership
- Shape large-scale data architecture vision and roadmap across client engagements
- Establish governance, security frameworks, and regulatory compliance standards
- Lead strategy around platform selection, integration, and scaling
- Guide organizations in adopting data lakehouse and federated data models
Client/Partner Value Creation
- Lead technical discovery sessions to understand client needs
- Translate complex architectures into clear, actionable value for stakeholders
- Build trusted advisor relationships and guide strategic decisions
- Align architecture recommendations with business growth and goals
Technical Architecture & Implementation
- Design and implement modern data lakehouse architectures with Delta Lake and Databricks
- Build and manage ETL/ELT pipelines at scale using Spark (PySpark preferred); see the sketch after this list
- Leverage Delta Live Tables, Unity Catalog, and schema evolution features
- Optimize storage and queries on cloud object storage (e.g., AWS S3, Azure Data Lake)
- Integrate with cloud-native services like AWS Glue, GCP Dataflow, and Azure Synapse Analytics
- Implement data quality monitoring, lineage tracking, and schema versioning
- Build scalable pipelines with tools like Apache Airflow, Step Functions, and Cloud Composer
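To make the pipeline bullets above concrete, here is a minimal, hedged sketch of a Delta Lake upsert step in PySpark. The bucket paths, table layout, and `event_id` key are all hypothetical, and the session assumes the open-source delta-spark package (on Databricks the configured `spark` session is already provided).

```python
# Minimal bronze-to-silver upsert sketch; paths and the event_id key are
# hypothetical, and the session assumes the delta-spark package is installed.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Ingest raw events from object storage (illustrative path).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Light cleanup before landing the data in the silver layer.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

target = "s3://example-bucket/silver/events"
if DeltaTable.isDeltaTable(spark, target):
    # Idempotent upsert keyed on event_id.
    (DeltaTable.forPath(spark, target).alias("t")
        .merge(clean.alias("s"), "t.event_id = s.event_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    # First load; mergeSchema tolerates columns added later.
    clean.write.format("delta").option("mergeSchema", "true").save(target)
```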
Business Impact & Solution Design
- Develop cost-optimized, scalable, and compliant data solutions
- Design POCs and pilots to validate technical approaches
- Translate business requirements into production-ready data systems
- Define and track success metrics for platform and pipeline initiatives
What We’re Looking For
The ideal candidate will have:
- 10+ years of data engineering experience with enterprise-scale systems
- Expertise in Apache Spark and Delta Lake, including ACID transactions, time travel, Z-ordering, and compaction
- Deep knowledge of Databricks (Jobs, Clusters, Workspaces, Delta Live Tables, Unity Catalog)
- Experience building scalable ETL/ELT pipelines using tools like Airflow, Glue, Dataflow, or ADF
- Advanced SQL for data modeling and transformation
- Strong programming skills in Python (or Scala)
- Hands-on experience with data formats such as Parquet, Avro, and JSON
- Familiarity with schema evolution, versioning, and backfilling strategies
- Working knowledge of at least one major cloud platform:
  - AWS (S3, Athena, Redshift, Glue Catalog, Step Functions)
  - GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub) – nice to have
  - Azure (Synapse, Data Factory, Azure Databricks) – nice to have
- Experience designing data architectures with real-time or streaming data (Kafka, Kinesis)
- Consulting or client-facing experience with strong communication and leadership skills
- Experience with data mesh architectures and domain-driven data design
- Knowledge of metadata management, data cataloging, and lineage tracking tools
- Familiarity with healthcare standards (e.g., HL7, FHIR, DICOM) is a plus
- Awareness of international data privacy regulations and compliant system design
Nice to have:
- Master's degree in Computer Science, Data Engineering, or related field
- ML Ops experience or integrating machine learning models into data pipelines
- Relevant certifications in cloud platforms or data engineering
This is a contract position, 100% remote within LatAm. Strong verbal and written communication skills in English are required.
The contract period is for 4-6 months, starting in August 2025. Candidates are expected to work 40 hours per week during this contract period and to be available during normal business hours as needed on this project.
A contract extension is possible, pending our client partnership and individual performance. Interest in a future long-term contract position is considered an asset.
Able's Values
- Put People First: We're caring, open, and encouraging. We respect the richness that we each bring into our work.
- Imagine Better: We are optimistic in our outlook, as well as creative and proactive to deliver the highest quality.
- Expect Excellence: We commit to each other to always strive to be our best.
- Simplify to Solve: We create better outcomes by reducing complexity.
- We are all Builders: We are motivated and empowered to help build Able and our partners' businesses.
- One Able. Many Voices: Our unity is our strength. Our diversity is our energy.
*Let’s build together.*
Skills
- Apache Spark
- Delta Lake
- Databricks
- Python
- SQL
- Apache Airflow
- AWS Glue
- PySpark
- ETL
- Data Architecture
- AWS S3
- Google BigQuery
- Azure Synapse Analytics
- Kafka
- HL7
- FHIR
Possible Interview Questions
Given the emphasis on Databricks, it is important to understand how the candidate manages access and metadata at organizational scale.
Tell us about your experience implementing Unity Catalog: what were the main challenges you faced during migration or when configuring data governance?
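A strong answer often comes down to explicit, group-based grants. As a hedged illustration, run inside a Databricks workspace where `spark` is predefined; the catalog, schema, table, and group names below are all hypothetical:

```python
# Least-privilege, group-based access in Unity Catalog via Spark SQL.
# All object and group names below are hypothetical.
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data-readers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.clinical TO `data-readers`")
spark.sql("GRANT SELECT ON TABLE analytics.clinical.encounters TO `data-readers`")
```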
The vacancy requires deep knowledge of Delta Lake to ensure data reliability.
How do you optimize the performance of Delta tables? In which cases would you prefer Z-Ordering over partitioning?
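For context, the contrast shows up in code roughly as follows. This is a hedged sketch using the Delta Lake 2.x Python API with a `spark` session as in the earlier sketch; the table and column names are hypothetical. Z-ordering suits high-cardinality filter columns, while partitioning suits low-cardinality ones.

```python
# Hedged sketch, Delta Lake 2.x Python API; names are hypothetical.
from delta.tables import DeltaTable

events = DeltaTable.forName(spark, "analytics.clinical.events")

# OPTIMIZE compacts small files; Z-ordering co-locates rows by patient_id,
# a high-cardinality column, so selective filters can skip more files.
events.optimize().executeZOrderBy("patient_id")

# Partitioning, by contrast, fits low-cardinality columns such as a date:
# df.write.format("delta").partitionBy("event_date").saveAsTable("...")
```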
The role involves working with clients and making technology choices.
Describe a situation where you had to convince client-side stakeholders to choose a specific data architecture (for example, a lakehouse instead of a traditional DWH). Which arguments proved decisive?
Working with cloud storage requires cost control.
What cost-optimization strategies do you apply when designing large-scale data processing pipelines in AWS or Azure?
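To give one concrete flavor of an answer, the sketch below sets two common storage-cost levers on a Databricks Delta table. The table name and retention window are hypothetical, and the auto-optimize properties assume a Databricks runtime.

```python
# Hedged sketch of storage-cost levers on a Databricks Delta table.
# Table name and retention window are hypothetical.
spark.sql("""
    ALTER TABLE analytics.clinical.events SET TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact' = 'true',
        'delta.deletedFileRetentionDuration' = 'interval 7 days'
    )
""")
# VACUUM then reclaims data files no longer referenced within the window.
spark.sql("VACUUM analytics.clinical.events")
```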
The description mentions healthcare standards.
Do you have experience working with data in FHIR or HL7 formats, and what specific data quality and security requirements does this impose on ETL processes?
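As a hedged illustration of what FHIR-aware ETL can look like, the sketch below flattens FHIR R4 Patient resources stored as newline-delimited JSON. The path and field selection are illustrative only; a real pipeline would add PHI access controls and schema validation.

```python
# Flattening FHIR R4 Patient resources (newline-delimited JSON) into a
# tabular projection; the path and field choices are illustrative only.
from pyspark.sql import functions as F

patients = spark.read.json("s3://example-bucket/raw/fhir/Patient/")

flat = patients.select(
    F.col("id").alias("patient_id"),
    F.col("gender"),
    F.col("birthDate").alias("birth_date"),
    # FHIR allows repeating names; take the first entry for simplicity.
    F.col("name")[0]["family"].alias("family_name"),
)
```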