Senior Data Engineer
An attractive opening at a stable, well-funded company ($100M+ raised). The role offers a modern technology stack and the possibility of stock options, which is rare for regional hiring.
Role difficulty
The role demands deep expertise in distributed computing (PySpark), cloud data warehouses (Snowflake, Databricks), and infrastructure as code (Terraform). The bar is high because it requires not only development but also the architectural design of complex pipelines in a high-load environment.
Salary analysis
A Senior-level position in Nepal at an international company typically pays well above the local market, approaching international benchmarks for remote employees and outsourcing hubs. The stated range reflects market expectations for experienced engineers in this region working for Western product companies.
Cover letter
I am writing to express my strong interest in the Senior Data Engineer position at Abacus Insights. With over 5 years of experience building large-scale distributed data systems and deep expertise in PySpark, Databricks, and Snowflake, I am confident in my ability to drive meaningful technical impact within your Tech Ops division. My background in architecting end-to-end ingestion frameworks and optimizing cloud-native data platforms aligns well with your mission to transform healthcare data into a usable, trusted foundation.
Throughout my career, I have focused on establishing engineering best practices, including CI/CD for data pipelines and automated testing, which I understand to be a key focus of this role. I am particularly drawn to Abacus Insights because of your commitment to breaking down data silos and enabling GenAI use cases in the healthcare industry. I am eager to bring my technical leadership and passion for performance optimization to your team to help improve outcomes and deliver better experiences for members and providers alike.
Job description
About Us
Abacus Insights is transforming how data works for health plans. Our mission is simple: make healthcare data usable, so the people responsible for care and cost decisions can act faster, with confidence.
We help health plans break down data silos to create a single, trusted data foundation. That foundation powers better decisions, so plans can improve outcomes, reduce waste, and deliver better experiences for members and providers alike.
Backed by $100M from top investors, we’re tackling big challenges in an industry that’s ready for change. Our platform enables GenAI use cases by delivering clean, connected, and reliable healthcare data that can support automation, prioritization, and decision workflows—and it’s why we are leading the way.
Our innovation begins with people. We are bold, curious, and collaborative—because the best ideas come from working together. Ready to make an impact? Join us and let's build the future together.
About the role
We are seeking an accomplished Senior Data Engineer to join our dynamic and rapidly expanding Tech Ops division. With significant projected growth, this is an opportunity to drive meaningful technical impact. In this role, you will work with internal engineering teams to design, implement, and optimize complex data integration solutions within a modern, large‑scale cloud environment.
You will leverage advanced skills in distributed computing, data architecture, and cloud-native engineering to enable scalable, resilient, and high‑performance data ingestion and transformation pipelines. As a trusted technical advisor, you will ensure high-quality, compliant data operations across the lifecycle.
Your day to day
- Architect, design, and implement high‑volume batch and real‑time data pipelines using PySpark, SparkSQL, Databricks Workflows, and distributed processing frameworks.
- Build end‑to‑end ingestion frameworks integrating Databricks, Snowflake, AWS services (S3, SQS, Lambda), and vendor APIs, ensuring data quality, lineage, and schema evolution.
- Design and optimize data models (star/snowflake schemas) and apply performance tuning techniques for analytical workloads on cloud data warehouses.
- Translate complex business requirements into detailed technical specifications, reusable engineering components, and implementation artifacts.
- Establish and enforce data engineering best practices, including CI/CD for data pipelines, version control, automated testing, orchestration, logging, and observability.
- Drive performance and cost optimization through profiling, cluster tuning, partitioning, indexing, caching, and compute optimization across Databricks and Snowflake.
- Ensure operational excellence and team growth by producing high‑quality documentation (runbooks, architecture diagrams), monitoring and troubleshooting production pipelines, performing root‑cause analysis, and mentoring junior engineers.
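The data-quality gates implied by the responsibilities above can be sketched as a minimal, framework-agnostic validation step that runs before data is committed downstream. This is an illustrative sketch in plain Python, not the company's actual tooling; the column names (`member_id`, `claim_amount`) are hypothetical.

```python
# Minimal data-quality gate for an ingestion pipeline: reject rows with
# missing required fields or invalid values, and report what was rejected.
# Column names are hypothetical examples, not a real schema.

def validate_batch(rows, required=("member_id", "claim_amount")):
    """Split a batch into (valid_rows, errors), where each error records
    the row index and the reason it was rejected."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [c for c in required if row.get(c) is None]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
        elif row["claim_amount"] < 0:
            errors.append((i, "negative claim_amount"))
        else:
            valid.append(row)
    return valid, errors

batch = [
    {"member_id": "M1", "claim_amount": 120.0},
    {"member_id": None, "claim_amount": 50.0},
    {"member_id": "M3", "claim_amount": -5.0},
]
valid, errors = validate_batch(batch)
print(len(valid), len(errors))  # 1 2
```

In a real pipeline the same check would run as a task in the orchestrator (e.g. a Databricks Workflow or Airflow DAG), with the rejected rows routed to a quarantine table for root-cause analysis.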
What you bring to the team
- Bachelor’s degree in Computer Science, Computer Engineering, or a closely related technical field, with 5+ years of hands‑on experience as a Data Engineer building and operating large‑scale, distributed data systems in modern cloud environments.
- Proven ability to clearly communicate complex technical concepts and solutions to both technical and non‑technical stakeholders.
- Expert‑level proficiency in Python, SQL, and PySpark, including development of distributed transformations and performance‑optimized queries.
- Demonstrated experience designing, building, and operating ETL/ELT pipelines using Databricks, Airflow, or similar orchestration and workflow automation tools.
- Proven experience architecting or operating large‑scale data platforms using DBT, Kafka, Delta Lake, and event‑driven or streaming architectures in cloud‑native data or platform engineering environments.
- Strong working knowledge of AWS data services (S3, SQS, Lambda, Glue, IAM or equivalents), structured and semi‑structured data formats (Parquet, ORC, JSON, Avro), schema evolution, and optimization techniques.
- Hands-on experience with Terraform and CI/CD pipelines (e.g., GitLab), deep expertise in SQL and compute optimization (partitioning, clustering, Z-Ordering, pruning, caching), and performance tuning on cloud data warehouses such as Snowflake (preferred), BigQuery, or Redshift.
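As a rough illustration of the partition pruning named in the last requirement: when files are laid out by a partition key, a filter on that key can skip whole partitions before any data is read. The sketch below is plain Python with invented paths, standing in for what Spark or Snowflake does internally.

```python
# Plain-Python illustration of partition pruning: files are grouped by a
# partition key (here, a date), so a predicate on that key eliminates whole
# partitions without opening a single file. Paths and dates are made up.

files_by_partition = {
    "2024-01-01": ["p=2024-01-01/part-0.parquet"],
    "2024-01-02": ["p=2024-01-02/part-0.parquet", "p=2024-01-02/part-1.parquet"],
    "2024-01-03": ["p=2024-01-03/part-0.parquet"],
}

def files_to_scan(predicate):
    """Return only the files in partitions whose key passes the predicate."""
    return [f for key, fs in files_by_partition.items()
            if predicate(key) for f in fs]

print(files_to_scan(lambda d: d >= "2024-01-02"))  # 3 of 4 files survive
```

The same idea is why choosing the partition (or clustering) key to match the dominant filter column is usually the single biggest cost lever on Databricks and Snowflake.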
What we would like to see, though not required:
- Working knowledge of U.S. healthcare data domains—including claims, eligibility, and provider datasets—and experience applying this knowledge to complex ingestion and transformation workflows.
What you’ll get in return
- Competitive Leave & Benefits
- Comprehensive health coverage
- Equity for every employee – share in our success
- Growth-focused environment – your development matters here
Working Arrangements
- Standard hours: 9 hours/day, 5 working days
- Location: Onsite
- Shift: 10 AM – 7 PM local time
Our Commitment as an Equal Opportunity Employer
As a mission-led technology company helping to drive better healthcare outcomes, Abacus Insights believes that the best innovation and value we can bring to our customers comes from diverse ideas, thoughts, experiences, and perspectives. Therefore, we dedicate resources to building diverse teams and providing equal employment opportunities to all applicants. Abacus prohibits discrimination and harassment regarding race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
At the heart of who we are is a commitment to continuously and intentionally building an inclusive culture—one that empowers every team member across the globe to do their best work and bring their authentic selves. We carry that same commitment into our hiring process, aiming to create an interview experience where you feel comfortable and confident showcasing your strengths. If there’s anything we can do to support that—big or small—please let us know.
Skills
- Python
- SQL
- PySpark
- Databricks
- Snowflake
- AWS
- S3
- SQS
- AWS Lambda
- Airflow
- dbt
- Kafka
- Delta Lake
- Terraform
- CI/CD
- GitLab
- JSON
- Avro
Possible interview questions
Tests deep understanding of how Spark works and the ability to optimize performance.
Tell us about your experience optimizing PySpark applications. Which techniques (e.g., Z-Ordering, partitioning, broadcast joins) have you used to eliminate bottlenecks?
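For context on the broadcast joins this question mentions, here is a plain-Python analogue: the small dimension table is shipped to every worker as an in-memory map, so the large fact table is joined locally, partition by partition, without a shuffle (in PySpark this is hinted with `pyspark.sql.functions.broadcast`). Table contents below are invented for illustration.

```python
# Plain-Python analogue of a broadcast (map-side) join. The small side is
# held as a dict on every worker; the large side never moves across the
# network. All table contents are illustrative.

dim_providers = {"P1": "Clinic A", "P2": "Clinic B"}  # small, "broadcast" side

fact_claims = [  # large side, processed where it already lives
    {"claim_id": 1, "provider_id": "P1", "amount": 100},
    {"claim_id": 2, "provider_id": "P2", "amount": 250},
    {"claim_id": 3, "provider_id": "P9", "amount": 75},  # no matching provider
]

joined = [
    {**claim, "provider_name": dim_providers[claim["provider_id"]]}
    for claim in fact_claims
    if claim["provider_id"] in dim_providers  # inner-join semantics
]
print(joined)
```

A strong answer would also cover when broadcasting backfires (the small side no longer fits in executor memory) and how partitioning and Z-Ordering address skew and data skipping instead.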
Assesses data architecture design skills.
How would you design a scalable data processing system that must support both batch and streaming ingestion from various APIs and S3?
Tests experience with modern cloud data warehouses.
In your experience, what are the main differences in performance and cost optimization approaches between Snowflake and Databricks?
Evaluates the maturity of the candidate's engineering processes.
How do you set up CI/CD and automated testing for data pipelines to ensure data integrity and quality in production?
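One concrete shape such automated testing can take is a unit test over a pure transformation function, runnable in any CI job before the pipeline is deployed. The deduplication logic below is a hypothetical example of logic worth gating in CI, not the company's actual pipeline.

```python
# Sketch of a CI-friendly unit test for a pipeline transformation.
# The transformation (deduplicate by key, keep the latest version) is a
# hypothetical example; field names are invented.

def dedupe_latest(rows, key="id", version="updated_at"):
    """Keep only the row with the highest version value for each key."""
    latest = {}
    for row in rows:
        k = row[key]
        if k not in latest or row[version] > latest[k][version]:
            latest[k] = row
    return list(latest.values())

def test_dedupe_latest():
    rows = [
        {"id": "A", "updated_at": 1, "value": "old"},
        {"id": "A", "updated_at": 2, "value": "new"},
        {"id": "B", "updated_at": 1, "value": "only"},
    ]
    out = {r["id"]: r["value"] for r in dedupe_latest(rows)}
    assert out == {"A": "new", "B": "only"}

test_dedupe_latest()
print("ok")
```

Because the function takes and returns plain rows, the same test runs unchanged in a GitLab CI job, while an integration stage exercises the PySpark equivalent against a small fixture dataset.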
Tests leadership and teamwork skills.
Describe a time you had to explain a complex technical decision to non-technical stakeholders. How did you adapt your communication?
Similar vacancies
Senior Associate, Data Platform Engineer (Snowflake)
Senior Data Analyst
Senior Data Engineer | Bees Personalization
Senior Tableau Engineer
Senior Tableau Engineer
Staff Product Analyst, Consumer Analytics (f/m/x)