Senior Data Engineer
A strong offer from a stable, well-funded fintech unicorn. Fully remote work, a modern stack (GCP, Iceberg, Trino), and a competitive package (stock options, stipends) make this vacancy very attractive.
Role Complexity
The role requires deep knowledge of distributed systems (Trino, Kafka) and GCP cloud infrastructure. The high experience bar (7+ years) and the need to work with very large data volumes (>100M events per day) make this a demanding position.
Salary Analysis
The posting does not list a salary, but for a remote Senior Data Engineer in the US/LATAM the market range is roughly $140k–$190k. As a post-Series D company, Alpaca typically offers compensation in the top decile of the market.
Cover Letter
I am writing to express my strong interest in the Senior Data Engineer position at Alpaca. With over 7 years of experience in data engineering and a proven track record of building scalable platforms that handle hundreds of millions of events daily, I am confident in my ability to contribute to your mission of opening financial services to everyone. My expertise in GCP, Apache Iceberg, and Trino aligns perfectly with your current tech stack and your vision for a robust Data Lakehouse architecture.
In my previous roles, I have successfully implemented complex ETL/ELT patterns and managed large-scale data transformations using dbt and Airflow. I am particularly drawn to Alpaca's commitment to open-source solutions and your distributed-first culture. I am eager to bring my experience in low-latency data processing and infrastructure-as-code to help Alpaca scale its data management layer as you expand into new jurisdictions and onboard larger institutional customers.
Job Description
Who We Are:
Alpaca is a US-headquartered self-clearing broker-dealer and brokerage infrastructure for stocks, ETFs, options, crypto, fixed income, 24/5 trading, and more. Our recent Series D funding round brought our total investment to over $320 million, fueling our ambitious vision.
Amongst our subsidiaries, Alpaca is a licensed financial services company, serving hundreds of financial institutions across 40 countries with our institutional-grade APIs. This includes broker-dealers, investment advisors, wealth managers, hedge funds, and crypto exchanges, totalling over 9 million brokerage accounts.
Our global team is a diverse group of experienced engineers, traders, and brokerage professionals who are working to achieve our mission of opening financial services to everyone on the planet. We're deeply committed to open-source contributions and fostering a vibrant community, continuously enhancing our award-winning, developer-friendly API and the robust infrastructure behind it.
Alpaca is proudly backed by top-tier global investors, including Portage Ventures, Spark Capital, Tribe Capital, Social Leverage, Horizons Ventures, Unbound, SBI Group, Derayah Financial, Elefund, and Y Combinator.
Our Team Members:
We're a dynamic team of 230+ globally distributed members who thrive working from our favorite places around the world, with teammates spanning the USA, Canada, Japan, Hungary, Nigeria, Brazil, the UK, and beyond!
We're searching for passionate individuals eager to contribute to Alpaca's rapid growth. If you align with our core values—Stay Curious, Have Empathy, and Be Accountable—and are ready to make a significant impact, we encourage you to apply.
Your Role: We are seeking a Senior Data Platform Engineer to design and develop the data management layer for our platform to ensure its scalability as we expand to larger customers and new jurisdictions. At Alpaca, data engineering encompasses financial transactions, customer data, API logs, system metrics, augmented data, and third-party systems that impact decision-making for both internal and external users. We process hundreds of millions of events daily, with this number growing as we onboard new customers.
We prioritize open-source solutions in our data management approach, leveraging a Google Cloud Platform (GCP) foundation for our data infrastructure. This includes batch/stream ingestion, transformation, and consumption layers for BI, internal use, and external third-party sinks. Additionally, we oversee data experimentation, cataloging, and monitoring and alerting systems.
Our team is 100% distributed and remote.
Responsibilities:
- Design and oversee key forward- and reverse-ETL patterns to deliver data to relevant stakeholders.
- Develop scalable patterns in the transformation layer to ensure repeatable integrations with BI tools across various business verticals.
- Expand and maintain the Alpaca Data Lakehouse architecture's constantly evolving elements.
- Collaborate closely with sales, marketing, product, and operations teams to address key data flow needs.
- Operate the system and manage production issues in a timely manner.
Must-Haves:
- 7+ years of experience in data engineering, including 2+ years of building scalable, low-latency data platforms capable of handling >100M events/day.
- Proficiency in at least one programming language, with strong working knowledge of Python and SQL.
- Experience with cloud-native technologies like Docker, Kubernetes, and Helm.
- Strong hands-on experience with relational database systems and object storage implementations like Apache Iceberg.
- Strong hands-on experience with Google Cloud Platform and its various data-related services (Composer, Dataproc, Datastream, etc.)
- Experience in building scalable transformation layers, preferably through formalized SQL models (e.g., dbt).
- Ability to work in a fast-paced environment and adapt solutions to changing business needs.
- Experience with ETL orchestrators / frameworks like Apache Airflow and Airbyte.
- Production experience with streaming systems like Kafka.
- Exposure to infrastructure, DevOps, and Infrastructure as Code (IaC) tools like Terraform.
- Deep knowledge of distributed systems, storage, transactions, and query processing utilizing open-source distributed query engines like Trino (formerly PrestoSQL).
If you're passionate about data engineering and thrive in a dynamic startup environment, we'd love to hear from you!
How We Take Care of You:
- Competitive Salary & Stock Options
- Health Benefits
- New Hire Home-Office Setup: One-time USD $500
- Monthly Stipend: USD $150 per month via a Brex Card
Alpaca is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse workforce.
Skills
- Python
- SQL
- Docker
- Kubernetes
- Helm
- Apache Iceberg
- Google Cloud Platform
- dbt
- Apache Airflow
- Airbyte
- Kafka
- Terraform
- Trino
- ETL
- Data Lakehouse
Possible Interview Questions
Probes the high-load experience claimed in the requirements.
Tell us about your most challenging case of scaling a pipeline that processes more than 100M events per day. What bottlenecks did you run into?
Alpaca uses Iceberg and Trino; it is important to gauge command of these tools.
What advantages does Apache Iceberg combined with Trino offer for building a Data Lakehouse compared to traditional warehouses?
The position requires DevOps and IaC skills.
How would you organize CI/CD for data infrastructure using Terraform and Kubernetes?
Fintech work demands high data accuracy.
How do you ensure data consistency and error handling in Kafka-based distributed streaming systems?
Probes architecture design skills.
Describe your approach to designing the data transformation layer (dbt) to ensure repeatability and ease of reuse in BI tools.
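The Kafka consistency question above typically leads into delivery semantics: since Kafka consumers see at-least-once delivery by default, duplicates can arrive after a retry or rebalance, and one common answer is to make processing idempotent. A minimal, stdlib-only sketch of that idea (all names hypothetical; no real Kafka client is involved):

```python
from typing import Iterable, List, Set


def process_events(events: Iterable[dict], seen_ids: Set[int]) -> List[dict]:
    """Apply each event at most once, deduplicating by a unique event ID.

    With at-least-once delivery, the same event can be redelivered after a
    consumer retry or rebalance; tracking processed IDs makes the consumer
    idempotent, so duplicates cannot corrupt downstream results.
    """
    applied = []
    for event in events:
        event_id = event["id"]      # assumed unique per event
        if event_id in seen_ids:
            continue                # duplicate delivery: skip
        seen_ids.add(event_id)      # in production: a transactional store
        applied.append(event)
    return applied


# Simulated redelivery: event 2 arrives twice, as it can under at-least-once.
seen: Set[int] = set()
batch1 = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
batch2 = [{"id": 2, "amount": 20}, {"id": 3, "amount": 30}]  # id 2 redelivered
out = process_events(batch1, seen) + process_events(batch2, seen)
# out contains events 1, 2, 3 exactly once each
```

In a real system the `seen_ids` set would live in a transactional store updated atomically with the side effects (or be replaced by Kafka transactions / exactly-once semantics), but the dedup-by-key principle is the same.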