blackforestlabs
Country
Germany

On-site · Full-time

Member of Technical Staff - Image / Video Generation

AI Assessment

This is a unique opportunity to work at one of the most influential AI labs in the world (the creators of Stable Diffusion). The vacancy offers work on cutting-edge technology (FLUX) in an environment with a strong engineering culture and a commitment to open science.


This vacancy comes from Quick Offer Global, a list of international companies.

Vacancy Difficulty

Scale: Easy – Hard
AI Assessment

The role requires deep expertise in diffusion models, transformer architectures, and distributed training at massive scale. The candidate must be both a researcher and an engineer, able to carry out complex ablation studies.

Salary Analysis

Median: €120,000
Market range: €95,000 – €160,000
AI Assessment

In Germany, salaries for specialists at this level typically start at €90,000 and can significantly exceed €150,000 once bonuses and equity are included, given the fierce competition for Generative AI talent. The position is comparable to Senior/Staff levels at major tech companies.

Cover Letter

I am writing to express my strong interest in the Member of Technical Staff position at Black Forest Labs. Having followed the groundbreaking work your team has done with Latent Diffusion and the FLUX models, I am inspired by your commitment to research excellence and open science. My background in training large-scale diffusion models and conducting rigorous ablation studies aligns perfectly with your mission to push the boundaries of image and video generation.

In my previous experience, I have focused on the intersection of architectural efficiency and output quality, specifically working with PyTorch and distributed training frameworks like FSDP. I thrive in environments where empirical evidence takes precedence over intuition, and I am eager to bring my expertise in fine-tuning and model evaluation to help Black Forest Labs solve the complex trade-offs inherent in billion-parameter scale models.

I am particularly excited about the opportunity to work in Freiburg and contribute to a team that values deep technical rigor. I look forward to the possibility of discussing how my technical skills and passion for generative AI can contribute to your next generation of foundational models.



Apply to blackforestlabs now

Join the team behind Stable Diffusion and FLUX and help define the future of generative AI!

Job Description

We're the team behind Latent Diffusion, Stable Diffusion, and FLUX, the foundational technologies that changed how the world creates images and video. Our generative models power tools used by millions of creators, developers, and businesses worldwide; the FLUX models are among the most advanced anywhere, and we're just getting started.

Headquartered in Freiburg, Germany with a growing presence in San Francisco, we're scaling fast while staying true to what makes us different: research excellence, open science, and building technology that expands human creativity.

What You'll Work On

You'll train large-scale diffusion models for image and video generation, exploring new approaches while maintaining the rigor that helps us distinguish meaningful progress from incremental tweaks. This isn't about following established recipes—it's about running the experiments that clarify which architectural choices matter and which are less impactful.

**You'll be the person who:**

  • Trains large-scale diffusion transformer models for image and video data, working at the scale where intuitions break and empirical evidence matters
  • Rigorously ablates design choices—running experiments that isolate variables, control for confounds, and produce insights you can actually trust—then communicating those results to shape our research direction
  • Reasons about the speed-quality tradeoffs of neural network architectures in production settings where both constraints matter simultaneously
  • Fine-tunes diffusion models for specialized applications like image and video upscalers, inpainting/outpainting models, and other tasks where general-purpose models aren't enough

Questions We're Wrestling With

  • Which architectural choices actually matter for image and video quality, and which are just expensive distractions?
  • How do you design ablation studies that isolate the signal from the noise at billion-parameter scale?
  • What are the real speed-quality tradeoffs for different architectures—and how do they change with scale?
  • When does fine-tuning a foundation model work better than training from scratch, and why?
  • How do you evaluate generative models in ways that correlate with what users actually care about?
  • Which training techniques (FSDP configurations, precision strategies, parallelism approaches) matter for model quality versus just training speed?

These aren't solved problems—they're questions we're actively figuring out through rigorous experimentation.
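One standard way to attack the signal-vs-noise question above is to pair ablation runs on matched seeds. A toy sketch of that idea (`run_variant` and all numbers here are illustrative stand-ins, not the actual training pipeline):

```python
import random
import statistics

def run_variant(variant_gain: float, seed: int) -> float:
    """Stand-in for one full training + eval run: returns a noisy
    quality score. `variant_gain` is the true effect of the design
    choice; the Gaussian term models run-to-run noise."""
    rng = random.Random(seed)
    return variant_gain + rng.gauss(0.0, 0.5)

def paired_ablation(gain_a: float, gain_b: float, seeds: list) -> float:
    """Run both variants on the same seeds and average the paired
    differences (B - A). Sharing the seed cancels the shared noise,
    so the underlying effect is recoverable even when it is smaller
    than the per-run variance."""
    return statistics.mean(
        run_variant(gain_b, s) - run_variant(gain_a, s) for s in seeds
    )

effect = paired_ablation(0.0, 0.3, seeds=list(range(8)))
```

With unpaired runs, eight samples would leave the 0.3 effect buried in the ±0.5 noise; in this toy the pairing cancels the noise term exactly because it is identical across variants.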

What We're Looking For

You've trained large-scale diffusion models and developed strong intuitions about what matters. You know that at research scale, every design choice has tradeoffs, and the only way to know which ones are worth making is through careful ablation. You're comfortable debugging distributed training issues and presenting research findings to the team.

**You likely have:**

  • Hands-on experience training large-scale diffusion models for image and video data, with practical knowledge of common failure modes and what matters most in training
  • Experience fine-tuning diffusion models for specialized applications—upscalers, inpainting, outpainting, or other tasks where understanding the domain matters as much as understanding the architecture
  • Deep understanding of how to effectively evaluate image and video generative models—knowing which metrics correlate with quality and which are just convenient proxies
  • Strong proficiency in PyTorch, transformer architectures, and the full ecosystem of modern deep learning
  • Solid understanding of distributed training techniques—FSDP, low precision training, model parallelism—because our models don't fit on one GPU and training decisions impact research outcomes
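As a rough mental model for the sharding mentioned in the last bullet, here is a single-process numpy toy of the FSDP idea (illustrative only; real training uses `torch.distributed.fsdp` and actual collectives):

```python
import numpy as np

world_size = 4
params = np.arange(8, dtype=np.float64)      # full flat parameter vector
shards = np.array_split(params, world_size)  # each rank persistently owns one shard

# "all-gather": before forward/backward, every rank temporarily
# reconstructs the full parameter vector from the shards
gathered = np.concatenate(shards)

# each rank computes a full-size gradient on its own data batch
local_grads = [np.full(params.shape, float(rank + 1)) for rank in range(world_size)]

# "reduce-scatter": gradients are averaged across ranks, and each rank
# keeps only the slice matching the parameter shard it owns
avg_grad = np.mean(local_grads, axis=0)
grad_shards = np.array_split(avg_grad, world_size)
```

The memory saving comes from the fact that, outside the brief all-gather window, each rank stores only 1/world_size of the parameters, gradients, and optimizer state.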

**We'd be especially excited if you:**

  • Have experience writing forward and backward Triton kernels and ensuring their correctness while considering floating point errors
  • Bring proficiency with profiling, debugging, and optimizing single and multi-GPU operations using tools like Nsight or stack trace viewers
  • Know the performance characteristics of different architectural choices at scale
  • Have published research that contributed to how people think about generative models
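The kernel-correctness bullet above usually comes down to comparing a low-precision implementation against a high-precision reference with dtype-appropriate tolerances rather than exact equality. A numpy sketch of that testing pattern (the `fused_f32` "kernel" is a stand-in; a real Triton kernel would be launched on GPU):

```python
import numpy as np

def check_kernel(kernel, reference, x, rtol=1e-3, atol=1e-3):
    """Validate a low-precision kernel against a float64 reference.
    Exact equality is the wrong bar for float32/float16 kernels;
    tolerances should reflect the dtype and the reduction size."""
    expected = reference(x.astype(np.float64))
    got = np.asarray(kernel(x), dtype=np.float64)
    return bool(np.allclose(got, expected, rtol=rtol, atol=atol))

def fused_f32(x):
    """Toy "kernel": scale-and-sum reduction accumulated in float32."""
    x32 = x.astype(np.float32)
    return (x32 * np.float32(2.0)).sum(dtype=np.float32)

x = np.random.default_rng(0).standard_normal(1 << 12)
ok = check_kernel(fused_f32, lambda v: (v * 2.0).sum(), x)
```

For long reductions, accumulation order matters as well: a tolerance budget on the order of machine epsilon times the depth of the summation tree is a reasonable starting point.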

What We're Building Toward

We're not just training models—we're working to better understand what matters in generative AI through rigorous experimentation. Each ablation study helps uncover assumptions we didn't know we were making. Each architecture decision teaches us more about the tradeoffs that matter. Each training run at scale adds insights that don't show up at smaller scales. If that sounds more compelling than following established approaches, we should talk.



Skills

  • PyTorch
  • Computer Vision
  • Diffusion Models
  • Deep Learning
  • Generative AI
  • FSDP
  • Distributed Training
  • Transformer
  • Triton
  • Nvidia Nsight

Possible Interview Questions

Tests understanding of the fundamentals and hands-on experience with the architectures the company's models are built on.

Which architectural changes in Diffusion Transformers (DiT) do you consider most critical for scaling video generation compared to image generation?

Work at Black Forest Labs involves training models that don't fit on a single GPU, so familiarity with optimization tooling is essential.

Describe your experience debugging convergence problems when using FSDP or mixed-precision strategies (FP8/BF16).

The company values a scientific approach and the ability to distinguish real progress from random fluctuation.

How would you design a series of ablation tests to isolate the effect of the loss function from the effect of dataset quality when training a model with 1B+ parameters?

Evaluating generative models is one of the key challenges mentioned in the vacancy.

Beyond FID and CLIP score, which metrics or human-evaluation methods do you consider most representative for assessing temporal consistency in video?
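For reference on the FID mentioned in that question: it is the Fréchet distance between two Gaussians fitted to feature activations (in practice, Inception features of real vs. generated samples). A numpy-only sketch that takes the trace of the matrix square root via the eigenvalues of the covariance product, instead of a full `scipy.linalg.sqrtm`:

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """FID = ||mu1 - mu2||^2 + Tr(cov1) + Tr(cov2) - 2*Tr((cov1 cov2)^{1/2}).
    Only the trace of the matrix square root is needed; for a product
    of SPD covariance matrices the eigenvalues are real and >= 0, so
    Tr((cov1 cov2)^{1/2}) equals the sum of their square roots."""
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = np.asarray(mu1) - np.asarray(mu2)
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * tr_sqrt)
```

The question's point stands, though: FID compares static per-frame feature statistics and says nothing about temporal consistency across video frames.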

A bonus skill listed in the vacancy, critical for performance optimization.

Have you run into precision errors when writing custom Triton kernels, and how did you ensure their correctness?

