- Страна
- США
- Зарплата
- 76 194 $ – 122 981 $
Откликайтесь
на вакансии с ИИ

Consultant — AI Strategy, Governance & Security
Отличная позиция для старта карьеры в консалтинге нового поколения. Высокий потенциал роста за счет работы напрямую с руководителем практики и участия в создании методологии, а также конкурентная заработная плата и фокус на самых актуальных технологиях (Agentic AI, MCP).
Сложность вакансии
Роль требует редкого сочетания навыков: глубокого понимания архитектуры LLM и Python с одной стороны, и знаний в области комплаенса (ISO 42001, NIST) и бизнес-стратегии с другой. Высокая планка ответственности при работе с C-level руководством и необходимость создания инструментов с нуля повышают сложность.
Анализ зарплаты
Предложенный диапазон ($76k - $123k) полностью соответствует рыночным ожиданиям для позиции консультанта начального и среднего уровня в США, особенно в технологическом хабе Вирджинии. Нижняя граница подходит для кандидатов с минимальным опытом (2 года), в то время как верхняя отражает надбавку за дефицитные навыки в области ИИ-безопасности.
Сопроводительное письмо
I am writing to express my strong interest in the Consultant role for AI Strategy, Governance & Security. With a background that bridges technical AI development and cybersecurity, I am particularly drawn to your practice's focus on moving beyond experimentation to enterprise-scale, secure deployment. My experience with Python and LLM architectures, combined with a deep interest in frameworks like NIST AI RMF and the EU AI Act, aligns perfectly with your mission to help organizations govern AI responsibly.
In my previous work, I have consistently sought to translate complex technical risks into actionable business strategies. I am excited by the prospect of contributing to your proprietary governance frameworks and building proof-of-concept tools to demonstrate security concepts like prompt injection risks. I thrive in ambiguous, builder-oriented environments and am eager to bring my 'bridger' mindset to the Technology Enablement team at MorganFranklin Consulting.
Составьте идеальное письмо к вакансии с ИИ-агентом

Откликнитесь в morganfranklinconsultingllc уже сейчас
Станьте архитектором будущего ИИ-безопасности и постройте карьеру на стыке технологий и стратегии в MorganFranklin Consulting!
Описание вакансии
About the Role
We are building a differentiated advisory practice at the intersection of AI strategy, governance, and security. As organizations move from AI experimentation to enterprise-scale deployment, they face urgent questions: How do we govern AI responsibly? How do we secure agentic architectures? How do we align technical capabilities with business strategy and regulatory requirements?
This role is designed for a technically grounded early-career professional who wants to help answer those questions alongside senior practitioners. You will work directly with the practice lead on client-facing engagements, contribute to the development of proprietary frameworks and tools, and build expertise in one of the fastest-growing areas of consulting.
This is not a traditional strategy consulting role, and it is not a pure engineering role. It sits at the intersection—you need to be comfortable reading Python, understanding how LLMs and agentic systems work, AND translating that knowledge into governance frameworks, risk assessments, and executive-ready deliverables. If you thrive at the border between technical depth and business impact, this is for you.
What You Will Do
Client Delivery (50–60%)
- Support the design and delivery of AI governance assessments for clients across industries, leveraging frameworks such as ISO/IEC 42001, NIST AI RMF, and the EU AI Act.
- Conduct technical reviews of client AI systems, architectures, and deployments to identify governance gaps, security vulnerabilities, and risk exposure.
- Develop AI risk registers, control mappings, and maturity assessments tailored to each client’s organizational context and regulatory landscape.
- Prepare client-facing deliverables including assessment reports, executive briefings, implementation roadmaps, and policy recommendations.
- Participate in workshops and stakeholder interviews with client teams ranging from data scientists to C-suite executives.
Practice Development (25–30%)
- Help build and refine the practice’s proprietary AI governance and security framework, integrating industry standards with practical implementation experience.
- Research emerging AI risks, including those specific to agentic AI systems, LLM-based applications, MCP architectures, and AI supply chains.
- Develop reusable tools, templates, and accelerators (e.g., assessment questionnaires, control libraries, risk scoring models) to scale the practice.
- Contribute to thought leadership content: draft LinkedIn posts, white papers, blog articles, and conference presentation materials.
- Monitor regulatory and standards developments (EU AI Act, state-level AI legislation, ISO/IEC updates, OWASP LLM Top 10) and maintain a current knowledge base.
Technical Contribution (15–20%)
- Build proof-of-concept tools and demos using Python to illustrate governance and security concepts for clients (e.g., prompt injection demonstrations, model evaluation dashboards, automated compliance checks).
- Evaluate and test AI platforms, tools, and vendor solutions from a governance and security perspective.
- Support the practice lead’s technical fluency development by preparing technical briefings, annotated code walkthroughs, and “translation” materials that bridge technical and executive audiences.
- Stay hands-on with AI/ML development trends: experiment with agentic frameworks (LangChain, LangGraph, CrewAI), RAG architectures, and model evaluation techniques.
What We Are Looking For
Required Qualifications
- 2–3 years of professional experience in one or more of the following areas: AI/ML engineering, data science, or cybersecurity.
- Bachelor’s degree in Computer Science, Data Science, Information Systems, Cybersecurity, Engineering, or a related field. Master’s degree is a plus but not required.
- Working proficiency in Python and comfort navigating data science tooling (Jupyter, Pandas, Scikit-learn, or equivalent).
- Foundational understanding of machine learning concepts: supervised/unsupervised learning, model training and evaluation, overfitting, bias-variance tradeoff.
- Familiarity with LLM-based applications: understanding of how large language models work (at minimum: tokenization, embeddings, attention, fine-tuning vs. RAG vs. prompt engineering).
- Strong written and verbal communication skills—you must be able to explain technical concepts to non-technical stakeholders clearly and concisely.
- Comfort working in ambiguity: this is a practice being built, not a mature team with fully defined processes. You need to be self-directed and resourceful.
Preferred Qualifications
- Exposure to AI governance or risk management frameworks (ISO/IEC 42001, NIST AI RMF, EU AI Act, OWASP LLM Top 10, MITRE ATLAS).
- Experience with agentic AI frameworks (LangChain, LangGraph, CrewAI, AutoGen) or understanding of agent architectures and tool-use patterns.
- Understanding of MCP (Model Context Protocol) or similar protocols for AI system integration and the security implications thereof.
- Background in cybersecurity, including familiarity with security frameworks (NIST CSF, ISO 27001) and how they intersect with AI-specific risks.
- Experience in a consulting or professional services environment, including client-facing work and deliverable development.
- Relevant certifications such as: CISA, CISSP, CCSP, AWS/Azure AI certifications, or ISO 42001 Lead Implementer/Auditor.
- Bilingual English/Spanish is a strong plus.
The Kind of Person Who Thrives in This Role
We are not looking for someone who fits neatly into a single box. The ideal candidate is a “bridger”—someone who can move fluidly between a technical deep dive and an executive conversation. Specifically:
- You’re a builder, not just an analyst. When you see a gap in a process or a tool, your instinct is to prototype something—a script, a template, a dashboard, not just write a slide about it.
- You’re curious about “why” and “so what.” You don’t just want to understand how a transformer model works; you want to understand what that means for how organizations should govern and secure it.
- You write well. A significant portion of this role involves producing written work—reports, frameworks, articles—and clarity of writing is non-negotiable.
- You’re comfortable being the least experienced person in the room. You’ll be in meetings with CIOs, CISOs, and senior partners. You need the confidence to contribute and the humility to learn.
What We Offer
- Accelerated growth trajectory: You’ll be building a practice from the ground up alongside senior leadership, gaining exposure and responsibility that would take years to earn in a larger, more established team.
- Investment in your learning: Budget for certifications, training programs, and conference attendance.
- Client diversity: Work across industries and with organizations at different stages of AI maturity—from Fortune 500 companies to mid-market firms navigating their first AI initiatives.
Determining compensation for this role (and others) at Highspring depends upon a wide array of factors including but not limited to the individual’s skill sets, experience and training, licensure and certifications, office location and other geographic considerations, as well as other business and organizational needs. With that said, as required by local law, Highspring believes that the following salary range reasonably estimates the base compensation for an individual hired into this position in geographies that require salary range disclosure to be between the range below. The individual may also be eligible for a variety of bonus and financial incentives based on individual and company performance.
Base Compensation Range
$76,194—$122,981 USD
Создайте идеальное резюме с помощью ИИ-агента

Навыки
- Python
- Machine Learning
- Large Language Models
- Generative AI
- Cybersecurity
- Risk Management
- LangChain
- Data Science
- Jupyter
- Pandas
- Scikit-learn
- NIST AI RMF
- ISO 42001
- RAG
Возможные вопросы на собеседовании
Проверка технического понимания рисков современных ИИ-систем.
Можете ли вы объяснить механику атаки типа 'prompt injection' и предложить конкретные методы защиты на уровне архитектуры приложения?
Оценка знаний в области регуляторики и умения применять их на практике.
Как требования EU AI Act могут повлиять на жизненный цикл разработки ИИ-продукта в крупной финансовой организации?
Проверка навыков 'переводчика' между технарями и бизнесом.
Как бы вы объяснили CISO (директору по информационной безопасности), почему стандартных мер кибербезопасности недостаточно для защиты систем на базе агентов (agentic AI)?
Оценка опыта работы с фреймворками.
Какие ключевые различия вы видите между NIST AI RMF и ISO/IEC 42001, и в каких случаях вы бы рекомендовали один вместо другого?
Проверка проактивности и навыков разработчика.
Опишите случай, когда вы обнаружили пробел в процессе или инструменте и самостоятельно создали прототип (скрипт или дашборд) для его решения.
Похожие вакансии
MLOps Engineer (Python)
AI Engineer (CV & Navigation)
Middle, Middle+, Senior GenAI/LLM Разработчик
Middle / Senior GenAI Engineer (CV)
AI Engineer / AI Mentor
AI-специалист
1000+ офферов получено
Устали искать работу? Мы найдём её за вас
Quick Offer улучшит ваше резюме, подберёт лучшие вакансии и откликнется за вас. Результат — в 3 раза больше приглашений на собеседования и никакой рутины!
- Страна
- США
- Зарплата
- 76 194 $ – 122 981 $