Principal Applied Scientist
Location: Redmond
Posted on: June 23, 2025
Job Description:
Security is among the most critical priorities for our
customers in a world awash in digital threats, regulatory scrutiny,
and estate complexity. Microsoft Security aspires to make the world
a safer place for all. We want to reshape security and empower
every user, customer, and developer with a security cloud that
protects them with end-to-end, simplified solutions. The Microsoft
Security organization accelerates Microsoft’s mission and bold
ambitions to ensure that our company and industry are securing
digital technology platforms, devices, and clouds in our customers’
heterogeneous environments, as well as ensuring the security of our
own internal estate. Our culture is centered on embracing a growth
mindset, a theme of inspiring excellence, and encouraging teams and
leaders to bring their best each day. In doing so, we create
life-changing innovations that impact billions of lives around the
world. The Microsoft Security AI Research team develops advanced
AI-driven security solutions to protect Microsoft and its
customers. Our team combines expertise in large-scale AI, knowledge
graphs, and generative models to address evolving security
challenges across Microsoft’s complex digital environment.
Defending Microsoft’s complex environment provides a unique
opportunity to build and evaluate autonomous defense and offense
through emerging generative AI capabilities. By leveraging rich
security telemetry and operational insights from Microsoft’s Threat
Intelligence Center and Red Team, you will have access to a
one-of-a-kind environment for innovation at scale. As a Principal
Applied Scientist, you will focus on applying advanced graph
algorithms and large language models (LLMs) to automate and enhance
red-teaming operations. Deep expertise in both graph theory/graph
machine learning and large language models is essential for this
role. You will be responsible for designing and building AI systems
that combine knowledge graphs and LLMs for adversarial simulation,
attack path discovery, and threat modeling in a production
environment. While cybersecurity experience is preferred, it is not
required. Microsoft’s mission is to empower every person and every
organization on the planet to achieve more. As employees we come
together with a growth mindset, innovate to empower others, and
collaborate to realize our shared goals. Each day we build on our
values of respect, integrity, and accountability to create a
culture of inclusion where everyone can thrive at work and beyond.
Qualifications

Required/Minimum Qualifications:
- Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 6 years of related experience (e.g., statistics, predictive analytics, research), OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 4 years of related experience (e.g., statistics, predictive analytics, research), OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 3 years of related experience (e.g., statistics, predictive analytics, research), OR equivalent experience.
- 8 years of professional experience in software development and applied machine learning, including building and deploying production-quality systems.
- 3 years of hands-on experience with large language models (LLMs), such as prompt engineering, fine-tuning, or developing and deploying LLM-based applications in production.
- 3 years of hands-on experience with graph theory, graph algorithms, and graph machine learning, including practical work with large-scale graph data in real-world environments.
- Experience building, scaling, and deploying graph-based solutions and/or multi-agent frameworks (e.g., AutoGen, LangGraph, crewAI) in cloud environments.
- Ability to translate advanced graph and LLM research into production-grade software that delivers measurable business or security impact at scale.

Other Requirements:
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft background and Microsoft Cloud background checks upon hire/transfer and every two years thereafter.

Additional or Preferred Qualifications:
- Proficiency in Python is required, with significant experience developing robust, production-grade AI/ML systems using object-oriented programming.
- Ph.D. in Computer Science, Machine Learning, Mathematics, or a related field.
- Experience in cybersecurity domains such as red teaming, adversary emulation, or threat intelligence.
- Experience combining LLMs with knowledge graphs or graph-based data.
- Experience with transformer-based models and their application to graph or security data.
- Familiarity with MLOps, scalable data pipelines, and deploying research in production environments.
- Experience working with large-scale, heterogeneous datasets and graph-based security telemetry.
- Strong written and verbal communication skills; ability to present complex technical concepts clearly.
- Contributions to open-source projects or publications related to graph learning, LLMs, or security.
- Experience integrating LLMs with knowledge graphs to build high-fidelity adversarial models, enabling more advanced attack simulation and security automation.

Applied Sciences IC5 - The
typical base pay range for this role across the U.S. is USD
$139,900 - $274,800 per year. There is a different range applicable
to specific work locations, within the San Francisco Bay area and
New York City metropolitan area, and the base pay range for this
role in those locations is USD $188,000 - $304,200 per year.
Microsoft will accept applications for the role until June 20,
2025.

Responsibilities:
- Research, design, and develop advanced graph-based and LLM-powered AI systems to automate red-teaming and adversarial simulation.
- Build and maintain large-scale knowledge graphs and leverage LLMs for representing, reasoning about, and simulating attack paths, threat relationships, and mitigation strategies within Microsoft’s cloud and enterprise environments.
- Apply state-of-the-art graph algorithms, graph neural networks, and LLM techniques to real-world security data.
- Collaborate with security researchers, applied scientists, and engineers to design autonomous agents and multi-agent frameworks for security testing and incident response.
- Integrate data and insights from Microsoft’s Threat Intelligence Center, Red Team, and security telemetry to inform graph and LLM modeling and simulation.
- Contribute to research prototypes and their operationalization in production systems, with a focus on scalability and robustness.
- Although this is an individual contributor (IC) role, the Principal Applied Scientist is expected to provide technical leadership, mentor and support staff on technical aspects, and foster a collaborative, team-oriented environment.
- Develop and deploy state-of-the-art graph AI models to enhance red-teaming automation.
- Embody our culture and values.