Software Engineer II - Security for AI
Microsoft
Redmond, Washington, United States
Overview
Security represents one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
The Microsoft Security AI Engineering team is responsible for developing industry-leading, AI-driven security solutions to safeguard Microsoft and its customers. Our team brings together deep expertise in large-scale artificial intelligence, autonomous agents, and generative models to address emerging security threats in Microsoft’s complex and rapidly evolving digital environment. Defending one of the world’s most diverse enterprise environments offers an unparalleled opportunity to build, deploy, and assess autonomous red teaming and defense capabilities using cutting-edge AI techniques. By leveraging extensive security telemetry and operational insights from Microsoft’s Threat Intelligence Center and Red Team, team members have access to an exceptional environment for large-scale innovation, experimentation, and real-world impact. As a Software Engineer II - Security for AI specializing in Red Team AI Agents, you will focus on designing, building, and delivering advanced software features that leverage large language models (LLMs) and autonomous agents to automate and enhance red teaming operations and security automation. This role will involve building practical, production-grade security capabilities—including code analysis, agent-based adversarial simulation, and automated threat modeling—to help Microsoft and its customers stay ahead of evolving cyber threats. Cybersecurity experience is required for this role.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Qualifications
Required Qualifications:
- Bachelor’s degree in Computer Science, Software Engineering, or a related technical field, AND 2+ years of experience coding in Python, OR equivalent practical experience.
- Proficiency in Python, including experience applying object-oriented programming (OOP) concepts to design, develop, and maintain software systems.
- Experience with prompt engineering or developing applications utilizing large language models (LLMs), including designing, testing, and integrating prompts for LLM-based systems in production or research settings.
- Experience using version control systems (e.g., Git), participating in code reviews, and applying software engineering best practices such as testing, debugging, and familiarity with CI/CD workflows.
Other Requirements:
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
Microsoft Cloud Background Check:
- This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Software Engineering IC3 - The typical base pay range for this role across the U.S. is USD $100,600 - $199,000 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area; the base pay range for this role in those locations is USD $131,400 - $215,400 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
Microsoft will accept applications for the role until August 27, 2025.
#MSFTSecurity #MSECAI #SecurityAI #RedTeam #Cybersecurity #AIAgents #MicrosoftSecurity #LLMs
Responsibilities
- Design, develop, test, and maintain software features for security products that utilize LLM-based functionality, including prompt engineering and integration of LLM workflows (e.g., Security Copilot skills, red team automation).
- Build, maintain, and leverage knowledge graphs, attack graphs, or similar graph-based structures to enable advanced security applications such as threat modeling, adversary simulation, and automated response.
- Develop and enhance autonomous agents to support red teaming, security testing, and incident simulation in complex environments.
- Integrate security telemetry and diverse data sources into downstream security applications and graph-based models to improve detection, situational awareness, and operational efficiency.
- Collaborate with engineers, security researchers, and cross-functional team members to deliver high-quality product features, drive innovation, and meet project goals.