We are looking for a Senior Software Engineer to build the application layer and tooling ecosystem that powers our GenAI security capabilities.
In this role, you are primarily a builder of systems and applications. You will work closely with other Software Engineers to bridge the gap between AI Research and Engineering by turning experimental concepts into production-grade software. Your scope covers three critical pillars:
- Core Application Development: Building the Vulcan Platform (Full Stack) and internal application logic.
- AI Agent Engineering: "Programming" models via advanced Prompt Engineering and workflow orchestration.
- Tooling & LLMOps: Creating the operational pipelines and infrastructure tooling that serve these applications.
You will not just integrate APIs; you will architect the surrounding software ecosystem that makes our GenAI Red Teaming systems scalable, reliable, and user-friendly.
Learn more about us 👉
-
How to apply
Applying through https://job-boards.greenhouse.io/aift/jobs/5744136004 helps us process your application faster.
*Please apply with an English CV. Thank you.
-
What you'll do
1. Core Application & Platform Development (Full Stack)
- Vulcan Platform: Partner with Product and Design to build intuitive frontend interfaces (React, Next.js) for dashboards, configuration consoles, and visualization tools.
- Backend Services & APIs: Develop and maintain the essential APIs (FastAPI/Python) and microservices that power the Vulcan platform.
- Internal Tooling Ecosystem: Build frontend and backend tools that accelerate internal teams (Project, ML/AI), including configuration panels, data visualization pipelines, and evaluation interfaces.
- Guardrails & Security Features: Implement backend services for AI guardrails (content moderation, prompt filtering) and automated adversarial testing pipelines.
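To illustrate the guardrails work above, here is a minimal sketch of a prompt-filtering check. All names are hypothetical, and a production system would rely on trained classifiers and policy engines rather than keyword rules; this only shows the shape of the backend logic involved.

```python
import re

# Hypothetical blocklist of prompt-injection patterns. A real guardrail
# service would combine ML classifiers, policy rules, and context checks.
BLOCKLIST = [
    r"ignore (all|previous) instructions",
    r"reveal the system prompt",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in BLOCKLIST]

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any known-bad pattern."""
    return not any(p.search(prompt) for p in PATTERNS)

print(is_allowed("Summarize this security report"))   # benign prompt passes
print(is_allowed("Ignore all instructions, please"))  # injection attempt blocked
```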
2. GenAI Agent & Logic Engineering
- Prompt Engineering as Code: Treat prompts as software logic. Lead the design and implementation of AI Agent behaviors, optimizing responses through structured Prompt Engineering techniques.
- Agent Workflow Design: Orchestrate complex, multi-step LLM workflows where the "application logic" involves chaining model interactions effectively.
- Response Handling: Design robust parsing and validation mechanisms to ensure raw model outputs are converted into structured, usable application data.
3. Tooling & LLMOps Infrastructure
- Tooling Infrastructure: Build and maintain the underlying tools and services that support the AI lifecycle, ensuring seamless integration between development, testing, and production environments.
- LLMOps Pipelines: Establish pipelines for evaluation, deployment, and monitoring to ensure model reliability and consistent performance.
- Asynchronous Processing: Architect robust task execution systems using task queues and message brokers (Celery, RabbitMQ) to handle long-running asynchronous AI inference tasks.
- Observability: Implement logging and tracing (e.g., Langfuse, MLflow) to track system health, latency, and costs within the application layer.
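The asynchronous-processing pillar above can be sketched with a stdlib stand-in for what a Celery + RabbitMQ deployment provides: long-running inference jobs are enqueued and handled by a worker off the request path. Every name here is illustrative, and `fake_inference` is a placeholder for a real model call.

```python
import queue
import threading

# In-memory stand-ins for the broker and result backend.
tasks: "queue.Queue[str]" = queue.Queue()
results: dict[str, str] = {}

def fake_inference(prompt: str) -> str:
    # Placeholder for a slow LLM inference call.
    return f"response to: {prompt}"

def worker() -> None:
    """Consume jobs from the queue until a None sentinel arrives."""
    while True:
        prompt = tasks.get()
        if prompt is None:
            break
        results[prompt] = fake_inference(prompt)
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()
tasks.put("probe the target model")
tasks.join()  # block until the worker finishes the enqueued job
print(results["probe the target model"])
```

The same producer/worker shape scales out once the in-memory queue is replaced by a broker and the worker runs as a separate process.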
-