
Mid-level
Responsibilities
- Develop and implement generative AI and image-recognition models for product defect generation (Stable Diffusion, ControlNet, GAN and its variants), data augmentation, and quality inspection.
- Apply and fine-tune image recognition / representation learning models (e.g., YOLO, CLIP, DINOv3, and other recent open-source models).
- Perform fine-tuning, model building, tuning, and performance optimization of large models (ViT, LLM).
- Establish pipelines for data cleaning, feature engineering, model training, and evaluation, continuously improving model stability and accuracy.
- Plan and implement RAG (retrieval-augmented generation) architectures, integrating vector databases with LLMs.
- Develop and maintain AI model-serving APIs, integrating with the WPF front end and other systems.
- Research AI techniques and academic literature and apply the findings to production projects.
- Use Docker and Git for deployment and version control; handle tasks assigned by management and collaborate with the team.
- Travel abroad as projects require (roughly 1–3 trips per year, about 3–4 weeks each) for project rollout, system deployment, model tuning, or cross-team collaboration.
- Other tasks as assigned, working closely with the team to meet project goals.

Required qualifications
- 3+ years of AI / machine learning / deep learning experience; proficient in Python and C#.
- Familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras.
- Familiar with image-recognition models and algorithms, with hands-on YOLO, CLIP, or DINO (including newer versions) experience.
- Practical experience with generative models (GAN and its variants, Stable Diffusion, ControlNet) and an understanding of their underlying principles.
- Familiar with the Hugging Face ecosystem (Transformers, PEFT, Datasets, Tokenizers).
- Experience with PEFT techniques (e.g., LoRA, QLoRA, Prefix Tuning) for effective training and tuning on limited GPU resources.
- Understand how fine-tuning affects model performance, generalization, and inference cost, and how to trade these off for production needs.
- Experience with RAG architectures, prompt engineering, vector search, and at least one vector database.
- API development experience; familiar with Docker and Git.
- Skilled in data cleaning, feature engineering, model evaluation, and performance optimization.
- Good communication skills; able to work both independently and in a team.

Preferred qualifications
- Experience with product defect inspection, industrial imaging, or manufacturing AI projects.
- Experience combining image generation with image recognition (e.g., generated data to aid training).
- Experience reading papers and reproducing models.
- WPF front-end development or AI system integration experience.
- Participation in AI research, technical talks, or open-source projects.
- Experience applying contrastive learning (e.g., self-supervised learning, representation learning).
- Experience with Stable Diffusion WebUI or ComfyUI.
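The RAG architecture this role calls for can be sketched end to end in a few lines. The snippet below is a toy illustration only: it substitutes a bag-of-words term-frequency vector for a real embedding model and an in-memory list for a vector database, but the retrieve-then-assemble flow has the same shape a production pipeline follows.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a term-frequency bag of words (a real pipeline
    # would call an embedding model here instead).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank every document by similarity to the query; keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Assemble the retrieved context plus the question into one LLM prompt.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Swapping `embed` for a real embedding model and `docs` for a vector-database query is the essential production step; the control flow stays the same.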
Negotiable
5 years of experience required
Management experience not required
This role requires you to work in a shift pattern or non-standard work hours as required. This may include weekend work.

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Pune, Maharashtra, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, a related technical field, or equivalent practical experience.
- 5 years of experience in a technical role such as Technical Support, Software Engineering, or Solutions Engineering.
- Experience coding in one or more general purpose languages (e.g., Python, Java, Go, C, or C++), including data structures, algorithms, and software design.
- Experience with Artificial Intelligence (AI) concepts and Machine Learning (ML) techniques.
- Experience with computer networking (e.g., TCP/IP, DNS, load balancing, routing) and Linux/Unix system administration.

Preferred qualifications:
- Professional-level certification on Google Cloud, such as Professional Machine Learning Engineer or Professional Cloud Architect.
- Experience with Google Cloud's AI/ML product portfolio, including Vertex AI (Vertex AI Workbench, Pipelines, Endpoints, TensorBoard) and Generative AI tools (Gemini, Gen AI Studio).
- Experience in specialized ML areas like Natural Language Processing (NLP), Computer Vision, or Recommendation Systems.
- Experience with public cloud infrastructure and core services (e.g., Compute Engine, Cloud Storage, BigQuery).
- Knowledge of ML frameworks such as TensorFlow, Keras, or PyTorch.
- Ability to lead the design and implementation of AI-based solutions or debugging tools, demonstrating strong collaboration skills.

About the job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud.
Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions, and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees, and partners.

Our Technical Solutions Engineers lead and own our large and important customer issues, in addition to providing level-two support to our other support teams. You will be part of a global team that provides 24x7 support to help customers seamlessly make the switch to Google Cloud. In this role, you will troubleshoot technical problems for customers with a mix of debugging, networking, system administration, updating documentation, and, when needed, coding/scripting. You will make our products easier to adopt and use by making improvements to the product, tools, processes, and documentation. Our Technical Solutions team is driven by customers, and you will help drive the success of Google Cloud by understanding and advocating for our customers' issues.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Troubleshoot and resolve highly technical issues across the Google Cloud AI/ML portfolio, focusing on customer-reported issues, deployment failures, model performance degradation, and infrastructure-related problems.
- Work directly with customers on their ML deployments (including Generative AI models) to ensure production readiness and high availability.
- Utilize coding and scripting skills (primarily Python) to read, debug, and reproduce customer issues within their ML models (TensorFlow, PyTorch) or deployment environments (Kubernetes, Compute Engine).
- Manage customer problems through effective diagnosis, clear documentation, and the development and implementation of new investigation tools to increase diagnostic speed.
- Develop an in-depth understanding of Google Cloud's AI/ML solutions and share this knowledge to upskill the wider global support organization.
- Participate in an on-call rotation, which may include working non-standard hours, nights, or weekends as part of our global 24/7 support model.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Fullstack in Computer Vision and NLP
- Support building, testing, and deploying machine learning models and algorithms to enhance user experience.
- Build the prompting and model orchestration for a production application backed by a language model.
- Design and build agent affordances that unlock new capabilities for internal use and deployed products.
- Design and build a novel eval that measures how agents interact in groups to solve problems.
- Assist with the development of AI-powered assistant bots that automate workflows.

Job description
- Develop and test AI-driven features in collaboration with the AI engineering team.
- Support the deployment and maintenance of machine learning models and ensure their effectiveness in production environments.
- Continuously monitor AI systems and suggest improvements based on user feedback and system performance.
- Work in an agile environment, participating in sprint planning, development, and testing cycles.

Job requirements
- Bachelor's degree in Computer Science, Engineering, Mathematics, Data Science, or a related field.
- Basic understanding of AI/ML concepts and experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Experience developing complex agentic systems using LLMs.
- Time spent prompting and/or building products with language models.
- Good communication skills and an interest in working with other researchers on difficult tasks.
- Experience building LLM-based agents with frameworks like LangChain and LangGraph, and monitoring tools such as LangSmith.
- Strong understanding of agent system design, including orchestration, context/memory management, tool usage, and multi-step reasoning.
- Experience building and operating agent lifecycle systems: job scheduling, execution pipelines, state management, and failure/retry handling.
- Understanding of stateless vs. stateful agent architectures and their trade-offs (scalability, latency, memory persistence).
- Familiarity with data manipulation and analysis tools (e.g., Pandas, NumPy).
- Familiarity with cloud platforms (AWS, GCP, Azure) is a plus.
- Strong analytical and problem-solving skills, with a willingness to learn and adapt to new technologies and challenges.
- Good communication skills and the ability to work collaboratively within a team.
- Experience with Computer Vision and NLP projects.
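The failure/retry handling called for in agent lifecycle systems is, at its core, wrapping unreliable tool calls with a bounded retry budget. A minimal stdlib-only sketch (names like `run_with_retries` and `FlakyTool` are illustrative, not any framework's API): retry a step with exponential backoff and re-raise once attempts are exhausted.

```python
import time

def run_with_retries(step, max_attempts: int = 3, base_delay: float = 0.01):
    """Execute one agent step, retrying transient failures with
    exponential backoff; re-raise once the attempt budget is spent."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise
            # Back off 1x, 2x, 4x, ... the base delay between attempts.
            time.sleep(base_delay * 2 ** (attempt - 1))

class FlakyTool:
    """Stand-in for a tool call that fails twice, then succeeds."""
    def __init__(self):
        self.calls = 0
    def __call__(self):
        self.calls += 1
        if self.calls < 3:
            raise RuntimeError("transient failure")
        return "ok"
```

Production agent runtimes layer state persistence and dead-letter handling on top, but the attempt-budget loop is the same.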
MoMo is Vietnam's leading financial super-app, redefining how millions manage their money through AI-driven innovation. Our Big Data AI Team doesn't just support the product — we are the product. From hyper-personalization and eKYC to fraud detection, AI is the heartbeat of MoMo. As a Senior AI Engineer, you will lead the evolution of our Generative AI ecosystem. You won't just be prompting LLMs; you'll be architecting sophisticated agentic workflows and productionizing state-of-the-art models that serve millions of users in real time.

Job description
- Lead architecture: design and deploy scalable agentic frameworks and Generative AI solutions that integrate seamlessly into the MoMo ecosystem.
- Production-grade AI: build and maintain robust LLM-based products using SOTA techniques (RAG, fine-tuning, and prompt orchestration) and open-source libraries.
- Cross-functional leadership: partner with Data Scientists, Backend Engineers, and Product Managers to bridge the gap between experimental models and high-availability production systems.
- Engineering excellence: write clean, high-performance production code and establish best practices for LLM Ops (CI/CD for models, versioning, and monitoring).
- Evaluation and optimization: define rigorous evaluation metrics (faithfulness, relevancy, latency) and conduct A/B experiments to iterate on model performance.
- Scale: optimize AI systems to handle high-concurrency traffic while maintaining low latency.

Job requirements
- Experience: 3+ years in professional software/AI engineering, with a deep focus on voice agents and Generative AI.
- LLM mastery: hands-on experience with model families like Llama 3, Qwen 2, GPT-4, and BERT; you understand their architectures, tokenization nuances, and limitations.
- System design: proven track record of building agentic systems or voice AI from scratch, taking them from preprocessing to real-world drift monitoring.
- Tech stack: mastery of Python and deep learning frameworks (PyTorch or TensorFlow). Experience with vector databases (Milvus, Pinecone, or similar) is a plus.
- Mindset: a product-oriented engineer who cares about the "Why" as much as the "How."
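The latency half of the evaluation loop described above can start as simply as collecting per-variant samples during an A/B experiment and reporting a mean and a nearest-rank p95. A stdlib-only sketch (the class name and API are illustrative, not MoMo's tooling):

```python
import math
import statistics
from collections import defaultdict

class ABLatencyTracker:
    """Collect per-variant request latencies and report mean and p95."""
    def __init__(self):
        self.samples = defaultdict(list)

    def record(self, variant: str, latency_ms: float) -> None:
        self.samples[variant].append(latency_ms)

    def summary(self, variant: str) -> dict:
        xs = sorted(self.samples[variant])
        # Nearest-rank p95: the smallest sample covering 95% of requests.
        idx = max(0, math.ceil(0.95 * len(xs)) - 1)
        return {"n": len(xs), "mean": statistics.fmean(xs), "p95": xs[idx]}
```

A real pipeline would add quality metrics (faithfulness, relevancy scores) alongside latency and feed both into the A/B decision, but the per-variant aggregation pattern is the same.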
MoMo is the market leader in mobile payments in Vietnam, striving to make all transactions fast, easy, and joyful. You will join our Big Data AI team, where we position AI/Machine Learning as the core component of almost every product feature. Specifically, you will operate as a key technical leader in the Moni team, the squad behind MoMo's flagship AI Assistant. Moni currently serves hundreds of thousands of Monthly Active Users, scaling from a chatbot into a fully autonomous AI Agent. As a Senior / Technical Lead, you will drive the architectural decisions and engineering standards that power the next generation of our Agentic AI.

Job description
- Technical leadership and architecture: define the technical vision and architecture for autonomous AI Agents; make critical decisions on tech stacks, model selection, and system design to ensure scalability and reliability.
- Architect and build AI Agents: lead the end-to-end development of complex agentic workflows (tool calling, planning, reasoning) that integrate deeply into the MoMo ecosystem.
- Multi-agent orchestration: design and implement orchestration layers where multiple specialized agents collaborate to solve intricate user financial tasks.
- Advanced RAG strategy: engineer robust RAG pipelines (hybrid search, GraphRAG, re-ranking) to handle vast knowledge bases with high precision.
- System evaluation and quality assurance: establish "gold standard" evaluation frameworks for agentic AI (reasoning capabilities, hallucination rates, safety metrics) and drive the optimization loop.
- Mentorship and best practices: mentor senior and junior engineers, conduct code reviews, and set high standards for code quality, MLOps practices, and GenAI engineering across the team.
- Production excellence: partner with DevOps/MLOps to ensure high availability and low latency for AI services serving massive concurrent traffic.

Job requirements
- Experience: 5+ years of professional experience in AI/ML/software engineering, with a strong track record of leading technical initiatives.
- Agentic AI mastery: deep hands-on experience building AI Agents and multi-agent systems; proficient in agentic design patterns such as tool calling, planning, and reasoning, and in frameworks such as LangChain, LangGraph, or Agents SDK.
- Advanced RAG and search: expert knowledge of retrieval strategies, vector databases, and semantic search optimization.
- LLM model strategy: strong capability in selecting and benchmarking foundation models (open vs. closed source) and applying fine-tuning/alignment strategies (RLHF, DPO).
- System evaluation: experience implementing rigorous evaluation pipelines for agentic AI (using Ragas, Langfuse, or custom metrics).
- Engineering excellence: proficient in Python, PyTorch, and modern data/AI stacks; experience designing high-load distributed systems is a plus.
- Leadership mindset: ability to navigate ambiguity, drive technical consensus, and balance engineering perfection with product delivery speed.
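Hybrid search, mentioned in the RAG strategy above, typically fuses a keyword ranking (e.g., BM25) with a vector-similarity ranking. Reciprocal Rank Fusion (RRF) is a common, training-free way to do that merge. The sketch below assumes each input ranking is simply an ordered list of document IDs:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists (e.g., keyword + vector search) into one.

    RRF scores each document as score(d) = sum over lists of 1 / (k + rank),
    so documents that appear near the top of multiple lists win.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)
```

The constant `k` (60 is the value commonly used in the RRF literature) damps the influence of top ranks so no single retriever dominates.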
About BTSE:
BTSE Group is a global leader in fintech and blockchain technology, anchored by three core business pillars: Exchange, Payments, and Infrastructure Development. Serving over 100 corporate clients worldwide, we provide white-label exchange and payment solutions. Our offerings encompass everything from exchange infrastructure hosting and development to custody, wallets, payments, blockchain integration, trading, and more. We are looking for talented professionals in marketing, operations, customer support, and other departments. The roles offered may be on-site, remote, or hybrid, in collaboration with our local partner.

About the opportunity:
You own the AI core: model serving, the retrieval-augmented generation (RAG) pipeline, prompt engineering, and the feedback-to-training pipeline. In Phase 1, you make the base model perform as well as possible through context engineering — system prompts, few-shot exemplars, and retrieval optimisation — without modifying model weights. You also design the custom model training workflow so that enterprise clients can train their own fine-tuned models in Phase 2. This is the highest-leverage individual contributor role on the founding team.

Responsibilities
- Deploy and optimise a large language model for production inference: quantisation, continuous batching, low-latency serving.
- Build the RAG pipeline: document chunking, embedding generation, vector storage, cross-encoder reranking, and context assembly optimised for a 128K-token context window.
- Build the context layer: per-tenant system prompts, dynamically retrieved few-shot exemplars, task routing (classifying incoming requests to the right prompt configuration).
- Build defensive output parsing: structured JSON output from an unmodified base model with graceful fallbacks.
- Design and implement the feedback collection pipeline: capturing user corrections and ratings, automatically generating training data candidates for future fine-tuning.
- Design the custom model training workflow: tenant-scoped LoRA training on client-specific data, model evaluation, A/B testing, and isolated deployment.
- Monitor and improve inference quality: parsing failure rates, citation accuracy, hallucination rates, latency — all tracked per tenant.
- Iterate on prompts daily with the domain expert during the pilot phase.

Requirements
- 5+ years ML engineering; 2+ years working with large language models in production.
- Hands-on experience with LLM serving frameworks (vLLM, TGI, or equivalent).
- Deep experience building RAG pipelines: chunking strategies, embedding models, vector databases, reranking.
- Strong prompt engineering skills for production applications — you know how to make a base model produce consistent, structured, high-quality output.
- Python: PyTorch, Transformers, FastAPI.
- Familiar with LoRA/QLoRA fine-tuning workflows.

Nice to have
- Experience building multi-tenant ML serving infrastructure.
- Experience with financial or crypto AI applications.
- Experience with cross-encoder reranking models (DeBERTa or similar).
- Understanding of data isolation requirements for ML training pipelines.

#LI-MC1
Negotiable
No experience required
Google welcomes people with disabilities.

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Zhubei, Zhubei City, Hsinchu County, Taiwan; New Taipei, Banqiao District, New Taipei City, Taiwan.

Minimum qualifications:
- Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, a related field, or equivalent practical experience.
- 8 years of experience with software programming languages (C/C++, Python) and application processor development.
- Experience with AI/ML workloads, including prefill, decode, and multimodal processing steps in LLMs (Large Language Models).

Preferred qualifications:
- Master's degree or PhD in Electrical Engineering, Computer Engineering, or Computer Science, with an emphasis on computer architecture, next-generation memory systems, or AI hardware accelerators.
- Experience with power and performance modeling and activity profiling using traces from power measurements and performance monitoring counters.
- Experience influencing silicon or memory roadmaps through high-fidelity performance projections of emerging technologies.
- Experience with ML frameworks (e.g., PyTorch, JAX, TensorFlow).
- Experience with SQL for data querying and analysis.

About the job
Be part of a team that pushes boundaries, developing custom silicon solutions that power the future of Google's direct-to-consumer products. You'll contribute to the innovation behind products loved by millions worldwide. Your expertise will shape the next generation of hardware experiences, delivering unparalleled performance, efficiency, and integration. As a Senior System Architect within the Silicon team, you will work on GenAI use cases across hardware and software. You will be responsible for modeling and analyzing trade-offs for on-device vs. cloud AI execution of Gemini AI models.
This role is critical in influencing the hardware and software roadmaps for SoC, AI accelerator, and new memory technologies. Google's mission is to organize the world's information and make it universally accessible and useful. Our team combines the best of Google AI, Software, and Hardware to create radically helpful experiences. We research, design, and develop new technologies and hardware to make computing faster, seamless, and more powerful. We aim to make people's lives better through technology.

Responsibilities
- Model and estimate power and performance for next-generation SoC and memory technologies.
- Optimize hardware and software architectures for future GenAI use cases.
- Measure and compare on-device AI and cloud AI to provide guidance for Hybrid AI development.
- Support emerging technology initiatives with alignment across silicon process, IP design, Android OS, and Gemini model teams.
About BTSE:
彼特思方舟 is a specialized service provider dedicated to delivering a full spectrum of front-office and back-office support solutions, each tailored to the unique needs of global financial technology firms. 彼特思方舟 is engaged by BTSE Group to offer several key positions, enabling the delivery of cutting-edge technology and tailored solutions that meet the evolving demands of the fintech industry in a competitive global market. BTSE Group is a leading global fintech and blockchain company committed to building innovative technology and infrastructure. BTSE empowers businesses and corporate clients with the advanced tools they need to excel in a rapidly evolving and competitive market. BTSE has pioneered numerous trading technologies that have been widely adopted across the industry, setting new benchmarks for innovation, performance, and security in fintech. BTSE's diverse business lines serve both retail (B2C) customers and institutional (B2B) clients, enabling them to launch, operate, and scale fintech businesses. BTSE is seeking ambitious, motivated professionals to join our B2C and B2B teams.

About the opportunity:
The opportunity, responsibilities, and requirements for this opening are the same as those in the BTSE listing above.

#LI-MC1
Negotiable
No experience required
Google will be prioritizing applicants who have a current right to work in Singapore, and do not require Google's sponsorship of a visa.

Minimum qualifications:
- Bachelor's degree in Science, Technology, Engineering, Mathematics, or equivalent practical experience.
- 6 years of experience in project management and technical solution delivery.
- 4 years of experience in one or more of the following areas: data center infrastructure, networking, DevOps, security, compute, storage, Kubernetes, SRE, or AI/ML infrastructure (e.g., AI/ML frameworks like JAX, PyTorch, or OpenXLA).
- Experience writing Python code in a production environment.

Preferred qualifications:
- Master's degree in Computer Science, Technology, Business, Engineering, or equivalent practical experience.
- Experience in one or more specialized domains: advanced networking and storage, such as system design of load balancers, firewalls, VPN, and production-grade storage.
- Experience with customer-facing migration, including service discovery, assessment, planning, execution, and operations.
- Experience developing, refactoring, and optimizing AI/ML models.

About the job
The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google's global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. As an Integration Engineer, you will work directly with Google's most strategic customers on infrastructure projects that transform their business. You will provide consulting, program management, and technical expertise on customer engagements, while working with client executives and key technical leaders to deploy solutions on Google Cloud Platform. You will also work closely with key Google partners to deliver joint consulting services, providing technical guidance and infrastructure best practices. Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Work with customer technical leads, client executives, and partners to manage and deliver successful implementations of cloud solutions, and become a trusted advisor to decision makers throughout the engagement.
- Propose solution architectures and manage the deployment of cloud-based distributed virtualized infrastructure solutions according to complex customer requirements and implementation best practices.
- Work with internal specialists, product, and engineering teams to package approaches, best practices, and lessons learned into thought leadership, methodologies, and published assets.
- Interact with sales, partners, and customer technical stakeholders to manage project scope, priorities, deliverables, risks, issues, and timelines for successful client outcomes.
- Travel 50% of the time to customer sites and facilities.
Negotiable
No experience required
WorldQuant develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies – the foundation of a balanced, global investment platform.

WorldQuant is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement. Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate outstanding talent. There is no roadmap to future success, so we need people who can help us build it.

Technologists at WorldQuant research, design, code, test, and deploy firmwide platforms and tooling while working collaboratively with researchers and portfolio managers. Our environment is relaxed yet intellectually driven. We seek people who think in code and are motivated by being around like-minded people.

The Role
We are seeking an exceptional senior-level Python engineer to join a small team working on complex data pipelines, AI/ML systems, and cutting-edge software solutions. This role will be responsible for managing technical objectives, providing technical leadership, and maintaining a hands-on approach to development. The ideal candidate will work closely across teams within WorldQuant as part of our business-facing technology organization. A successful candidate will possess deep expertise in Python development, data engineering, software architecture, and design principles. They should be able to mentor junior team members, conduct code reviews, and drive architectural decisions. Experience with AI and large language models (LLMs) is highly desirable.

What You'll Bring
- Master's degree or higher in Computer Science, Engineering, or a related technical field from a top-tier institution.
- 7+ years of experience as a Python developer, with a strong focus on data engineering and AI/ML systems.
- Expert-level knowledge of Python and its ecosystem, including experience with data processing libraries like Pandas, NumPy, and PySpark.
- Proficiency in designing and implementing scalable, maintainable, and efficient data pipelines.
- Experience with cloud platforms (AWS, GCP, or Azure) and containerization technologies (Docker, Kubernetes).
- Expertise in version control systems (Git), CI/CD practices, and agile methodologies.
- Strong communication skills, with the ability to explain complex technical concepts to both technical and non-technical stakeholders.
- Experience in the finance industry is a plus but not required.
- Experience with AI/ML frameworks such as PyTorch or scikit-learn, and with LLMs, agents, or systems of agents, is a significant plus.

#LI-DN1

By submitting this application, you acknowledge and consent to the terms of the WorldQuant Privacy Policy. The privacy policy offers an explanation of how and why your data will be collected, how it will be used and disclosed, how it will be retained and secured, and what legal rights are associated with that data (including the rights of access, correction, and deletion). The policy also describes legal and contractual limitations on these rights. The specific rights and obligations of individuals living and working in different areas may vary by jurisdiction.

Copyright © 2025 WorldQuant, LLC. All Rights Reserved.

WorldQuant is an equal opportunity employer and does not discriminate in hiring on the basis of race, color, creed, religion, sex, sexual orientation or preference, age, marital status, citizenship, national origin, disability, military status, genetic predisposition or carrier status, or any other protected characteristic as established by applicable law.
Negotiable
No experience required
