Cake Job Search

About BTSE: BTSE Group is a global leader in fintech and blockchain technology, anchored by three core business pillars: Exchange, Payments, and Infrastructure Development. Serving over 100 corporate clients worldwide, we provide white-label exchange and payment solutions. Our offerings encompass everything from exchange infrastructure hosting and development to custody, wallets, payments, blockchain integration, trading, and more. We are looking for talented professionals in marketing, operations, customer support, and other departments. The roles offered may be on-site, remote, or hybrid, in collaboration with our local partner.

About the opportunity: We're building an AI-powered research platform for institutional investors. Our platform turns vast amounts of market, alternative, and proprietary data into actionable intelligence — powered by AI agents that depend on clean, reliable, real-time data to do their job. We need someone to own data. Not manage a team that does data. Own it — from finding the right sources, to getting them flowing, to making sure they stay healthy at scale.

Today we ingest from hundreds of sources. That number is growing fast. The sources are diverse: real-time market feeds, regulatory filings, on-chain blockchain data, news, social sentiment, alternative datasets, and proprietary client data. Some are free APIs. Some are $10K/month enterprise contracts. Some are clients pushing their own data into our platform. Every one of them is different, and most of them will break in ways you don't expect. You'll evaluate vendors, negotiate deals, build integrations, monitor quality, track costs, and make the call on what's worth paying for. When something breaks at 2 AM, you'll know why before the alert finishes firing. This is an end-to-end ownership role.
No handoffs.

Responsibilities
- Build and maintain integrations with a large and growing number of external data sources — APIs, WebSockets, file drops, streams, scrapers, and formats you haven't seen yet
- Evaluate and compare data vendors across quality, reliability, coverage, cost, and terms of service
- Negotiate contracts and manage commercial relationships with data providers
- Design and operate high-throughput ingestion pipelines handling mixed workloads (real-time, near-real-time, batch, event-driven)
- Build monitoring that tells you — before anyone else — when data is late, wrong, incomplete, or drifting
- Manage data quality at scale: anomaly detection, cross-source validation, schema drift detection, gap filling
- Handle both structured data (time-series, tabular) and unstructured data (documents, text, images) with appropriate extraction and storage
- Track costs per source, usage per consumer, and ROI — recommend what to keep, upgrade, or cancel
- Build tooling that makes adding the next data source faster than the last one
- Use AI tools aggressively in your daily work — for code generation, testing, documentation, anomaly analysis, and anything else that makes you faster

Requirements
You've done this before:
- 5+ years building data pipelines that run in production, 24/7, with real SLAs
- Deep hands-on experience with SQL databases and time-series data
- Python as your primary language, comfortable with async programming
- You've integrated with dozens of external APIs and dealt with the reality of unreliable vendors, changing schemas, rate limits, and bad documentation
- You've built monitoring and alerting for data systems — not as an afterthought but as part of how you work

You think about the whole picture:
- You don't just connect to an API. You think about what happens when it goes down, when the schema changes, when the data is wrong, and when the bill doubles
- You understand that data has a cost and a value, and not every source is worth keeping
- You've worked with data vendors commercially — contracts, pricing tiers, usage negotiations

You use AI daily:
- AI coding tools are part of your workflow today, not something you're curious about
- You can articulate specifically how AI makes you faster and where it doesn't help
- You'd be frustrated if you couldn't use AI in your work

Nice to have
- Experience with financial or crypto market data
- Experience with streaming systems (Kafka or similar) at scale
- Vector database or embedding pipeline experience
- Experience with unstructured data extraction (PDFs, documents, NLP)

Perks & Benefits
- Senior individual contributor role with full ownership of the data domain
- Direct access to leadership — no bureaucracy, fast decisions
- AI tools provided and encouraged across all work
- Remote-friendly, async-first
- Compensation commensurate with experience
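The monitoring and drift-detection work described above can be pictured with a minimal per-record health check (a hedged illustration, not BTSE code; the field names and the five-minute lag threshold are invented for the sketch):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical expected schema for one market-data source.
EXPECTED_SCHEMA = {"symbol", "price", "ts"}

def check_record(record: dict, max_lag: timedelta = timedelta(minutes=5)) -> list[str]:
    """Return a list of problems found in one ingested record."""
    problems = []
    drift = set(record) ^ EXPECTED_SCHEMA  # fields added or dropped vs. the contract
    if drift:
        problems.append(f"schema drift: {sorted(drift)}")
    ts = record.get("ts")
    if ts is None:
        problems.append("missing timestamp")
    elif datetime.now(timezone.utc) - ts > max_lag:
        problems.append("late data")
    return problems
```

A real pipeline would run checks like this per source and feed the results into alerting, so a vendor silently renaming a field surfaces before downstream consumers break.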
Negotiable
No requirement for relevant working experience
WorldQuant develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies – the foundation of a balanced, global investment platform. WorldQuant is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement. Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate an outstanding talent. There is no roadmap to future success, so we need people who can help us build it.

The Role: We are seeking a candidate to join our team as a Data Engineer. The Data Engineer will partner with our close-knit team of quantitative researchers, technologists, and data-sourcing colleagues to analyze and enrich a broad range of structured and unstructured large-scale data.

Job responsibilities include, but are not limited to, the following:
- Enriching a wide range of structured and unstructured data into datasets for quantitative analysis and financial engineering.
- Enhancing data quality and integrity by developing validation tools to measure the effectiveness of data enrichment.
- Becoming a domain expert on different deep learning and machine learning applications, analyzing and understanding the underlying dynamics and behaviors within the data.
- Developing insights based on the data and collaborating with the research team to generate signals.
- Developing utility tools that further automate the software development, testing, and deployment workflow.
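As a toy illustration of the "validation tools" responsibility above, a coverage metric for an enrichment step might look like this (a hypothetical helper, not WorldQuant code; the field names are invented):

```python
# Hypothetical validation helper: measure how completely an enrichment step
# filled a target field across a batch of rows.
def enrichment_coverage(rows: list[dict], field: str) -> float:
    """Fraction of rows where `field` is present and non-null."""
    if not rows:
        return 0.0
    filled = sum(1 for r in rows if r.get(field) is not None)
    return filled / len(rows)
```

Tracking a metric like this per batch makes a regression in an upstream enrichment model visible as a coverage drop rather than as silently missing signal inputs.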
What You’ll Bring:
- Strong academic background – minimum of a bachelor’s degree in a technical or quantitative field.
- Practical experience with and understanding of deep neural networks and other machine learning techniques.
- Demonstrated ability to implement data science pipelines and real-time applications in Python.
- 3+ years of relevant experience as a Data Engineer or in similar roles.
- Proficiency with Python-based tools like Jupyter Notebook and coding standards like PEP 8.
- Experience with LLM/AI for data processing is a plus.
- Exceptional analytical and problem-solving abilities, with strong attention to detail.
- Good command of English.
- Excellent software development skills: ability to convert rough use cases into a working codebase.
- Motivation driven by deep curiosity and a passion to learn is a plus.
- Past experience as a data scientist or data engineer in a finance or investment setting is a plus.
- Experience with Linux/Unix shell and Git is a plus.

What We Offer:
- Competitive and attractive compensation package with a clear career roadmap – where you feel challenged every day.
- A strong culture of learning and development: training courses, library, speakers, share-and-learn events.
- Learn from who sits next to you! Working at WorldQuant, you are surrounded by smart and talented people.
- Premium health insurance and an Employee Assistance Program.
- Generous time-off policy, recreation sabbatical leave (based on tenure), and Trade Union benefits for staff and family.
- Team-building activities every month: local engagement events and employee clubs (football, ping-pong, badminton, yoga, running, PS5, movies, etc.).
- Annual company trip and occasional global conferences – opportunities to travel and connect with our global teams.
- Happy hour with tea break, snacks, and meals every day in the office!

By submitting this application, you acknowledge and consent to the terms of the WorldQuant Privacy Policy.
The privacy policy offers an explanation of how and why your data will be collected, how it will be used and disclosed, how it will be retained and secured, and what legal rights are associated with that data (including the rights of access, correction, and deletion). The policy also describes legal and contractual limitations on these rights. The specific rights and obligations of individuals living and working in different areas may vary by jurisdiction. Copyright © 2025 WorldQuant, LLC. All Rights Reserved.

WorldQuant is an equal opportunity employer and does not discriminate in hiring on the basis of race, color, creed, religion, sex, sexual orientation or preference, age, marital status, citizenship, national origin, disability, military status, genetic predisposition or carrier status, or any other protected characteristic as established by applicable law.
Negotiable
No requirement for relevant working experience
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 5 years of experience designing data pipelines and dimensional data modeling for synchronous and asynchronous system integration and implementation using internal (e.g., Flume) and external stacks (e.g., Dataflow, Spark).
- 5 years of experience coding in one or more programming languages.
- 5 years of experience working with data infrastructure and data models by performing exploratory queries and scripts.

Preferred qualifications:
- Master’s degree in a quantitative discipline (e.g., Computer Science, Engineering, Statistics, Math).
- Experience with data warehouses, large-scale distributed data platforms, and data lakes.
- Ability to navigate ambiguity in a fast-paced environment with multiple stakeholders.
- Excellent structured thinking skills, with the ability to break down complex, multi-dimensional problems.
- Excellent business and technical communication, organizational, and problem-solving skills.

About the job
The YouTube team helps budding creators build careers, helps artists and media companies reach audiences, and creates products like YouTube Kids, YouTube Music, and YouTube TV. The YouTube Business Strategy and Operations team is responsible for driving all go-to-market functions for the YouTube business organization.

As a Data Engineer within YouTube Analytics and Data Science, you will be part of a community of analytics professionals who work on impactful projects. You will build the data sets that help run the business, piping the relevant data into and out of our tools and making it useful for analysts across the organization to drive reporting and insights. You will be responsible for democratizing YouTube’s business data, helping business leaders make sense of business operations through timely, accurate business intelligence.
You will build and maintain the YouTube ETL systems to produce useful datasets, establish best practices for data sets and reporting, and develop a breadth of expertise in various data domains.

At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we share and build community through our stories. We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world. We explore new ideas, solve real problems, and have fun — and we do it all together.

Responsibilities
- Build and maintain data platforms to enable data reliability, data integrity, and data governance, enabling accurate, consistent, and trustworthy data sets.
- Conduct requirements-gathering and project-scoping sessions with subject matter experts, business users, and executive stakeholders to discover and define business data needs.
- Design, build, and optimize the data architecture and Extract, Transform, and Load (ETL) pipelines.
- Work closely with analysts to productionize and scale value-creating capabilities, including data integrations and transformations, model features, and statistical and machine learning models.
- Engage with the analyst community, understand critical user journeys and data-sourcing inefficiencies, advocate best practices, and lead analyst trainings.
- Write and review end-user and technical documents, including requirements and design documents for existing and future data systems, as well as data standards and policies.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
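The ETL responsibilities described above can be illustrated with a minimal extract-transform-load step; sqlite3 stands in for the warehouse here, and the table and column names are invented for the sketch:

```python
import sqlite3

def run_etl(conn: sqlite3.Connection, raw_rows: list[tuple[str, int]]) -> None:
    """Load raw events, then materialize a daily aggregate for reporting."""
    # Extract/load: land the raw feed as-is.
    conn.execute("CREATE TABLE IF NOT EXISTS raw_events (day TEXT, seconds INTEGER)")
    conn.executemany("INSERT INTO raw_events VALUES (?, ?)", raw_rows)
    # Transform: analysts query the aggregate, not the raw feed.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS daily_watch_time AS
        SELECT day, SUM(seconds) AS total_seconds
        FROM raw_events
        GROUP BY day
    """)
```

The design point is the separation the posting implies: raw data lands untouched, and reporting datasets are derived from it so they can be rebuilt when logic changes.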
Negotiable
No requirement for relevant working experience
WorldQuant develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies – the foundation of a balanced, global investment platform. WorldQuant is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement. Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate an outstanding talent. There is no roadmap to future success, so we need people who can help us build it.

Technologists at WorldQuant research, design, code, test, and deploy firmwide platforms and tooling while working collaboratively with researchers. Our environment is relaxed yet intellectually driven. We seek people who think in code and are motivated by being around like-minded people.

The Role: As an LLM Data Engineer at WorldQuant, you will be at the heart of transforming unstructured text into actionable, high-value insights that power quantitative investment strategies. This is a hands-on engineering role where you’ll design, build, and scale the data pipelines that underpin our data research. This role is ideal for someone who loves building production systems, enjoys working deeply with text and large language models, and wants their engineering work to empower quantitative research at the firm. You’ll join a highly technical, collaborative environment where you work closely with Research and where your ideas can quickly translate into impact at scale.
What You’ll Bring:
- Degree in a quantitative or technical discipline (computer science, engineering, etc.)
- Basic knowledge of probability and statistical theory
- At least 2 years’ experience developing and managing scalable and robust data pipelines
- A deep understanding of machine learning, with experience in training, inference, and evaluation
- Experience with LLM technologies and agentic systems
- Excellent Python coding skills; familiarity with pandas / polars / duckdb is required
- Strong analytical skills and attention to detail
- Excellent communication skills, both verbal and written
- Ability to operate in a collaborative, team-oriented culture
- Proactive and mature approach to problem-solving and project management
- Experience with Kubernetes (k8s) for deployment is preferred
- Experience with monitoring and alerting tools is preferred
- Familiarity with infrastructure management, deployment, and MLOps is preferred

What We Offer:
- Competitive and attractive compensation package with a clear career roadmap – where you feel challenged every day
- A strong culture of learning and development: training courses, library, speakers, share-and-learn events
- Learn from who sits next to you! Working at WorldQuant, you are surrounded by smart and talented people
- Premium health insurance and an Employee Assistance Program
- Generous time-off policy, recreation sabbatical leave (based on tenure), and Trade Union benefits for staff and family
- Team-building activities every month: local engagement events and employee clubs (football, ping-pong, badminton, yoga, running, PS5, movies, etc.)
- Annual company trip and occasional global conferences – opportunities to travel and connect with our global teams
- Happy hour with tea break, snacks, and meals every day in the office!

By submitting this application, you acknowledge and consent to the terms of the WorldQuant Privacy Policy.
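The "training, inference, and evaluation" experience asked for above is easiest to picture at the evaluation end; a minimal label-accuracy check (a hypothetical helper, not a WorldQuant tool) might be:

```python
# Hypothetical evaluation helper: fraction of predicted labels matching a
# gold-standard set. Raises on length mismatch rather than silently truncating.
def accuracy(pred: list[str], gold: list[str]) -> float:
    if len(pred) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    if not gold:
        return 0.0
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)
```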
Negotiable
No requirement for relevant working experience
WorldQuant develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies – the foundation of a balanced, global investment platform. WorldQuant is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement. Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate an outstanding talent. There is no roadmap to future success, so we need people who can help us build it.

Technologists at WorldQuant research, design, code, test, and deploy firmwide platforms and tooling while working collaboratively with researchers. Our environment is relaxed yet intellectually driven. We seek people who think in code and are motivated by being around like-minded people.

The Role: As an NLP Data Engineer at WorldQuant, you will be at the heart of transforming unstructured text into actionable, high-value insights that power quantitative investment strategies. This is a hands-on engineering role where you’ll design, build, and scale the data pipelines that underpin our data research. This role is ideal for someone who loves building production systems, enjoys working deeply with text and large language models, and wants their engineering work to empower quantitative research at the firm.
You’ll join a highly technical, collaborative environment where you work closely with Research and where your ideas can quickly translate into impact at scale.

What You’ll Bring:
- B.Sc./M.Sc. from a leading university in Computer Science, Engineering, or a related discipline
- 5 years of demonstrated experience programming scalable and robust software in Python
- Demonstrated experience building or maintaining data pipelines
- Basic knowledge of probability and statistical theory
- Experience working in Linux environments
- Experience building and operating ML inference pipelines
- Experience using LLMs for structured data extraction
- Strong communication skills; ability to express complex concepts in simple terms
- Experience in the financial services industry is a big plus
- Knowledge of workflow scheduling tools (e.g., Airflow) is a plus
- Prior experience working with text data in a data science/quantitative project environment

What We Offer:
- Competitive and attractive compensation package with a clear career roadmap – where you feel challenged every day
- A strong culture of learning and development: training courses, library, speakers, share-and-learn events
- Learn from who sits next to you! Working at WorldQuant, you are surrounded by smart and talented people
- Premium health insurance and an Employee Assistance Program
- Generous time-off policy, recreation sabbatical leave (based on tenure), and Trade Union benefits for staff and family
- Team-building activities every month: local engagement events and employee clubs (football, ping-pong, badminton, yoga, running, PS5, movies, etc.)
- Annual company trip and occasional global conferences – opportunities to travel and connect with our global teams
- Happy hour with tea break, snacks, and meals every day in the office!

By submitting this application, you acknowledge and consent to the terms of the WorldQuant Privacy Policy.
Negotiable
No requirement for relevant working experience
Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, a related field, or equivalent practical experience.
- 3 years of experience with data processing software (e.g., Hadoop, Spark, Pig, Hive) and algorithms (e.g., MapReduce, Flume).
- Experience with database administration techniques or data engineering, and writing software in Java, C++, Python, Go, or JavaScript.
- Experience managing client-facing projects, troubleshooting technical issues, and working with Engineering and Sales Services teams.

Preferred qualifications:
- Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments.
- Experience working with Big Data, information retrieval, data mining, or machine learning.
- Experience building applications with modern web technologies (e.g., NoSQL, MongoDB, SparkML, TensorFlow).
- Experience architecting, developing software, or building production-grade Big Data solutions in virtualized environments.

About the job
As a Data Engineer on the Analytics, Insights and Measurement (AIM) team, you will help customers grow their businesses through trusted analytics, insights, and measurement that ensure user privacy.

Google Ads is helping power the open internet with the best technology that connects and creates value for people, publishers, advertisers, and Google. We’re made up of multiple teams building Google’s advertising products, including search, display, shopping, travel, and video advertising, as well as analytics. Our teams create trusted experiences between people and businesses with useful ads. We help grow businesses of all sizes, from small businesses to large brands to YouTube creators, with effective advertiser tools that deliver measurable results.
We also enable Google to engage with customers at scale.

Responsibilities
- Create and deliver recommendations, tutorials, blog articles, sample code, and technical presentations, tailoring approach and messaging to varied levels of business and technical stakeholders.
- Design, develop, and maintain reliable data pipelines to collect, process, and store data from various data sources.
- Implement data quality checks and monitoring to ensure data accuracy and integrity.
- Collaborate with cross-functional teams (data science, engineering, product management, sales, and finance) to understand data requirements and deliver data solutions.
- Enhance data infrastructure for performance and efficiency to meet evolving business needs.
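A data quality check of the kind listed above can be as simple as comparing a run's row count against its trailing baseline (a hedged sketch; the 50% tolerance is an invented default, and real checks would be per-table and alert-integrated):

```python
from statistics import mean

def volume_ok(history: list[int], today: int, tolerance: float = 0.5) -> bool:
    """Flag a pipeline run whose row count deviates sharply from the trailing mean."""
    if not history:
        return True  # nothing to compare against yet
    baseline = mean(history)
    return abs(today - baseline) <= tolerance * baseline
```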
Negotiable
No requirement for relevant working experience
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 3 years of experience in a data engineering, data infrastructure, or data analytics role.
- Experience with database administration techniques or data engineering, as well as writing software in Java, C++, Python, Go, or JavaScript.

Preferred qualifications:
- Experience with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments.
- Experience with data analysis, including statistics, and ML model development (data preparation, model selection, evaluation, tuning).
- Experience with scripting languages like Python for data manipulation, analysis, and automation.
- Ability to monitor, troubleshoot, and tune data systems and pipelines to improve efficiency.
- Ability to develop tools and systems that automate data processes and increase overall efficiency, with proficiency in programming languages (e.g., SQL, Python) and a habit of producing readable, well-structured code.
- Ability to deliver and maintain data projects from conception to production.

About the job
Google Play provides apps, games, and digital content services that bring Android devices to life. The Play Store serves over four billion users around the world and is a critical driver of Google’s overall business growth. The Play Data Science and Analytics (DSA) team works on a variety of challenging data science projects to drive product and go-to-market (GTM) decisions for Play. The goal is to power Play’s growth by building a deep understanding of users and developers, enabling data-driven decision-making through strategic insights, thought leadership, and unified data foundations.

In this role, you will design and build the next generation of the data infrastructure. You will be responsible for architecting, implementing, and optimizing complex, scalable data pipelines, moving beyond basic development to own key components of the data warehouse.
This role requires a technical expert who can handle massive datasets, write highly efficient SQL and Python code, and collaborate effectively with executive stakeholders and engineers. You will build innovative data foundations and AI-driven insights solutions while helping to define the standards and best practices that elevate the entire team, driving data quality and AI-readiness initiatives.

Responsibilities
- Design, build, and maintain scalable data pipelines to ingest, process, and store data from various sources.
- Implement data quality checks and monitoring to ensure accuracy and integrity.
- Write complex SQL queries for data extraction and transformation to enable ad hoc analysis and automated reporting.
- Conduct quantitative analysis to support business decisions.
- Develop and manage scalable data foundations and models specifically designed to support AI/ML initiatives and AI-driven insights.
- Develop, test, and deploy intelligent agents using Python and the Google Agent Development Kit (ADK) framework to automate tasks like data analysis and system orchestration.
- Partner with executive stakeholders and data scientists.
Negotiable
No requirement for relevant working experience
Google will be prioritizing applicants who have a current right to work in Singapore and do not require Google's sponsorship of a visa.

Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 5 years of experience coding in Python and SQL.
- 5 years of experience designing and deploying data pipelines, including managing data schemas and processing data-volume workflows.
- 5 years of experience working with machine learning operations and data architecture design.

Preferred qualifications:
- 5 years of experience designing enterprise-scale data platforms and analytics infrastructure.
- 5 years of experience with AI/LLM-based systems, agentic workflows, or intelligent data applications.
- 5 years of experience using the Gemini Command-Line Interface, Cider Agents, LLM Extensions, Agent Development Kit (ADK) Agents, etc.
- 3 years of experience partnering with stakeholders (e.g., users, partners, customers) and managing stakeholders or customers.
- Experience with machine learning for production workflows.
- Ability to operate across ambiguity and influence cross-functional technical decisions.

About the job
As a Technical Solutions Consultant, you will be responsible for the technical relationship with our largest advertising clients and/or product partners. You will lead cross-functional teams in Engineering, Sales, and Product Management to leverage emerging technologies for our external clients/partners. From concept design and testing to data analysis and support, you will oversee the technical execution and business operations of Google's online advertising platforms and/or product partnerships.

You will balance business and partner needs with technical constraints, develop innovative, cutting-edge solutions, and act as a partner and consultant to those you work with.
You will also be able to build tools and automate products, oversee the technical execution and business operations of Google's partnerships, and develop product strategy while prioritizing projects and resources.

Whether it is paying online with Autofill, using tap and pay in stores, or using the Google Pay app, the Payments team at Google is focused on making payments simple, seamless, and secure. In addition to consumer payment technologies, the Payments team also powers the money movement between Google and its consumers and businesses.

Responsibilities
Design and build agentic analytics systems leveraging Large Language Models (LLMs) and AI to automate insight generation and workflows.
Translate product needs into production-grade data systems (models, pipelines, and services).
Lead the design and coordination of analytics infrastructure powering product analytics across Payments.
Build AI-native analytics workflows, embedding intelligence directly into data and decision systems.
Partner with Product, Data Science, and Engineering to shape problem definitions and deliver solutions.
Drive data quality, reliability, and governance standards across critical analytics systems.
Build internal tools and frameworks to enable self-serve analytics and faster iteration for product teams.
Negotiable
No requirement for relevant working experience
・Collaborate with product, risk-control, and operations teams to define data models and monitoring metrics for business needs
・Build ETL pipelines to collect and clean behavioral data from the platform (e.g., betting records, wagering turnover, click behavior)
・Develop and maintain anomaly-detection models (e.g., offsetting bets to farm rebates, bot behavior, arbitrage users)
・Use machine learning or statistical models to predict player retention, LTV, and churn risk
・Design risk-control strategies to strengthen the platform's financial and behavioral risk controls
・Produce regular analysis reports with actionable recommendations for product or operations improvements
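One toy illustration of the anomaly-detection bullet above (not the team's actual models; the player IDs, turnover figures, and threshold are invented for the example) is a robust z-score over per-player wagering turnover:

```python
# Sketch: flag players whose wagering turnover deviates sharply from the
# population, using a median/MAD-based robust z-score so the outliers we
# are hunting do not skew the baseline themselves.
from statistics import median

def robust_z_scores(values):
    """Robust z-scores based on the median and median absolute deviation."""
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0  # guard div-by-zero
    return [0.6745 * (v - med) / mad for v in values]

def flag_anomalous_players(turnover_by_player, threshold=3.5):
    """Return player ids whose turnover z-score exceeds the threshold."""
    players = list(turnover_by_player)
    scores = robust_z_scores([turnover_by_player[p] for p in players])
    return [p for p, z in zip(players, scores) if abs(z) > threshold]

turnover = {"p1": 120, "p2": 130, "p3": 125, "p4": 118, "p5": 9500}
print(flag_anomalous_players(turnover))  # → ['p5']
```

A production system would compute this per time window over Spark/Redshift data and combine it with behavioral features, but the statistical core is the same.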
Spark
Redshift
Python
50K ~ 120K TWD / month
3 years of experience required
Managing staff numbers: not specified
Logitech is the Sweet Spot for people who want their actions to have a positive global impact while having the flexibility to do it in their own way.

The role and team:
As an Audio Machine Learning Data Engineer on the Logitech Hardware Audio Machine Learning and DSP Product team, you will work on developing and managing our audio datasets and data pipelines. This work directly influences the innovative audio experiences we deliver to our customers.

The Audio Machine Learning Data Engineer's key responsibilities include:
Data Pipeline Management: Ensuring the integrity and quality of Audio Machine Learning data pipelines and datasets, which involves robust data augmentation and managing workflows for supervised, unsupervised, and semi-supervised Machine Learning audio applications.
Model Development and Deployment: Collaborating with the team to develop and deploy Audio Machine Learning models, specifically targeting platforms with strict resource limitations (such as Tensilica DSP, ARM, and RISC-V).

Your Contribution:
Be Yourself. Be Open. Stay Hungry and Humble. Collaborate. Challenge. Decide and just Do. Share our passion for Equality and the Environment. These are the behaviors and values you'll need for success at Logitech.
In this role you will:
Design and manage audio data collection, curation, labeling, cleaning, and augmentation pipelines.
Evaluate and implement scalable data augmentation techniques.
Establish and maintain high-quality, well-versioned, and documented datasets essential for training, validation, and benchmarking of audio Machine Learning models.
Build automated tools for monitoring and ensuring the quality and statistical diversity of audio data.
Formulate and execute strategies for continuous improvement of existing datasets.

Key Qualifications:
For consideration, you must bring the following minimum skills and experiences to our team:
Audio Data Expertise: A minimum of 3 years of direct experience working with extensive audio datasets, including advanced data augmentation and preprocessing techniques for audio Machine Learning.
Python Proficiency: Strong proficiency in Python for both Machine Learning model development and automating data pipelines.
Data Pipelines and Machine Learning Frameworks: Proven expertise in building scalable data pipelines and in employing Machine Learning frameworks (TensorFlow, Keras) with large-scale, complex datasets.

Preferred Qualifications:
Expert-level skills in audio analysis, including listening and artifact detection, with a proven track record of validating performance across diverse datasets.
Strong familiarity with designing, executing, and statistically analyzing audio quality measurement protocols, specializing in managing data-driven objective and subjective evaluations.
A strong data-first mindset, with a demonstrated ability to drive innovation both independently and as part of a team.
Proficiency in C and SQL, along with experience using code version control systems (Git), is a valuable asset.
Excellent cross-functional communication, documentation, and leadership skills, emphasizing transparency in data and results.

Education:
Bachelor's or Master's degree in Electrical Engineering, Computer Science, or a related discipline. Equivalent practical experience in professional audio Machine Learning and data engineering will be considered; advanced or relevant continuing education is preferred.

Across Logitech we empower collaboration and foster play. We help teams collaborate and learn from anywhere, without compromising on productivity or continuity, so it should be no surprise that most of our jobs are open to work from home from most locations. Our hybrid work model allows some employees to work remotely while others work on-premises. Within this structure, you may have teams or departments split between working remotely and working in-house.

Logitech is an amazing place to work because it is full of authentic people who are inclusive by nature as well as by design. Being a global company, we value our diversity and celebrate all our differences. Don't meet every single requirement? Not a problem. If you feel you are the right candidate for the opportunity, we strongly recommend that you apply. We want to meet you!

We offer comprehensive and competitive benefits packages and working environments that are designed to be flexible and help you to care for yourself and your loved ones, now and in the future. We believe that good health means more than getting medical care when you need it. Logitech supports a culture that encourages individuals to achieve good physical, financial, emotional, intellectual and social wellbeing so we all can create, achieve and enjoy more and support our families.
We can't wait to tell you more about them; there are too many to list here, and they vary based on location.

All qualified applicants will receive consideration for employment without regard to race, sex, age, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability. If you require an accommodation to complete any part of the application process, are limited in the ability to do so, or are unable to access or use this online application process and need an alternative method for applying, you may contact us toll-free at 1-510-713-4866 for assistance and we will get back to you as soon as possible.
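The data augmentation work described in this role can be sketched in miniature (this is not Logitech's pipeline; the sample rate, SNR target, and pure-Python lists standing in for real audio arrays are all assumptions for illustration):

```python
# Toy sketch of one common audio-augmentation step: mixing noise into a
# clean signal at a chosen signal-to-noise ratio (SNR), as used when
# synthesizing training data for noise-robust audio ML models.
import math
import random

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean/noise power ratio equals snr_db, then add."""
    p_clean = sum(x * x for x in clean) / len(clean)
    p_noise = sum(x * x for x in noise) / len(noise)
    # Noise power needed for the requested SNR, and the matching gain.
    target = p_clean / (10 ** (snr_db / 10))
    gain = math.sqrt(target / p_noise)
    return [c + gain * n for c, n in zip(clean, noise)]

random.seed(0)
clean = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
noise = [random.gauss(0, 1) for _ in range(16000)]
noisy = mix_at_snr(clean, noise, snr_db=10)
```

A real pipeline would apply this (plus reverberation, gain, and time-shift augmentations) to large batches with NumPy or TensorFlow ops rather than Python lists.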
Negotiable
No requirement for relevant working experience
