Cake Job Search

COMMEET (擁樂數據服務股份有限公司)
COMMEET (https://commeet.co) is expanding its team and welcomes passionate new members to join us.
[Job Description]
1. Organize and label data to ensure its accuracy and reliability.
2. Train OCR models and evaluate model performance.
3. Build, maintain, and optimize data processing workflows and architecture (data pipelines).
4. Strictly protect the sensitivity of customers' personal data and respect the confidentiality of customer information.
5. Other tasks assigned by supervisors.
1F.-1, Building A, No. 225, Section 2, Chang'an East Road
Songshan District
Taipei
200 ~ 250 TWD / hour
No requirement for relevant working experience
No management responsibility
Pinkoi
[About the Pinkoi Data Squad]
We are Pinkoi's data team. Our goal is to build an effective data platform while ensuring data quality and stability, so that data delivers maximum value at Pinkoi. As a Data Engineer you will work closely with Business Analysts and ML Engineers: exploring problems through data, planning the subsequent data pipeline implementation, and providing end-to-end data solutions that expand the analytics capabilities of every department and strengthen the overall business intelligence system.

What you will be responsible for:
- Work with analysts and business teams in product, marketing, and advertising to understand requirements and plan suitable data pipelines.
- Design, implement, and maintain ETL/ELT processes, ensuring data quality, stability, and freshness.
- Build and optimize data models to support reporting, analytics, machine learning, and other scenarios.
- Continuously iterate on and optimize the data infrastructure to improve scalability and operational efficiency.
- Build monitoring and alerting mechanisms to ensure pipeline success rates and data SLAs.

The experience and traits we hope you bring:
- A strong passion for big data and data-driven engineering.
- An understanding of ETL or ELT processes, with data modeling and data pipeline development skills.
- Experience collaborating with product and marketing teams, and the ability to work with business analysts, data engineers, and machine learning engineers.
- Eagerness to learn the latest technologies and continuously sharpen your professional skills.
- Openness and curiosity toward large language models (LLMs), and a willingness to learn and apply them to data engineering work (e.g., data cleaning, field completion, document generation, automated testing) to explore how new tools can improve efficiency and user experience.

Requirements:
- A degree in mathematics, statistics, computer science, data science, or a related field, or equivalent professional ability.
- One or more years of work experience in data analytics or data engineering.
- Proficiency in at least one programming language such as Python, Scala, or Java; Pinkoi mainly uses Python.
- Familiarity with GNU/Linux systems; Pinkoi uses Ubuntu.
- Familiarity with relational databases such as MySQL.

Nice to have:
- Familiarity with ad attribution logic, or related data processing experience.
- Experience processing user behavior data.
- Experience operating or setting up BI tools such as Superset, Tableau, or Looker Studio; Pinkoi uses Superset and Redash.
- Knowledge of distributed computing, e.g., one or more of Spark, Hadoop, or Hive; Pinkoi mainly uses PySpark.
- Familiarity with cloud services such as AWS Athena, S3, Glue, or GCP BigQuery; Pinkoi mainly uses AWS.
- An understanding of dbt fundamentals and usage, such as building data models, writing tests, and creating macros.
- Experience applying LLMs to data engineering scenarios, e.g., rule extraction, field alignment and completion, document / data catalog generation, or data quality checks.

#Don't overthink it, come be a Pinkoi person! If you are confident you can excel in this role, please submit your resume and the take-home assignment and apply now!
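The "monitoring and data SLA" responsibility above can be sketched as a simple freshness check: compare each table's newest load timestamp against its allowed lag and alert on violations. The table names and SLA windows below are illustrative assumptions, not Pinkoi's actual configuration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical SLA windows per table (illustrative only).
SLA = {"orders": timedelta(hours=1), "user_events": timedelta(hours=6)}

def check_freshness(latest_loaded_at: dict, now: datetime) -> list:
    """Return the names of tables whose newest data violates its SLA."""
    stale = []
    for table, max_lag in SLA.items():
        loaded_at = latest_loaded_at.get(table)
        # A table with no recorded load at all also counts as stale.
        if loaded_at is None or now - loaded_at > max_lag:
            stale.append(table)
    return stale

now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
status = {
    "orders": datetime(2025, 1, 1, 11, 30, tzinfo=timezone.utc),    # 30 min old: fresh
    "user_events": datetime(2025, 1, 1, 3, 0, tzinfo=timezone.utc), # 9 h old: stale
}
print(check_freshness(status, now))  # ['user_events']
```

In practice a check like this would run on a schedule and feed an alerting channel; the point is that an SLA is just a per-table maximum lag compared against the pipeline's last successful load.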
800K ~ 1.5M TWD / year
No requirement for relevant working experience
核桃運算股份有限公司
We are looking for a data engineer with 3-5 years of experience to join our data team. You will be responsible for designing, building, and optimizing efficient data pipelines, ensuring the accuracy and timeliness of data from source to analysis. This role requires not only strong Python and ETL development skills, but also a deep understanding of data source standards (such as CKAN and SDMX), so you can design data architectures from an analyst's perspective and ensure the resulting data precisely supports business analysis and decision-making.

[Responsibilities]
1. Data pipeline development and optimization (Core Engineering)
- Develop efficient, scalable ETL/ELT processes using Python and SQL.
- Use Apache Airflow for workflow scheduling, automation, and dependency design.
- Research and integrate different data sources (APIs, databases, flat files), such as World Bank open data or statistical data formats.
2. Data quality and system monitoring (Quality & Observability)
- Build data quality checks to ensure data completeness, consistency, and correctness.
- Design and maintain monitoring so that pipeline failures or data anomalies trigger automatic alerts, then perform root cause analysis (RCA) and remediation.
3. Data architecture and model design (Architecture & Modeling)
- Understand downstream business analysis and reporting needs, and design suitable data models so that data is easy to extract and analyze.
- Help maintain the data dictionary and documentation to improve data observability and team collaboration.
4. Cross-team collaboration and technical support (Collaboration)
- Communicate with business units and data analysts, translate business requirements into technical specifications, and support presales and proof-of-concept (POC) technical validation.
- Data domain knowledge: experience handling multiple data formats such as JSON, XML, and CSV; (key) an understanding of open data standards or statistical data exchange formats (such as Open Data API integration or SDMX structures) is preferred; basic familiarity with data analysis methods (e.g., BI tools, statistical analysis) and with turning raw data into ready-for-analysis structures.
- Soft skills: strong logical thinking, the ability to independently resolve complex data anomalies, and the ability to write high-quality technical documentation.
5. Other tasks assigned by supervisors.
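The data quality checks in item 2 can be illustrated with a minimal row-validation pass that separates good rows from rule violations before loading. The field names and rules here (id present, id unique, amount non-negative) are hypothetical examples, not the company's actual rules.

```python
def run_quality_checks(rows):
    """Return (valid_rows, errors); each error records the row index and the failed rule."""
    valid, errors = [], []
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("id") is None:
            errors.append((i, "completeness: missing id"))
        elif row["id"] in seen_ids:
            errors.append((i, "uniqueness: duplicate id"))
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            errors.append((i, "validity: amount must be a non-negative number"))
        else:
            seen_ids.add(row["id"])
            valid.append(row)
    return valid, errors

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": 5.0},   # duplicate id
    {"id": 2, "amount": -3},    # negative amount
    {"id": None, "amount": 7},  # missing id
]
valid, errors = run_quality_checks(rows)
print(len(valid), len(errors))  # 1 3
```

In a real pipeline each rule would usually emit a metric, and an error rate above a threshold would fail the run and page the on-call engineer rather than silently dropping rows.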
Linux, Git, Python, Data Modeling, ETL, MySQL, data analysis, basic operating system administration, data backup and recovery, database system administration and maintenance, business intelligence, project communication / integration management
40K ~ 55K TWD / month
3 years of experience required
No management responsibility
7-ELEVEN VIETNAM
Salary: Negotiable. Location: 7-Eleven Head Office. Application deadline: 20/11 to 20/12/2025. Headquarters address: 7th Floor, Cobi Tower 2, No. 2-4, Street 8, Tan My Ward, Ho Chi Minh City.

As a Data Engineer Intern, you will work closely with our data engineering team to support and contribute to various data-related projects. Your responsibilities will include:
- Data Pipeline Development: Collaborate with the data engineering team to build, maintain, and optimize data pipelines for processing and transferring data from various sources to storage and analytical systems.
- Data Integration: Assist in integrating data from different sources (e.g., databases, APIs, flat files) into our data ecosystem, ensuring data accuracy, quality, and consistency.
- Database Management: Work on creating and maintaining databases, ensuring data storage and retrieval efficiency, and implementing data security measures.
- Data Transformation: Help transform and clean data to make it suitable for analysis and reporting. This may involve using ETL (Extract, Transform, Load) processes.
- Research and Learning: Stay up to date with the latest data engineering technologies and best practices, and actively participate in ongoing learning and skill development.
More details will be discussed at the interview.

We are cool:
- Developing the Retail Management System helps 7-Eleven Vietnam save costs and earn money: Ecommerce Website, Loyalty Application, Supply Chain System, Warehouse Management System, Payment Gateway, POS, HHT, ...
- Building a loyalty application with React Native
- Building an ecommerce website
- Modern tech stack (Google Cloud Platform, Docker, Kubernetes, Rancher) and tools (Google Chat, Jira)
- Catching up with new technologies through the process: experiment, trial, adopt
- In partnership with Grab, ZaloPay, Momo, VnPay, Airpay, ShopeeFood, GrabMart, GotIt, TAPTAP, Vani, and Google for digital tech integrations
- Agile at heart: everything is flexible and possible
- Flat team, nice colleagues, no boss
- Strong engineering culture; read more of our stories on the 7-Eleven Viet Nam Engineering Blog

And some benefits:
- Internship allowance
- A flexible working environment in terms of time and office
- Chances for training and development (outdoor learning, training courses, soft skills)
No requirement for relevant working experience
No management responsibility
WorldQuant
WorldQuant develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies, the foundation of a balanced, global investment platform.

WorldQuant is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement. Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate outstanding talent. There is no roadmap to future success, so we need people who can help us build it.

Technologists at WorldQuant research, design, code, test, and deploy firmwide platforms and tooling while working collaboratively with researchers. Our environment is relaxed yet intellectually driven. We seek people who think in code and are motivated by being around like-minded people.

The Role: We are looking for an experienced Data Engineer who will help us build and maintain an ecosystem for processing multiple datasets from different sources (both internal and external) vital to the firm's investment operations.
What You'll Do:
- Create an automated data processing system and monitor and maintain it
- Integrate multiple data sources and databases into one system
- Develop interfaces and microservices in Python
- Enrich the company's data by applying NLP and AI models
- Preprocess and cleanse semi-structured and unstructured data
- Develop efficient algorithms for data processing
- Test and integrate external APIs
- Support the Business Analyst team

What You'll Bring:
- A bachelor's or master's degree in a technical or quantitative field from a top university
- At least 3 years of experience as a data engineer or software developer
- Excellent programming skills
- Experience with data processing using Python
- Experience with building databases
- Experience with containers and Kubernetes
- Scripting skills in a UNIX environment: shell, Python Fire, etc.
- Experience with code versioning tools (e.g., Git) and issue tracking tools (e.g., Jira)
- Debugging skills, an eye for detail, and a knack for identifying problems
- Strong problem-solving skills and an analytical mindset
- A passion for working with data

What We Offer:
- A competitive and attractive compensation package with a clear career road map, where you feel challenged every day
- A strong culture of learning and development: training courses, a library, speakers, and share-and-learn events
- Learn from whoever sits next to you! Working at WorldQuant, you are surrounded by smart and talented people
- Premium health insurance and an Employee Assistance Program
- A generous time-off policy, recreation sabbatical leave (based on tenure), and Trade Union benefits for staff and family
- Monthly team-building activities: local engagement events and team lunches; employee clubs for football, ping-pong, badminton, yoga, running, PS5, movies, etc.
- An annual company trip and occasional global conferences, with the opportunity to travel and connect with our global teams
- Happy hour with a tea break, snacks, and meals every day in the office!

By submitting this application, you acknowledge and consent to the terms of the WorldQuant Privacy Policy. The privacy policy offers an explanation of how and why your data will be collected, how it will be used and disclosed, how it will be retained and secured, and what legal rights are associated with that data (including the rights of access, correction, and deletion). The policy also describes legal and contractual limitations on these rights. The specific rights and obligations of individuals living and working in different areas may vary by jurisdiction. Copyright © 2025 WorldQuant, LLC. All Rights Reserved.

WorldQuant is an equal opportunity employer and does not discriminate in hiring on the basis of race, color, creed, religion, sex, sexual orientation or preference, age, marital status, citizenship, national origin, disability, military status, genetic predisposition or carrier status, or any other protected characteristic as established by applicable law.
Negotiable
No requirement for relevant working experience
WorldQuant
WorldQuant develops and deploys systematic financial strategies across a broad range of asset classes and global markets. We seek to produce high-quality predictive signals (alphas) through our proprietary research platform to employ financial strategies focused on market inefficiencies. Our teams work collaboratively to drive the production of alphas and financial strategies, the foundation of a balanced, global investment platform.

WorldQuant is built on a culture that pairs academic sensibility with accountability for results. Employees are encouraged to think openly about problems, balancing intellectualism and practicality. Excellent ideas come from anyone, anywhere. Employees are encouraged to challenge conventional thinking and possess an attitude of continuous improvement. Our goal is to hire the best and the brightest. We value intellectual horsepower first and foremost, and people who demonstrate outstanding talent. There is no roadmap to future success, so we need people who can help us build it.

Technologists at WorldQuant research, design, code, test, and deploy firmwide platforms and tooling while working collaboratively with researchers and portfolio managers. Our environment is relaxed yet intellectually driven. We seek people who think in code and are motivated by being around like-minded people.

The Role: We are looking for an experienced Python developer who will help us build and maintain an ecosystem for processing multiple datasets from different sources (both internal and external) vital to the firm's investment operations.
Key Responsibilities:
- Create optimized and fault-tolerant data processing pipelines
- Develop interfaces and microservices in Python
- Develop efficient algorithms for data processing
- Test and integrate external APIs
- Enrich the company's data
- Implement data cleansing, transformation, and validation to ensure data integrity and quality
- Monitor and maintain the health and performance of web services and databases
- Troubleshoot and resolve data processing issues

What You'll Bring:
- A bachelor's or master's degree in a technical or quantitative field from a top university
- At least 3 years of experience as a data engineer or software developer
- Excellent programming skills
- Experience with data processing using Python
- Experience with building databases
- Experience with containers and Kubernetes
- Scripting skills in a UNIX environment: shell, Python Fire, etc.
- Experience with code versioning tools (e.g., Git) and issue tracking tools (e.g., Jira)
- Debugging skills, an eye for detail, and a knack for identifying problems
- Strong problem-solving skills and an analytical mindset
- A passion for working with data

What We Offer:
- A competitive compensation package
- Core benefits, including premium private health insurance and life insurance with a savings plan
- Support for every aspect of life through an Employee Assistance Program and fully covered sick leave
- A strong culture of learning and development: training courses, a library, guest speakers, share-and-learn events, and global conferences
- Regular offsite team-building events, annual conferences, and occasional global summits, with the opportunity to travel and connect with our local and global teams

By submitting this application, you acknowledge and consent to the terms of the WorldQuant Privacy Policy. The privacy policy offers an explanation of how and why your data will be collected, how it will be used and disclosed, how it will be retained and secured, and what legal rights are associated with that data (including the rights of access, correction, and deletion). The policy also describes legal and contractual limitations on these rights. The specific rights and obligations of individuals living and working in different areas may vary by jurisdiction. Copyright © 2025 WorldQuant, LLC. All Rights Reserved.

WorldQuant is an equal opportunity employer and does not discriminate in hiring on the basis of race, color, creed, religion, sex, sexual orientation or preference, age, marital status, citizenship, national origin, disability, military status, genetic predisposition or carrier status, or any other protected characteristic as established by applicable law.
Negotiable
No requirement for relevant working experience
Google
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 5 years of experience in designing data pipelines and dimensional data modeling for sync and async system integration and implementation, using internal (e.g., Flume) and external (e.g., Dataflow, Spark) stacks.
- 5 years of experience with coding in one or more programming languages.
- 5 years of experience working with data infrastructure and data models by performing exploratory queries and scripts.

Preferred qualifications:
- Master's degree in Computer Science, Engineering, Statistics, Math, or a related quantitative field.
- Experience with data warehouses, maintaining distributed data platforms, and data lakes.
- Ability to break down multi-dimensional problems.
- Ability to navigate ambiguity and work in a changing environment with multiple stakeholders.
- Excellent business and technical communication, organizational, and investigative skills.

About the job
In this role, you will be part of a community of analytics professionals who work on projects ranging from building and maintaining data platforms and developing data pipelines that help run the business, to building tools to analyze the content-partnership and creator ecosystem. You will be responsible for democratizing YouTube's business data, helping business leaders make sense of business operations through accurate business intelligence.

At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we listen, share, and build community through our stories. We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of cutting-edge technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world.
We explore new ideas, solve real problems, and have fun, and we do it all together.

Responsibilities
- Build and maintain data platforms to enable data reliability, data integrity, and data governance, enabling accurate and consistent data sets.
- Design, build, and optimize the data architecture and extract, transform, and load (ETL) pipelines.
- Conduct requirements-gathering and project-scoping sessions with subject matter experts, business users, and executive stakeholders to discover and define business data needs.
- Work with analysts to productionize capabilities, including data integrations and transformations, model features, and statistical and machine learning models.
- Write and review end-user and technical documents, including requirements and design documents for existing and future data systems, as well as data standards and policies.
- Engage with the analyst community: communicate with analysts to understand user journeys and data-sourcing inefficiencies, advocate best practices, and lead analyst training.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Negotiable
No requirement for relevant working experience
Google
Minimum qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems, a related quantitative field, or equivalent practical experience.
- 3 years of experience in a Data Engineering, Data Infrastructure, or Data Analytics role.
- Experience with data engineering and writing software in Python or SQL.
- Experience in managing and maintaining data projects from conception to production.

Preferred qualifications:
- Experience in technical consulting.
- Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting or investigative tools and environments.

About the job
The Data and Analytics Services (DAS) team is a data hub for the Platforms and Devices (PD) organization. Our mission is to empower the devices organization with timely, accurate, and actionable data. We build and manage the centralized data warehouse, creating data pipelines and scalable analytics solutions on Google Cloud Platform. By partnering closely with business stakeholders, we translate complex data needs into tangible technical solutions that drive decision-making, product strategy, and operational efficiency via data-driven insights. We are committed to upholding data quality, governance, and best practices, ensuring that data is a reliable and transformative asset for all of Platforms and Devices.

As a Data Engineer on the DAS team, you will play a significant role in designing and building the next generation of our data infrastructure. You will be responsible for architecting, implementing, and optimizing complex and scalable data pipelines, moving beyond basic development to own key components of our data warehouse. With your technical expertise, you will handle massive datasets, write efficient SQL and Python code, and collaborate effectively with executive stakeholders and other engineers.
You will not only build innovative data foundations and AI-driven insights solutions, but also help define the standards and best practices that elevate the entire team, driving data quality and AI-readiness initiatives.

The Platforms and Devices team encompasses Google's various computing software platforms across environments (desktop, mobile, applications), as well as our first-party devices and services that combine the best of Google AI, software, and hardware. Teams across this area research, design, and develop new technologies to make our users' interaction with computing faster and more seamless, building innovative experiences for our users around the world.

Responsibilities
- Lead the design, development, and maintenance of data pipelines and Extract, Transform, and Load / Extract, Load, and Transform (ETL/ELT) processes for the centralized data warehouse.
- Architect and optimize SQL queries for data transformation, analytics, and reporting.
- Develop and manage data foundations and models specifically designed to support Artificial Intelligence/Machine Learning (AI/ML) initiatives and the generation of AI-driven insights.
- Develop and maintain data infrastructure to support the data foundations.
- Partner with executive business stakeholders, data scientists, and AI teams to understand requirements and architect data solutions.
- Collaborate with other data engineers to deliver data solutions and promote technical growth within the team.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law.
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Negotiable
No requirement for relevant working experience
Logitech
Logitech is the sweet spot for people who want their actions to have a positive global impact while having the flexibility to do it in their own way.

The Team and Role: As an Audio ML Data Engineer on the Logitech Hardware Audio ML and DSP Product team, you will develop and manage our audio datasets and data pipelines. This work directly influences the innovative audio experiences we deliver to our customers.

The Audio ML Data Engineer's key responsibilities include:
- Data Pipeline Management: Ensuring the integrity and quality of audio ML data pipelines and datasets, which involves robust data augmentation and managing workflows for supervised, unsupervised, and semi-supervised ML audio applications.
- Model Development and Deployment: Collaborating with the team to develop and deploy audio ML models, specifically targeting platforms with strict resource limitations (such as Tensilica DSP, ARM, and RISC-V).

Your Contribution: Be Yourself. Be Open. Stay Hungry and Humble. Collaborate. Challenge. Decide and just Do. Share our passion for Equality and the Environment. These are the behaviors and values you'll need for success at Logitech.
In this role you will:
- Design and manage audio data collection, curation, labeling, cleaning, and augmentation pipelines.
- Evaluate and implement scalable data augmentation techniques.
- Establish and maintain high-quality, well-versioned, and documented datasets essential for training, validation, and benchmarking of audio ML models.
- Build automated tools for monitoring and ensuring the quality and statistical diversity of audio data.
- Formulate and execute strategies for the continuous improvement of existing datasets.

Key Qualifications: For consideration, you must bring the following minimum skills and experiences to our team:
- Audio Data Expertise: A minimum of 3 years of direct experience working with extensive audio datasets, including advanced data augmentation and preprocessing techniques for audio ML.
- Python Proficiency: Strong proficiency in Python for both ML model development and automating data pipelines.
- Data Pipelines and ML Frameworks: Proven expertise in building scalable data pipelines and in employing ML frameworks (TensorFlow, Keras) with large-scale, complex datasets.

Preferred Qualifications:
- Expert-level skills in audio analysis, including listening and artifact detection, with a proven track record of validating performance across diverse datasets.
- Strong familiarity with designing, executing, and statistically analyzing audio quality measurement protocols, specializing in managing data-driven objective and subjective evaluations.
- A strong data-first mindset, with a demonstrated ability to drive innovation both independently and as part of a team.
- Proficiency in C and SQL, along with experience using code version control systems (Git), is a valuable asset.
- Excellent cross-functional communication, documentation, and leadership skills, emphasizing transparency in data and results.

Education: Bachelor's or Master's degree in Electrical Engineering, Computer Science, or a related discipline. Equivalent practical experience in professional audio ML and data engineering is considered; advanced or relevant continuing education is preferred.

Across Logitech we empower collaboration and foster play. We help teams collaborate and learn from anywhere, without compromising on productivity or continuity, so it should be no surprise that most of our jobs are open to work from home from most locations. Our hybrid work model allows some employees to work remotely while others work on-premises. Within this structure, you may have teams or departments split between working remotely and working in-house.

Logitech is an amazing place to work because it is full of authentic people who are inclusive by nature as well as by design. Being a global company, we value our diversity and celebrate all our differences. Don't meet every single requirement? Not a problem. If you feel you are the right candidate for the opportunity, we strongly recommend that you apply. We want to meet you!

We offer comprehensive and competitive benefits packages and working environments that are designed to be flexible and help you care for yourself and your loved ones, now and in the future. We believe that good health means more than getting medical care when you need it. Logitech supports a culture that encourages individuals to achieve good physical, financial, emotional, intellectual, and social wellbeing, so we all can create, achieve, and enjoy more and support our families.
We can't wait to tell you more about them; there are too many to list here, and they vary based on location.

All qualified applicants will receive consideration for employment without regard to race, sex, age, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.

If you require an accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact us toll-free at 1-510-713-4866 for assistance, and we will get back to you as soon as possible.
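The data augmentation work described in this listing often includes mixing noise into clean recordings at a controlled signal-to-noise ratio (SNR). Here is a minimal, dependency-free sketch of that one step; plain Python lists stand in for audio buffers, and nothing here reflects Logitech's actual pipeline.

```python
import math
import random

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR, then add it to `clean`.

    Assumes `noise` is non-silent and at least as long as `clean`.
    """
    power = lambda x: sum(s * s for s in x) / len(x)
    # Choose the noise power that yields the target SNR relative to the clean signal.
    target_noise_power = power(clean) / (10 ** (snr_db / 10))
    scale = math.sqrt(target_noise_power / power(noise))
    return [c + scale * n for c, n in zip(clean, noise)]

random.seed(0)
# One second of a 440 Hz tone at 16 kHz, plus white Gaussian noise.
clean = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
noise = [random.gauss(0.0, 1.0) for _ in range(16000)]
noisy = mix_at_snr(clean, noise, snr_db=20)
```

A production pipeline would sweep a range of SNRs, draw noise from recorded environments rather than a Gaussian, and work on array buffers (e.g., NumPy) instead of lists; the scaling math is the same.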
Negotiable
No requirement for relevant working experience
Logitech
Logitech is the sweet spot for people who want their actions to have a positive global impact while having the flexibility to do it in their own way.

The Role: We are looking for a candidate to join our team who will be involved in the ongoing development of our Enterprise Data Warehouse (EDW) and in supporting our POS (Point of Sale) Channel Data Management team. This role includes participating in the loading and extraction of data, including POS data, to and from the warehouse. The ideal candidate will be involved in all stages of the project lifecycle, from initial planning through to deployment in production. A key focus of the role will be data analysis, data modeling, and ensuring these aspects are successfully implemented in the production environment.

Your Contribution: Be Yourself. Be Open. Stay Hungry and Humble. Collaborate. Challenge. Decide and just Do. These are the behaviors you'll need for success at Logitech.

In this role you will:
- Design, develop, document, and test ETL solutions using industry-standard tools.
- Design physical and reporting data models for seamless cross-functional and cross-system data reporting.
- Enhance point-of-sale datasets with additional data points to provide stakeholders with useful insights.
- Ensure data integrity by rigorously validating and reconciling data obtained from third-party providers.
- Collaborate with data providers and internal teams to address customer data discrepancies and enhance data quality.
- Work closely across our DI teams to deliver datasets optimized for consumption in reporting and visualization tools like Tableau.
- Collaborate with data architects, analysts, and business stakeholders to gather requirements and translate them into data solutions.
- Participate in design discussions with enterprise architects and recommend design improvements.
- Develop and maintain conceptual, logical, and physical data models with their corresponding metadata.
- Work closely with cross-functional teams to integrate data solutions.
- Create and maintain clear documentation for data processes, data models, and pipelines.
- Integrate Snowflake with various data sources and third-party tools.
- Manage code versioning and deployment of Snowflake objects using CI/CD practices.

Key Qualifications: For consideration, you must bring the following minimum skills and behaviors to our team:
- A total of 6 to 10 years of experience in ETL design, development, and populating data warehouses, including experience with heterogeneous OLTP sources such as Oracle R12 ERP systems and other cloud technologies.
- At least 3 years of hands-on experience with Pentaho Data Integration or similar ETL tools.
- Practical experience working with cloud-based data warehouses such as Snowflake and Redshift.
- Significant hands-on experience with Snowflake utilities, including SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, the query optimizer, metadata management, data sharing, Snowflake AI/ML, and stored procedures.
- Ability to design and develop complex data pipelines and ETL workflows in Snowflake using advanced SQL, UDFs, UDTFs, and stored procedures (JavaScript/SQL).
- Comprehensive expertise in databases, data acquisition, ETL strategies, and the tools and technologies within Pentaho DI and Snowflake.
- Experience with complex SQL functions and with transforming large data sets.
- Demonstrated experience in designing complex ETL processes for extracting data from various sources, including XML files, JSON, RDBMS, and flat files.
- Exposure to standard support-ticket management tools.
- A strong understanding of Business Intelligence and data warehousing concepts and methodologies.
- Extensive experience in data analysis and root cause analysis, along with proven problem-solving and analytical-thinking capabilities.
- A solid understanding of software engineering principles and proficiency in working with Unix/Linux/Windows operating systems, version control, and office software.
- A deep understanding of data warehousing principles and cloud architecture, including SQL optimization techniques for building efficient and scalable data systems.
- Familiarity with Snowflake's unique features, such as its multi-cluster architecture and data sharing capabilities.
- Excellent skills in writing and optimizing SQL queries to ensure high performance and data accuracy across all systems.
- The ability to troubleshoot and resolve data quality issues promptly, maintaining data integrity and reliability.
- Strong communication skills, essential for effective collaboration with both technical and non-technical teams to ensure a clear understanding of data engineering requirements.

In addition, preferred skills and behaviors include:
- Exposure to an Oracle ERP environment.
- A basic understanding of reporting tools like OBIEE and Tableau.

Education: BS/BTech/MCA/MS in Computer Science, Information Systems, or a related technical field, or equivalent industry expertise. Fluency in English.

Logitech is the sweet spot for people who are passionate about products, making a mark, and having fun doing it. As a company, we're small and flexible enough for every person to take initiative and make things happen. But we're big enough in our portfolio, and reach, for those actions to have a global impact. That's a pretty sweet spot to be in and we're always striving to keep it that way.

All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.

Across Logitech we empower collaboration and foster play. We help teams collaborate and learn from anywhere, without compromising on productivity or continuity, so it should be no surprise that most of our jobs are open to work from home from most locations. Our hybrid work model allows some employees to work remotely while others work on-premises.
Within this structure, you may have teams or departments split between working remotely and working in-house.

Logitech is an amazing place to work because it is full of authentic people who are inclusive by nature as well as by design. Being a global company, we value our diversity and celebrate all our differences. Don't meet every single requirement? Not a problem. If you feel you are the right candidate for the opportunity, we strongly recommend that you apply. We want to meet you!

We offer comprehensive and competitive benefits packages and working environments that are designed to be flexible and help you care for yourself and your loved ones, now and in the future. We believe that good health means more than getting medical care when you need it. Logitech supports a culture that encourages individuals to achieve good physical, financial, emotional, intellectual, and social wellbeing, so we all can create, achieve, and enjoy more and support our families. We can't wait to tell you more about them; there are too many to list here, and they vary based on location.

All qualified applicants will receive consideration for employment without regard to race, sex, age, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.

If you require an accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact us toll-free at 1-510-713-4866 for assistance, and we will get back to you as soon as possible.
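The ETL work this listing centers on (extracting POS data from sources such as JSON or flat files, transforming it, and loading it into a cloud warehouse) can be sketched end-to-end with the standard library. Here sqlite3 stands in for a warehouse such as Snowflake, and the POS field names are illustrative assumptions, not Logitech's actual schema.

```python
import json
import sqlite3

# Extract: parse a JSON payload (a stand-in for a third-party POS feed).
raw = '[{"sku": "A1", "qty": "3", "price": "19.90"}, {"sku": "B2", "qty": "1", "price": "5.00"}]'
records = json.loads(raw)

# Transform: cast string fields to numeric types and derive a revenue column.
rows = [
    (r["sku"], int(r["qty"]), float(r["price"]), int(r["qty"]) * float(r["price"]))
    for r in records
]

# Load: create the target table and bulk-insert with parameter binding.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pos_sales (sku TEXT, qty INTEGER, price REAL, revenue REAL)")
conn.executemany("INSERT INTO pos_sales VALUES (?, ?, ?, ?)", rows)

total = conn.execute("SELECT SUM(revenue) FROM pos_sales").fetchone()[0]
print(round(total, 2))  # 64.7
```

A production version would add the validation and reconciliation steps the listing mentions between transform and load, and would target the warehouse's own bulk-loading path rather than row inserts.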
Negotiable
No requirement for relevant working experience
