Search Jobs

Logo of Cake Recruitment Consulting.
Company Introduction
This is a FinTech company deeply rooted in the Taiwan market with high user penetration, operating in a high-traffic, high-transaction-frequency business environment. Its products and services are embedded in everyday life and financial behavior, and the scale and complexity of its data continue to grow. The company is now entering a critical phase of upgrading its data infrastructure and governance: senior leadership has explicitly made "data-driven decision-making" the core of the next stage of growth, and is investing in a more stable, scalable data platform so that data truly becomes the decision engine for product and operations. This role sits at the center of company-level data strategy: not just managing a team, but actively participating in and shaping overall business direction.

Responsibilities
- Lead the data application department (Data Engineering, Data Analytics, BI / Data Science teams), managing roughly 5-10 members
- Plan and drive a company-level Data Platform (Data Lake / DWH / ETL / Streaming)
- Collaborate with engineering and system architecture teams to ensure the stability, availability, and scalability of data systems
- Review and optimize data flows, event systems, schemas, and overall data architecture design
- Establish data governance, quality control, access control, and consistent metric definitions
- Build metrics, dashboards, and analysis frameworks to support product, operations, marketing, and management decisions
- Drive A/B testing, data experiments, and behavioral analysis so that decisions are grounded in data
- Act as a cross-departmental bridge, coordinating technical and business needs and driving strategy execution

Tech Stack
- Data Platform: Data Lake, Data Warehouse, ETL / ELT, Streaming
- Big Data / Pipeline: Spark, Kafka, Airflow
- Data Ops: pipeline monitoring, version control, CI/CD
- Cloud / Hybrid: BigQuery, Snowflake (primarily on-premises, cloud as a supplement)
- BI / Analytics: metric design, dashboards, experiment analysis
Spark
Snowflake
Kafka
2M ~ 3.5M TWD / year
10 years of experience required
Manages 5-10 staff
Logo of 統一超商股份有限公司(7-ELEVEN).
[This position only accepts applications via the 104 website] Please submit your resume through the 7-ELEVEN recruitment page on 104: https://www.104.com.tw/job/8p2a1?jobsource=google

【Responsibilities】
1. Data modeling: Lead data architecture planning and Medallion model design, define enterprise-level Golden Table standards, and ensure models are highly extensible to support AI/ML applications.
2. BI reports and dashboards: Build visualization dashboards on Databricks with Python / SQL according to project needs, and proactively optimize report read performance.
3. CDP data maintenance: Lead the design of core CDP logic, including user ID mapping, behavioral tagging, and multi-channel attribution, to build a Single Customer View.
4. Data quality monitoring: Maintain the daily operation of data pipelines, perform data validation, and ensure report data consistency.
5. Cross-functional collaboration: Participate in business requirement discussions, translate business problems into data requirements, and help data scientists prepare feature data for model training.
6. Data governance: Define and drive data governance policies and standards, covering metadata management, access design, and lineage management, so that data assets are managed according to best practices.

【Required】
1. Python: Able to use object-oriented (OOP) and functional programming design patterns to write extensible, maintainable, and testable code.
2. SQL: Familiar with join logic, window functions, and performance tuning.
3. Data modeling: Hands-on design experience with data modeling (Star Schema / Snowflake Schema).
4. Databricks / Spark: Practical experience with the Databricks platform; able to manage data flows in Notebooks or Workflows, understand the purpose of the Bronze/Silver/Gold layers, place data in the correct layer based on its maturity, and use PySpark or Spark SQL for large-scale data transformations.
5. Orchestration: Basic experience with at least one scheduling/orchestration tool (Databricks Workflows or Airflow); able to orchestrate and optimize complex task schedules.
6. Visualization: Experience with at least one BI tool (Databricks Dashboard / Power BI / Tableau / Looker), basic visual design sense, and knowledge of how to present KPIs clearly.

【Nice to have】
1. Retail or e-commerce domain knowledge: Understanding of common retail/e-commerce metrics (e.g., AOV, conversion rate, attribution models).
2. Data management: Exposure to metadata management and data governance.
3. Cloud: Basic operating experience with at least one of AWS / GCP / Azure.
4. CI/CD: Experience with Databricks Asset Bundle / GitHub Actions; able to automate data pipeline deployments and introduce data quality monitoring.

What does the uniopen team do? https://blog.104.com.tw/the-innovative-integration-of-the-uniopen-team/
50K ~ 120K TWD / month
5 years of experience required
No management responsibility
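The window-function skill the posting above asks for can be illustrated with a small, self-contained sketch. It uses Python's built-in sqlite3 rather than Databricks SQL, and the daily_sales table and its columns are hypothetical examples, not data from the posting.

```python
import sqlite3

# Illustrative window-function query: rank each store's revenue
# within its region. Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_sales (region TEXT, store TEXT, revenue INTEGER)")
conn.executemany(
    "INSERT INTO daily_sales VALUES (?, ?, ?)",
    [("North", "S1", 500), ("North", "S2", 800), ("South", "S3", 300)],
)

rows = conn.execute(
    """
    SELECT region, store, revenue,
           RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS rnk
    FROM daily_sales
    """
).fetchall()
for row in rows:
    print(row)
```

PARTITION BY restarts the ranking per region, so S2 ranks first in North while S3 still ranks first in South, something a plain GROUP BY cannot express in one pass.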
Logo of 街口電子支付股份有限公司.
The Data Director leads the data application department, including the data analytics and data engineering teams. The role requires both low-level data engineering capability and analytical decision-making skills, building the Data Platform foundation and moving the company toward more stable data infrastructure and data-driven growth. You will own the company-level data platform, governance, and analytics applications: able to dive into engineering details and drive data architecture optimization, while helping the business make decisions with data.

【Key Responsibilities】
1. Data platform and architecture
- Plan and maintain Data Lake / DWH / ETL / Streaming.
- Collaborate with engineering/architecture teams to ensure data systems are highly available and scalable.
- Evaluate data flows, schemas, and event systems, and propose optimizations.
2. Data engineering and pipeline management
- Oversee data cleaning, transformation, and integration processes, ensuring quality and performance.
- Familiar with Spark, Kafka, Airflow, and similar technologies; able to review design soundness.
- Establish Data Ops practices (pipeline monitoring, version control, CI/CD).
3. Analytics and decision support
- Build metrics, dashboards, and BI tools to support product/operations/marketing decisions.
- Drive A/B tests, data experiments, and behavioral analysis models.
- Ensure consistent analysis methods and metric definitions.
4. Data governance and cross-departmental work
- Establish data definitions, access control, and quality management processes.
- Coordinate cross-team requirements, keeping strategy and technical execution aligned.
5. Team management
- Manage Data Eng / Analyst / Scientist / BI teams.
- Define the team capability matrix, technical standards, and development roadmap.
Negotiable
10 years of experience required
Manages 5-10 staff
Logo of MoMo.
At MoMo, we are not just processing transactions; we are shaping the future of finance in Vietnam. As a Data Analyst Trainee in the Corporate Data Office (CDO), this role is an opportunity for you to learn by doing. You will be guided by experienced data professionals, explore how analytics supports real business decisions, and gradually gain hands-on experience in building data products, from raw data to insights and reusable data assets, used across MoMo's products and internal platforms.

Job Description
- Learn and work with semantic / metrics layers (e.g. semantic models, metrics definitions, dimensions) to support consistency across dashboards and analyses.
- Build and maintain automated dashboards to monitor key performance metrics, with guidance from the team.
- Analyze datasets to generate insights that support business and product decision making.
- Develop an understanding of the "Data as a Product" mindset, contributing to data solutions that are reliable, well-documented, and reusable.
- Gain hands-on experience with workflow orchestration tools such as Airflow, n8n, or similar platforms.
- Collaborate with internal teams to understand business needs and support the delivery of data solutions.
- Support cross-functional projects by contributing analytical insights and data foundations.

Job Requirements
- Final-year student or fresh graduate in Data, Computer Science, Statistics, Economics, or related fields
- Strong analytical mindset to support data-driven decision making
- Fast learner with high learning agility, eager to pick up new data tools and concepts
- Data product mindset: build reliable, reusable datasets beyond one-off reports
- Basic proficiency in SQL and comfort working with large datasets
- Clear communication skills (English & Vietnamese) and ability to collaborate cross-functionally
No relevant work experience required
Logo of MoMo.
The MoMo Recommendation Platform is a complex system that powers personalized experiences for millions of users using a diverse range of technologies. We're looking for a Senior Software Engineer with strong system thinking, architecture design skills, and a product mindset to help build the MLOps platform that transforms AI/ML solutions into production-grade systems at scale.

Job Description
- Think like a product engineer: you don't just "code a solution", you build a platform that empowers others to deliver intelligent systems
- Design and develop a flexible platform that turns AI/ML solutions into production-ready systems: microservices, batch pipelines, or real-time APIs
- Build infrastructure to support: model training pipelines, packaging & deployment, serving & rollout, monitoring & alerting
- Collaborate closely with Data Scientists, Business, and Product teams to deeply understand requirements and design adaptable, scalable solutions
- Integrate platform components into MoMo's broader infrastructure: promotion engine, A/B testing, analytics, real-time scoring, etc.

Job Requirements
Must-Have
- 5+ years of experience in software development, system architecture, or backend/platform engineering
- Proficiency in one or more of the following: Python, Bash, C++, JavaScript, Java, or Go
- Strong problem-solving skills and teamwork spirit
- Experience with:
  - Platform & Deployment: Kubernetes, Helm, Argo CD, Argo Rollouts, Docker, Google Cloud Platform (GCP) or Amazon Web Services (AWS)
  - Serving & APIs: FastAPI, gRPC, MLflow, KServe, custom logic services, REST APIs
  - Data & Messaging: BigQuery, Redis, MongoDB, PostgreSQL, Oracle, MySQL, Kafka, Pub/Sub
  - Orchestration & Workflow: Airflow, Argo Workflows
  - CI/CD & Monitoring: GitHub Actions, Prometheus, Grafana
  - Data Sources: app event streams, relational databases, messaging systems, APIs
- Solid understanding of distributed systems and cloud-native architecture
- Ability to design systems that support diverse solution types
- Platform mindset: you build for stability, scalability, and long-term maintainability
- Strong communication and collaboration skills: able to work cross-functionally with Data Scientists, DevOps, and Product teams

Nice-to-Have
- Experience working with AI/ML systems
- Experience scaling low-latency / real-time systems
- Familiarity with A/B testing, canary release, and shadow deployment strategies
- Product-oriented mindset: you build systems that others can easily adopt and extend
No relevant work experience required
Logo of MoMo.
MoMo is the market leader in mobile payments in Vietnam, driven by a commitment to enhancing the lives of Vietnamese citizens through technological innovation. Within the MoMo BigData & AI department, we prioritize Smart, Efficient, and Excellent execution. We are currently undergoing a major transformation to build a new hybrid data platform spanning multiple cloud vendors (GCP & AWS). We are seeking an experienced Data Engineer to help us architect this platform to optimize for both budget control and technological flexibility. You will play a pivotal role in shifting our mindset from "managing data" to creating valuable Data Products that empower our internal consumers.

Job Description
With MoMo's AI-first mission, we are designing and building a self-serve data platform to empower both internal teams and external partners. This platform allocates resources based on users' needs to support:
- Ingesting data from diverse sources, either in batch or streaming, using both pull and push mechanisms
- Developing and deploying resilient data pipelines across the data lake, data warehouse, and streaming systems
- Delivering high-quality, derived datasets to downstream tools such as BI solutions (e.g., Apache Superset, Looker Data Studio), via multiple delivery methods including APIs, datasets, and streaming data
- Monitoring data quality throughout all data pipelines in the platform to ensure high-quality data, resulting in better decision-making, accurate reporting, and reliable machine learning outputs
- Tracking and optimising resource usage for efficiency

Additionally, we are building Data Management Systems that enable the Data Governance team and data consumers to:
- Manage the full data lifecycle within the big data platform
- Explore the MoMo data ecosystem independently
- Provide a single source of truth with high data quality to downstream consumers
- Track and manage infrastructure costs across major projects, teams, and departments

Job Requirements
The Mindset
- Passion for data: you dream in SQL ("SELECT COUNT(SHEEP)...") and care deeply about data accuracy.
- Product thinking: you view data as a product, focusing on the usability and reliability of what you deliver to stakeholders.
The Tech Stack
- Strong coding skills: proficiency in Java/Kotlin (for robust backend services) and Python (for data processing/scripting).
- Hybrid cloud infrastructure: hands-on experience with GCP; proficiency in Kubernetes, Docker, and IaC tools like Pulumi or Terraform.
- Big data engines: deep understanding of computing engines like Spark, Trino, BigQuery, and ClickHouse.
- Orchestration: experience building DAGs and workflows in Airflow or Temporal.
- Data sources: familiarity with diverse sources including app events, CDC from transactional DBs (Oracle, MySQL, MSSQL), and streaming systems (Kafka, PubSub).
Soft Skills
- Strong problem-solving abilities with a focus on root-cause analysis.
- Collaborative spirit: you can explain complex infrastructure decisions to non-technical stakeholders.
No relevant work experience required
Logo of Google.
Google welcomes people with disabilities.

Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 5 years of experience using Python (Pandas, NumPy) or Java to develop data processing tools or automation scripts.
- Experience in managing data workflows using tools like Airflow, dbt, or Prefect.
- Experience in building and querying data within BigQuery, Snowflake, or Redshift environments.
- Experience in developing operational dashboards using Looker, Tableau, or Power BI.

Preferred qualifications:
- Experience working with semiconductor manufacturing data or large-scale industrial datasets.
- Ability to manage complex data exchanges and integration workflows with external foundry or assembly partners.
- Ability to identify manufacturing anomalies and to explain architectures to non-technical stakeholders.

About the job
A problem isn't truly solved until it's solved for all. That's why Googlers build products that help create opportunities for everyone, whether down the street or across the globe. As a Program Manager at Google, you'll lead complex, multi-disciplinary projects from start to finish, working with stakeholders to plan requirements, manage project schedules, identify risks, and communicate clearly with cross-functional partners across the company. Your projects will often span offices, time zones, and hemispheres. It's your job to coordinate the players and keep them up to date on progress and deadlines. As a Machine Learning Data and Analytics Engineer, you will be the architect of manufacturing intelligence. You will design, build, and maintain the data infrastructure that transforms fragmented information from global partners into a cohesive, high-performance data ecosystem. Your work will directly enable the operations team to monitor production health, optimize yields, and make data-driven decisions in real time. The Data Center team designs and operates some of the most sophisticated electrical and HVAC systems in the world. We are an upbeat, creative, team-oriented group of engineers committed to building and operating powerful data centers.

Responsibilities
- Design and deploy scalable pipelines to manage high-volume manufacturing data, including wafer maps, test results, and quality reports.
- Build automated tools to clean and normalize disparate data formats from foundry and assembly partners, ensuring a single source of truth.
- Create and maintain intuitive visualizations and high-impact dashboards to monitor critical KPIs and production health metrics.
- Develop and optimize data schemas that support high-speed ingestion and complex investigative querying for real-time decision-making.
- Partner with Operations and Engineering teams to translate business requirements into technical solutions while ensuring platform reliability and performance.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Negotiable
No relevant work experience required
Logo of 展旺數位有限公司.
・Collaborate with product, risk control, and operations teams to define data models and monitoring metrics for business needs
・Build ETL processes to collect and clean behavioral data from the platform (e.g., betting records, chip conversion, click behavior)
・Develop and maintain anomaly detection models (e.g., offsetting-bet abuse, bot behavior, arbitrage users)
・Use machine learning or statistical models to predict player retention, LTV, and churn risk
・Design risk control strategies to strengthen the platform's control over financial and behavioral risk
・Produce regular analysis reports with actionable product or operations recommendations
Spark
Redshift
Python
50K ~ 120K TWD / month
3 years of experience required
Number of staff managed: unknown
Logo of OpenNet 開網有限公司.
- Set up and execute extract, transform, and load (ETL) functions to build a data pipeline.
- Extract and analyze large data sets from MySQL.
- Tune performance for current and newly added database queries, ensuring database resources are fully utilized.
- Deliver clear analysis and reporting of core business metrics to shareholders.
- Create and manage reports and dashboards.
- Data management.
- Enhance and optimize existing reporting processes.
- Ad hoc analysis and reporting for clients and shareholders.
- Help reconfigure the existing architecture and database structure to address our shareholders' evolving needs.
- Daily maintenance and monitoring of all BI-related databases and dashboards, including proficient handling of emergencies.
- Provide actionable insight to drive the growth of core products.

Our Stack
- MySQL (Must)
- Python (Good to have)
- Airflow (Good to have)
- AWS (Good to have)
- Metabase (Good to have)
- Redshift (Good to have)
- Linux (Good to have)
910K ~ 1.8M TWD / year
3 years of experience required
No management responsibility
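The extract-transform-load flow described in the posting above can be sketched in miniature. This example uses Python's built-in sqlite3 as a stand-in for the MySQL stack the posting names; the orders and metrics tables and the metric name are hypothetical illustrations, not part of the role.

```python
import sqlite3

# Minimal ETL sketch: extract raw rows, transform (filter + aggregate),
# load the result into a table a dashboard could read.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.0, "paid"), (2, 80.0, "refunded"), (3, 200.0, "paid")],
)

# Extract: pull only completed orders.
raw = conn.execute("SELECT id, amount FROM orders WHERE status = 'paid'").fetchall()

# Transform: compute a single core business metric (total paid revenue).
total_paid = sum(amount for _id, amount in raw)

# Load: write the metric where a reporting tool would pick it up.
conn.execute("CREATE TABLE metrics (name TEXT, value REAL)")
conn.execute("INSERT INTO metrics VALUES (?, ?)", ("total_paid_revenue", total_paid))

print(conn.execute("SELECT name, value FROM metrics").fetchone())
```

In a production pipeline each of these three steps would typically be a separate, scheduled task (e.g., an Airflow DAG), but the shape of the work is the same.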
Logo of 新加坡商鈦坦科技.
【Job Responsibilities】
・Own the design, scalability, and reliability of the company's data platform and pipelines.
・Define and implement best practices for data modeling, orchestration, and data warehouse architecture.
・Work closely with Product, data scientists, and Infra teams to define data strategy.
・Initiate the adoption of MLOps practices: design and set up the framework for deploying, monitoring, and scaling ML/DL models in production.
【Skills】
■ Must-have
・Proficient in SQL and Python.
・Proven experience in designing and maintaining large-scale data pipelines and data warehouses (GCP preferred).
・Strong knowledge of schema design, structured/unstructured data handling, and performance optimization.
・Solid understanding of CI/CD and version control (Git).
・Familiarity with Unix/Linux environments.
・Experience with workflow orchestration tools (e.g., Airflow, dbt, Prefect).
・Experience in defining architecture and technical direction for data platforms.
■ Nice-to-have
・Hands-on experience with MLOps practices and tools (e.g., MLflow, Kubeflow).
・Exposure to real-time/streaming data pipelines (e.g., Kafka, Pub/Sub).
・Experience with Kubernetes for scalable data and ML workloads.
Python
MS SQL
Git
70K+ TWD / month
3 years of experience required
No management responsibility

Find Jobs on Cake

Join Cake now! Search tens of thousands of job openings to find your dream job.