Cake Job Search

Mid-Senior level
Pinkoi
【About the Pinkoi Data Squad】 We are Pinkoi's data team. Our goal is to build an effective data platform while ensuring data quality and stability, so that data delivers maximum value at Pinkoi. As a Data Engineer you will work closely with Business Analysts and ML Engineers: exploring problems through data, planning and implementing the resulting data pipelines, and delivering end-to-end data solutions that expand the analytics capabilities of every department and round out the company's business intelligence systems.

What you will do
- Work with analysts and business teams in product, marketing, and advertising to understand requirements and plan suitable data pipelines.
- Design, implement, and maintain ETL/ELT processes, ensuring data quality, stability, and freshness.
- Build and optimize data models to support reporting, analytics, machine learning, and other use cases.
- Continuously iterate on and improve the data infrastructure for better scalability and operational efficiency.
- Set up monitoring and alerting to guarantee pipeline success rates and data SLAs.

Experience and traits we hope you have
- Strong enthusiasm for big data and data-driven engineering.
- Understanding of ETL or ELT processes, with data modeling and data pipeline development skills.
- Experience collaborating with product/marketing teams, and the ability to work alongside business analysts, data engineers, and machine learning engineers.
- Eagerness to learn the latest technologies and keep sharpening your professional skills.
- Openness and curiosity toward large language models (LLMs), and willingness to learn and apply them to data engineering work (e.g., data cleaning, field completion, documentation generation, automated testing) to explore how new tools can improve efficiency and user experience.

Requirements
- Degree in mathematics, statistics, computer science, data science, or a related field, or equivalent professional ability.
- 1+ years of data analytics or data engineering experience.
- Proficiency in any one programming language such as Python, Scala, or Java; Pinkoi mainly uses Python.
- Familiarity with GNU/Linux systems; Pinkoi uses Ubuntu.
- Familiarity with relational databases, e.g. MySQL.

Nice to have
- Familiarity with ad-attribution logic, or related data-processing experience.
- Experience processing user-behavior data.
- Experience operating or setting up BI tools such as Superset, Tableau, or Looker Studio; Pinkoi uses Superset and Redash.
- Knowledge of distributed computing, e.g. one or more of Spark, Hadoop, Hive; Pinkoi mainly uses PySpark.
- Familiarity with cloud services such as AWS Athena, S3, Glue, or GCP BigQuery; Pinkoi mainly uses AWS.
- Understanding of dbt fundamentals and usage, such as building data models, writing tests, and creating macros.
- Experience applying LLMs to data engineering scenarios, e.g. rule extraction, field alignment and completion, documentation/data-catalog generation, data-quality checks.

#Don't overthink it, come be a Pinkoi person! If you are confident you can do this job, send us your resume and the take-home assignment and apply now!
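The ETL steps this listing describes (extract, clean and validate, load, then check freshness and quality) can be sketched in plain Python. This is a hypothetical, stdlib-only illustration: the table name, fields, and sample rows are invented, and a real pipeline at this scale would use PySpark and a scheduler rather than in-process SQLite.

```python
import sqlite3

# Hypothetical ETL sketch: extract -> transform -> load, stdlib only.
# Field names and sample data are invented for illustration.

def extract():
    # Stand-in for reading raw order events from a source system.
    return [
        {"order_id": "A1", "amount_twd": "1200", "status": "paid"},
        {"order_id": "A2", "amount_twd": "800", "status": "refunded"},
        {"order_id": "A3", "amount_twd": "oops", "status": "paid"},
    ]

def transform(rows):
    # Enforce basic data quality: valid types, only paid orders survive.
    clean = []
    for row in rows:
        try:
            amount = int(row["amount_twd"])
        except ValueError:
            continue  # a real pipeline would log and alert on bad rows
        if row["status"] == "paid":
            clean.append((row["order_id"], amount))
    return clean

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS paid_orders (order_id TEXT, amount_twd INTEGER)"
    )
    conn.executemany("INSERT INTO paid_orders VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount_twd) FROM paid_orders").fetchone()[0]
print(total)  # 1200: only the valid, paid order survives the transform
```

The same extract/transform/load shape carries over directly to PySpark DataFrames, with the quality checks becoming filter expressions and the monitoring step a row-count or freshness assertion after the load.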
Negotiable
No requirement for relevant working experience
Google
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Gurugram, Haryana, India; Hyderabad, Telangana, India; Pune, Maharashtra, India.

Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience.
- 6 years of experience in building Machine Learning or Data Science solutions.
- Experience with writing software in Python, Scala, or R.
- Experience with data structures, algorithms, and software design.
- Ability to travel up to 30% of the time as required.

Preferred qualifications:
- Experience in working with recommendation engines, data pipelines, or distributed machine learning, with data analytics, data visualization techniques and software, and deep learning frameworks.
- Experience with Data Science techniques.
- Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, Extract, Transform, and Load / Extract, Load, and Transform (ETL/ELT), and reporting/investigative tools and environments.
- Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks.
- Excellent communication skills.

About the job
The Google Cloud team helps companies, schools, and government seamlessly make the switch to Google products and supports them along the way. You listen to the customer and swiftly problem-solve technical issues to show how our products can make businesses more productive, collaborative, and innovative. You work closely with a cross-functional team of web developers and systems administrators, not to mention a variety of both regional and international customers. Your relationships with customers are crucial in helping Google grow its Cloud business and helping companies around the world innovate.
In this role, you will play a key part in understanding customer needs and help shape how businesses of all sizes use technology to connect with customers, employees, and partners. Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Deliver big data and machine learning solutions and solve technical customer challenges.
- Serve as a trusted technical advisor to Google's customers.
- Identify new product features and feature gaps, provide guidance on existing product challenges, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform.
- Deliver recommendations, tutorials, blog articles, and technical presentations, adapting to different levels of business and technical stakeholders.
- Drive Artificial Intelligence (AI) engagement with thought leadership.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Netskope Taiwan
About the role
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems, providing our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced in building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. This is a hands-on, impactful role that will help lead the development, validation, publishing, and maintenance of the logical and physical data models that support various OLTP and analytics environments.
Scala
Golang
Python
2.4M+ TWD / year
12 years of experience required
No management responsibility
Cake Recruitment Consulting
【Company Description】 We are an innovative company focused on fintech applications and data-driven decision platforms, dedicated to building user-centric intelligent data services and content systems. Our work spans digital media, financial analysis platforms, and business solutions. We continue to invest in big data and AI, building efficient, scalable data infrastructure, and are actively seeking a Senior Big Data Engineer to strengthen our data platform and real-time computing capabilities.

【Role Highlights】
✅ Core team role, owning data-platform construction and architecture decisions
✅ Technical ownership, directly leading cross-department data engineering work
✅ Opportunity to use modern streaming and distributed computing stacks (Heron, Beam, Spark Streaming)
✅ An expanding team, and an excellent stage for future technical leaders

【Responsibilities】
🔹 Build and operate an enterprise-grade big data platform that serves as the backbone of data applications
🔹 Design and optimize distributed data-processing flows and ETL pipelines
🔹 Build a scalable central data-platform architecture supporting internal products and analytics needs
🔹 Lead and mentor the data engineering team, helping solve high-performance, high-concurrency computing needs
🔹 Partner with data science and product teams to advance data products and real-time computing architecture

【Requirements】
✔ Bachelor's degree or above (computer science, engineering, or data-related majors preferred)
✔ Proficient in Scala and familiar with Hadoop or other MapReduce frameworks (required)
✔ Familiar with large-scale data-warehouse architecture and data-modeling concepts, with hands-on build experience
✔ Familiar with Apache Kafka and real-time stream-processing technologies such as Beam, Spark Streaming, Heron
✔ Independent problem-solving ability; technical team-leadership experience is a plus
✔ Strong interest in building data platforms, with an emphasis on data quality and system stability
----------------------------------------------------------------------
If you are interested in this position, please get in touch right away and let me tell you more!
▶︎ Contact: Recruitment Consultant Owen Wu
▶︎ LINE: https://line.me/ti/p/ICjS8Jx0Dq
▶︎ LinkedIn: https://www.linkedin.com/in/owen-wu-1023a41a4/
Local data storage and management
Spark
Scala
1.5M ~ 2.5M TWD / year
3 years of experience required
Managing 1-5 staff
TecAlliance Vietnam Company Limited
Job Definition
- Design and implementation of software functions following the agreed design patterns in an agile environment
- Front- and back-end development
- Performing code reviews and assisting with the further development of the continuous-integration environment
Fluent in English
RESTful API Design and Development
database technologies
1 ~ 55M VND / month
1 year of experience required
Managing staff numbers: not specified
Agoda
Get to Know our Team
In Agoda's Back-End Engineering department, we build scalable, fault-tolerant systems and APIs that host our core business logic. Our systems cover all major areas of our business: inventory and pricing, product information, customer data, communications, partner data, booking systems, payments, and more. We employ state-of-the-art CI/CD and testing techniques to ensure everything works without downtime. Our systems are self-healing, responding gracefully to extreme loads or unexpected input. We use modern languages like Kotlin and Scala, data technologies such as Kafka, Spark, MLflow, Kubeflow, VastStorage, and StarRocks, and agile development practices. Most importantly, we hire great people from around the world and empower them to be successful.

The Opportunity
The Agoda Platform team is looking for developers to work on mission-critical systems that serve millions of users daily. You will have the chance to work on innovative projects using cutting-edge technologies and make a significant impact on our business and the travel industry.

Engineering Culture at Agoda Tech
Our engineering team works with cutting-edge technologies and an innovative engineering culture to ensure we create a seamless user experience. We now serve over a million customers daily, offering the best deals on our inventory of 4M+ properties! Our users love us, with ratings of 4.6/4.7, and we have been awarded Editor's Choice in the Google Play store. At Agoda, we emphasize data-driven and agile development approaches. We believe technical excellence is core to agility across the company, which is why we continuously invest in the best engineering practices.
Software engineer
Back-end programmer
Back-end developer
Cake Recruitment Consulting
🏢 Company Introduction
Our client is Taiwan's largest and most influential retail conglomerate, actively expanding a diversified e-commerce platform ecosystem. As a benchmark enterprise in Taiwanese retail, the company continues to lead industry innovation; in recent years it has driven a comprehensive digital transformation, integrating online and offline (O2O) resources to build Taiwan's largest omnichannel retail platform.

Why join us?
- Industry leadership: Taiwan's retail leader, with a solid foundation for growth and vast data assets
- Digital-transformation pioneer: fully invested in group-wide e-commerce integration and digital upgrades; a great time to take part in the retail-tech revolution
- Excellent career development: in the digital economy, your expertise will have a direct, visible impact on millions of consumers and on enterprise revenue

💼 What you will do
As a Senior Data Scientist, you will play a core role on the group's Customer Data Platform (CDP) team, integrating, analyzing, and applying massive consumer data from the group's e-commerce platforms and physical stores to build AI-driven personalized experiences that increase customer loyalty and revenue.

Core responsibilities:
CDP and data strategy
- Design forward-looking CDP data models that seamlessly integrate diverse data sources: website, app, social, and physical stores
- Build the data architecture that drives marketing automation, connecting the full consumer journey
- Define key performance indicators (KPIs) and objectives (OKRs), using causal inference to build user-lifecycle strategies
- Ensure cross-platform data consistency and quality to support group decision-making

AI models and algorithms
- Develop advanced recommender systems based on Transformer architectures or graph neural networks (GNNs)
- Build key models for user segmentation, churn prediction, and lifetime value (LTV) estimation
- Continuously iterate on model performance and accuracy with tools such as LightGBM and PyTorch
- Track experiment results with MLflow to keep models scalable and reproducible

Experiment design and business optimization
- Plan A/B tests and multi-armed bandit (MAB) experiments to scientifically validate marketing strategies and product features
- Analyze experiment results in depth, continuously iterating on data models and algorithms
- Use data to lift user retention, conversion rates, and overall revenue

Technical leadership and cross-team collaboration
- Work with engineering teams to streamline MLOps with Kubeflow and automate model deployment
- Collaborate closely with marketing, product, and business teams to drive data-driven innovation projects
- Present the quantified business impact of AI algorithms to senior management to drive decisions effectively
- Ensure data models comply with GDPR and other international privacy standards, strengthening data security and user privacy protection

💻 Technologies you will use and sharpen:
- Languages: Python, Java, Scala
- Big data processing: Apache Spark, Apache Kafka, Airflow
- Data storage: Snowflake, Delta Lake, BigQuery
- Machine learning: PyTorch, LightGBM, Transformer architectures, graph neural networks (GNNs)
- MLOps: MLflow, Kubeflow, Docker, Kubernetes
- Cloud services: AWS, GCP, Azure
- CDP tools: enterprise customer-data-platform architecture and integration tools
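The multi-armed bandit (MAB) experiments this listing mentions can be illustrated with a minimal epsilon-greedy sketch. Everything here is hypothetical: the variant names, the true conversion rates, and the traffic volume are invented, and a production system would work from logged impressions with a proper experimentation platform rather than a simulated loop.

```python
import random

# Minimal epsilon-greedy bandit sketch over three hypothetical promo variants.
# True conversion rates are made up for illustration only.
random.seed(0)
true_rates = {"variant_a": 0.05, "variant_b": 0.12, "variant_c": 0.08}
counts = {v: 0 for v in true_rates}
successes = {v: 0 for v in true_rates}
EPSILON = 0.1  # fraction of traffic reserved for exploration

def choose():
    if random.random() < EPSILON:
        return random.choice(list(true_rates))  # explore uniformly
    # exploit: pick the variant with the best observed conversion rate so far
    return max(true_rates, key=lambda v: successes[v] / counts[v] if counts[v] else 0.0)

for _ in range(5000):
    v = choose()
    counts[v] += 1
    if random.random() < true_rates[v]:  # simulated conversion
        successes[v] += 1

for v in true_rates:
    rate = successes[v] / counts[v] if counts[v] else 0.0
    print(v, counts[v], round(rate, 3))
```

Unlike a fixed-split A/B test, the bandit shifts traffic toward the better-performing variant while the experiment is still running, trading some statistical cleanliness for lower opportunity cost.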
CDP
Recommendation System
NLP
1.2M ~ 2M TWD / year
5 years of experience required
No management responsibility
Cake Recruitment Consulting
【Company Description】 We are a globally leading digital-asset trading and payment technology company, dedicated to building secure and efficient financial infrastructure. We are currently seeking a (Senior) Data Engineer to lead the development and optimization of rewards- and finance-related data projects, working with international teams to build an efficient, scalable data platform.

【Role Highlights】
✅ USD 60-80K per year, with competitive pay and room to grow
✅ An international team, working with top technical experts from around the world
✅ High-concurrency, real-time data-processing projects, building large-scale data infrastructure

【Responsibilities】
🔹 Data platform and pipeline development: design and maintain efficient data infrastructure, keeping data flows smooth and stable
🔹 Data warehouse construction and optimization: develop offline and real-time data warehouses for efficient storage and retrieval
🔹 Big-data processing and analytics: compute and transform data with Hadoop, Spark, and Flink
🔹 Cross-team collaboration: partner with finance, rewards, and engineering teams to deliver data solutions

【Requirements】
✔ 3-8 years of data platform / data pipeline engineering experience
✔ Familiar with big-data technologies: proficient in Hadoop, Spark, Flink
✔ Strong programming skills: proficient in at least one of Python, Scala, or Java
✔ Familiar with data-warehouse concepts, with experience developing offline / real-time data warehouses
✔ Airflow experience is a plus
✔ Fluent English communication, able to collaborate smoothly with international teams
----------------------------------------------------------------------
If you are interested in this position, please get in touch right away and let me tell you more!
▶︎ Contact: Recruitment Consultant Owen Wu
▶︎ LINE: https://line.me/ti/p/ICjS8Jx0Dq
▶︎ LinkedIn: https://www.linkedin.com/in/owen-wu-1023a41a4/
Spark
Flink
Hadoop
60K ~ 80K USD / year
3 years of experience required
No management responsibility
Cake Recruitment Consulting
🏢 Company Introduction
Our client is Taiwan's largest and most influential retail conglomerate, actively expanding a diversified e-commerce platform ecosystem. As a benchmark enterprise in Taiwanese retail, the company continues to lead industry innovation; in recent years it has driven a comprehensive digital transformation, integrating online and offline (O2O) resources to build Taiwan's largest omnichannel retail platform.

Why join us?
- Industry leadership: Taiwan's retail leader, with a solid foundation for growth and vast data assets
- Digital-transformation pioneer: fully invested in group-wide e-commerce integration and digital upgrades; a great time to take part in the retail-tech revolution
- Excellent career development: in the digital economy, your expertise will have a direct, visible impact on millions of consumers and on enterprise revenue

【Job Description】
You will be a core engineer on the company's data infrastructure, helping build a real-time data platform that supports millions of user-behavior and transaction records:
- CDP data architecture design: plan data models and integrate diverse sources to strengthen martech automation
- Real-time and batch processing: build stream-processing systems with Kafka and Flink; schedule efficient batch workflows with Airflow
- Data lake and query performance: apply Delta Lake and Snowflake to improve query speed and data-integration efficiency
- Data quality and monitoring: introduce data observability to ensure data consistency and traceability
- Cross-team collaboration and technical leadership: work closely with marketing, product, and data-science teams, supporting MLOps and model deployment
- Data compliance and governance: design GDPR- and CCPA-compliant data-management mechanisms, implementing the enterprise data-security strategy

【Technologies】
- Languages: Python / Java / Scala
- Real-time processing: Apache Kafka, Apache Flink
- Batch processing and scheduling: Apache Airflow
- Data platforms: Snowflake, Delta Lake, BigQuery
- Cloud platforms: AWS / GCP / Azure
- Other: Databricks, Kubernetes, Docker, CI/CD, MLOps
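The stream-processing side of this role (Kafka and Flink) centers on windowed aggregation: grouping an unbounded event stream into fixed time windows and computing per-key statistics. A toy tumbling-window count in pure Python illustrates the idea; the events and timestamps are invented, and a real Flink job would additionally handle event time, watermarks, and late-arriving data.

```python
from collections import defaultdict

# Toy tumbling-window aggregation: count events per user per 60-second window.
# Timestamps are hypothetical epoch seconds; data is invented for illustration.
WINDOW_SECONDS = 60

events = [
    {"user": "u1", "ts": 960},
    {"user": "u1", "ts": 1000},
    {"user": "u2", "ts": 1010},
    {"user": "u1", "ts": 1025},  # falls into the next 60-second window
]

windows = defaultdict(int)  # (window_start, user) -> event count
for e in events:
    # A tumbling window assigns each event to exactly one fixed-size bucket.
    window_start = (e["ts"] // WINDOW_SECONDS) * WINDOW_SECONDS
    windows[(window_start, e["user"])] += 1

for key in sorted(windows):
    print(key, windows[key])
# (960, 'u1') 2   (960, 'u2') 1   (1020, 'u1') 1
```

In Flink the same logic would be expressed declaratively (key by user, apply a 60-second tumbling event-time window, then count), with the framework handling distribution, state, and fault tolerance.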
Kotlin
Java
Python
1.2M ~ 2M TWD / year
5 years of experience required
Managing 5-10 staff
Foreign Professional Talent Recruitment in Taiwan
【Company Overview】 A global leader in digital asset exchange and payment technology, committed to building secure and efficient financial infrastructure. The company is now hiring a Senior Data Engineer to drive development and optimization of data systems for rewards and financial operations. This role involves close collaboration with international teams to build scalable and reliable data platforms.

【Role Highlights】
✅ Competitive salary (USD $60K–80K) with strong career growth
✅ Join a global team and work with top engineers from around the world
✅ Build high-throughput, real-time data infrastructure at scale

【Responsibilities】
🔹 Design and maintain stable, high-performance data platforms and pipelines
🔹 Develop and optimize offline and real-time data warehouses
🔹 Process and transform large data sets using Hadoop, Spark, or Flink
🔹 Collaborate cross-functionally with Finance, Rewards, and Engineering teams to deliver data-driven solutions

【Requirements】
✔ 3–8 years of experience in data platform or pipeline engineering
✔ Proficient in big data tools: Hadoop, Spark, Flink
✔ Strong programming skills in at least one of: Python, Scala, or Java
✔ Solid understanding of data warehousing and ETL (offline and real-time)
✔ Experience with Airflow is a plus
✔ Fluent in English with strong communication skills for working across global teams
----------------------------------------------------------------------
If you are interested in this position, please get in touch right away and let me tell you more!
▶︎ Contact: Recruitment Consultant Owen Wu
▶︎ LINE: https://line.me/ti/p/ICjS8Jx0Dq
▶︎ LinkedIn: https://www.linkedin.com/in/owen-wu-1023a41a4/
Spark
Hadoop
Flink
60K ~ 80K USD / year
3 years of experience required
No management responsibility
