Senior/Expert AI Data Engineer

    Core Responsibilities

    • Infrastructure & Platform: Lead the evolution of our Data Platform, focusing on scalability, reliability, and cost optimization.

    • Data Pipeline Development: Design and deploy sophisticated batch and streaming pipelines using Spark and Airflow.

    • Framework Architecture: Build robust libraries and frameworks for ingestion, transformation, and governance following clean architecture principles.

    • Data Governance: Partner with Data Architects to enforce data contracts, manage schema governance, and design scalable data models.

    • Lakehouse Optimization: Enhance performance in Delta, Iceberg, or Hudi environments through advanced partitioning, caching, and metadata management.

    • DevOps & Security: Implement CI/CD workflows, Infrastructure as Code (Terraform), and Kubernetes orchestration while ensuring strict adherence to IAM, encryption, and global data privacy standards (GDPR/CCPA).

    • AI & LLM Integration: Architect end-to-end ML/LLM pipelines, including RAG (Retrieval-Augmented Generation) frameworks, vector search, and embedding pipelines.

    Technical Requirements

    • Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical field.

    • Experience: 5+ years of professional experience in Data or Software Engineering.

    • Cloud Proficiency: Hands-on experience with major cloud providers (AWS, Azure, or GCP).

    • Programming: High proficiency in Python, Scala, or Java.

    • Systems Design: Strong background in complex, fault-tolerant distributed systems and multi-threading.

    • Big Data Stack: Expert knowledge of the Spark framework (SQL, DataFrames) and distributed computing/storage.

    • Collaboration: Excellent communication skills, a team-oriented mindset, and an open-minded approach to learning.

    Preferred Qualifications (Nice to Have)

    • Lakehouse Tech: Experience with Databricks (Delta Lake, Unity Catalog, Delta Live Tables).

    • Optimization: Proven track record in performance tuning for Big Data workloads (Spark/Flink).

    • Modern Tooling: Familiarity with dbt (data build tool).

    • Advanced AI: Practical experience with LLMOps, prompt engineering, and vector databases like ChromaDB.

    HOW TO APPLY: Please send your CV to the consultant in charge: 
    Ms. Tu Anh Duong
    Email: anh.duong@ev-search.com 


    All applications will be considered without regard to race, color, religion, sex (including pregnancy and gender identity), national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, parental status, military service, or other non-merit factor.

    Interested in this position?

    Get in touch with us now!
