Do you have experience building data pipelines, data products and services at the scale of a global organisation? Do you consider data quality, data governance and self-service data/tools essential to data management? Do you enjoy working on cutting-edge technologies while supporting end-users in achieving their business goals?
When you buy a sweater, toy or food product, there is a good chance that this product, or the factory that made it, has been inspected by QIMA. QIMA is a leading quality control and compliance provider that works with brands, distributors and importers around the world to secure, manage and optimise their global supply chain.
Founded in Hong Kong, QIMA today has 40 offices, more than 4,000 employees of 60 nationalities, and operates in 85 countries. After more than 10 years of operation, importers from more than 120 countries use QIMA, making it a leader in quality control services and technology.
In 2020 QIMA launched QimaOne, a collaborative platform that digitalizes quality and compliance management for international supply chains. It allows brands and distributors of consumer goods to connect their network of suppliers, control and improve product quality, increase the visibility and transparency of their supply chain, and reduce operational inefficiencies.
Data plays a key role at QIMA: we master the entire data flow, from data collection (with our own inspectors, auditors, and labs), through data processing (BI, data scientists, marketing teams, DBAs), to data actionability (insights for our customers and for our managers). Data and data expertise are spread across different departments within the company.
Reporting to the Head of Data, the data engineer role is highly transversal. You'll collaborate with different departments and teams across QIMA, and will need to be able to dive deep into data and technical topics while also understanding the bigger picture and business needs, and helping the QIMA data community grow.
The data engineer is responsible for understanding how our data can create business value for our customers or internal users, and for implementing the corresponding data projects.
In this role, your main responsibilities will include (but are not limited to):
Define, create and maintain data models, data pipelines, APIs, services and workflows, using scheduling tools where appropriate;
Support and maintain data engineering standards by driving collaborative reviews of design, code, tests and data publications performed by multiple teams across the organisation;
Ensure best practices across the organisation for data quality, data ingestion, data management and data publication and sharing;
Lead the data integration projects of newly acquired companies;
Contribute to the implementation of a company-wide Master Data Management system;
Participate in the company-wide effort of implementing data governance best practices and knowledge sharing across the data community;
Support data related needs of users across the organisation and train/onboard them to data tools.
To succeed in this role, you must have:
A solid academic background in computer science or a related technical degree;
More than 3 years of experience as a data professional;
Experience in ETL/data pipelines, data warehousing and data modelling, as well as in data reporting and analysis;
Experience in writing SQL queries, as well as an in-depth understanding of data patterns and modelling;
Experience in using databases, data warehouses and cloud storage systems;
Experience in object-oriented programming, writing clean and documented code (Java or Python);
Good communication skills and a willingness to train users on data products and services;
Fluency in English.
Nice to have (bonus points):
Experience with ETL and data orchestration services such as Matillion, Alteryx, Airflow, Argo, Meltano, dbt, Singer, Glue or Dataflow;
Experience with cloud data storage services such as Snowflake, RDS, Redshift, S3, BigQuery or Google Cloud Storage;
Experience with data dictionary tools such as Secoda or DataGalaxy;
Experience with DataOps, CI/CD solutions, Git (GitHub or GitLab), Terraform, Docker or Kubernetes;
Experience using cloud service providers such as AWS or GCP;
Experience with relational SQL and NoSQL databases such as Oracle, Postgres, MySQL, MS SQL Server, MongoDB or Athena;
Experience in building APIs and using services such as AWS Lambda, SNS/SQS, Google Cloud Functions, Pub/Sub, Kafka or Kinesis;
An understanding of statistics, machine learning models and AI.
Interview process:
Head of Data interview;
CTO/Director interview.
Do you believe you are a good fit?